Updates from: 08/24/2022 01:09:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Administration Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/administration-concepts.md
Title: Management concepts for Azure AD Domain Services | Microsoft Docs
description: Learn about how to administer an Azure Active Directory Domain Services managed domain and the behavior of user accounts and passwords -+
active-directory-domain-services Alert Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-ldaps.md
Title: Resolve secure LDAP alerts in Azure AD Domain Services | Microsoft Docs
description: Learn how to troubleshoot and resolve common alerts with secure LDAP for Azure Active Directory Domain Services. -+ ms.assetid: 81208c0b-8d41-4f65-be15-42119b1b5957
active-directory-domain-services Alert Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-nsg.md
Title: Resolve network security group alerts in Azure AD DS | Microsoft Docs
description: Learn how to troubleshoot and resolve network security group configuration alerts for Azure Active Directory Domain Services -+ ms.assetid: 95f970a7-5867-4108-a87e-471fa0910b8c
active-directory-domain-services Alert Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md
Title: Resolve service principal alerts in Azure AD Domain Services | Microsoft
description: Learn how to troubleshoot service principal configuration alerts for Azure Active Directory Domain Services -+ ms.assetid: f168870c-b43a-4dd6-a13f-5cfadc5edf2c
active-directory-domain-services Change Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/change-sku.md
Title: Change the SKU for an Azure AD Domain Services managed domain | Microsoft Docs
description: Learn how to change the SKU tier for an Azure AD Domain Services managed domain if your business requirements change -+
active-directory-domain-services Check Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/check-health.md
Title: Check the health of Azure Active Directory Domain Services | Microsoft Do
description: Learn how to check the health of an Azure Active Directory Domain Services (Azure AD DS) managed domain and understand status messages using the Azure portal. -+ ms.assetid: 8999eec3-f9da-40b3-997a-7a2587911e96
active-directory-domain-services Compare Identity Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/compare-identity-solutions.md
Title: Compare Active Directory-based services in Azure | Microsoft Docs
description: In this overview, you compare the different identity offerings for Active Directory Domain Services, Azure Active Directory, and Azure Active Directory Domain Services. -+
active-directory-domain-services Concepts Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-forest-trust.md
Title: How trusts work for Azure AD Domain Services | Microsoft Docs
description: Learn more about how forest trusts work with Azure AD Domain Services -+
active-directory-domain-services Concepts Migration Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-migration-benefits.md
Title: Benefits of Classic deployment migration in Azure AD Domain Services | Mi
description: Learn more about the benefits of migrating a Classic deployment of Azure Active Directory Domain Services to the Resource Manager deployment model -+
active-directory-domain-services Concepts Replica Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-replica-sets.md
Title: Replica sets concepts for Azure AD Domain Services | Microsoft Docs
description: Learn what replica sets are in Azure Active Directory Domain Services and how they provide redundancy to applications that require identity services. -+
active-directory-domain-services Concepts Resource Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-resource-forest.md
Title: Resource forest concepts for Azure AD Domain Services | Microsoft Docs
description: Learn what a resource forest is in Azure Active Directory Domain Services and how it benefits your organization in hybrid environments with limited user authentication options or security concerns. -+
active-directory-domain-services Create Gmsa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-gmsa.md
Title: Group managed service accounts for Azure AD Domain Services | Microsoft D
description: Learn how to create a group managed service account (gMSA) for use with Azure Active Directory Domain Services managed domains -+ ms.assetid: e6faeddd-ef9e-4e23-84d6-c9b3f7d16567
active-directory-domain-services Create Ou https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-ou.md
Title: Create an organizational unit (OU) in Azure AD Domain Services | Microsof
description: Learn how to create and manage a custom Organizational Unit (OU) in an Azure AD Domain Services managed domain. -+ ms.assetid: 52602ad8-2b93-4082-8487-427bdcfa8126
active-directory-domain-services Create Resource Forest Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-resource-forest-powershell.md
Title: Create an Azure AD Domain Services resource forest using Azure PowerShell | Microsoft Docs description: In this article, learn how to create and configure an Azure Active Directory Domain Services resource forest and outbound forest trust to an on-premises Active Directory Domain Services environment using Azure PowerShell. -+
active-directory-domain-services Delete Aadds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/delete-aadds.md
Title: Delete Azure Active Directory Domain Services | Microsoft Docs
description: Learn how to disable, or delete, an Azure Active Directory Domain Services managed domain using the Azure portal -+ ms.assetid: 89e407e1-e1e0-49d1-8b89-de11484eee46
active-directory-domain-services Deploy Azure App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-azure-app-proxy.md
Title: Deploy Azure AD Application Proxy for Azure AD Domain Services | Microsof
description: Learn how to provide secure access to internal applications for remote workers by deploying and configuring Azure Active Directory Application Proxy in an Azure Active Directory Domain Services managed domain -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d
active-directory-domain-services Deploy Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-kcd.md
Title: Kerberos constrained delegation for Azure AD Domain Services | Microsoft
description: Learn how to enable resource-based Kerberos constrained delegation (KCD) in an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d
active-directory-domain-services Deploy Sp Profile Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-sp-profile-sync.md
Title: Enable SharePoint User Profile service with Azure AD DS | Microsoft Docs
description: Learn how to configure an Azure Active Directory Domain Services managed domain to support profile synchronization for SharePoint Server -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d
active-directory-domain-services Fleet Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/fleet-metrics.md
+
+ Title: Check fleet metrics of Azure Active Directory Domain Services | Microsoft Docs
+description: Learn how to check fleet metrics of an Azure Active Directory Domain Services (Azure AD DS) managed domain.
++++
+ms.assetid: 8999eec3-f9da-40b3-997a-7a2587911e96
++++ Last updated : 08/16/2022+++
+# Check fleet metrics of Azure Active Directory Domain Services
+
+Administrators can use Azure Monitor Metrics to configure a scope for Azure Active Directory Domain Services (Azure AD DS) and gain insights into how the service is performing.
+You can access Azure AD DS metrics from two places:
+
+- In Azure Monitor Metrics, click **New chart** > **Select a scope** and select the Azure AD DS instance:
+
+ :::image type="content" border="true" source="media/fleet-metrics/select.png" alt-text="Screenshot of how to select Azure AD DS for fleet metrics.":::
+
+- In Azure AD DS, under **Monitoring**, click **Metrics**:
+
+ :::image type="content" border="true" source="media/fleet-metrics/metrics-scope.png" alt-text="Screenshot of how to select Azure AD DS as scope in Azure Monitor Metrics.":::
+
+ The following screenshot shows how to select combined metrics for Total Processor Time and LDAP searches:
+
+ :::image type="content" border="true" source="media/fleet-metrics/combined-metrics.png" alt-text="Screenshot of combined metrics in Azure Monitor Metrics.":::
+
+ You can also view metrics for a fleet of Azure AD DS instances:
+
+ :::image type="content" border="true" source="media/fleet-metrics/metrics-instance.png" alt-text="Screenshot of how to select an Azure AD DS instance as the scope for fleet metrics.":::
+
+ The following screenshot shows combined metrics for Total Processor Time, DNS Queries, and LDAP searches by role instance:
+
+ :::image type="content" border="true" source="media/fleet-metrics/combined-metrics-instance.png" alt-text="Screenshot of combined metrics for an Azure AD DS instance.":::
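If you prefer to pull the same data programmatically, the following PowerShell lines are a minimal sketch using the Az.Monitor cmdlets. The resource ID path (assumed to be a Microsoft.AAD/domainServices resource) and the exact metric name string are assumptions to confirm against your own managed domain.

```powershell
# Requires the Az.Monitor module and a signed-in Azure context (Connect-AzAccount).
# Placeholder resource ID for the managed domain.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AAD/domainServices/<managed-domain-name>"

# List the metric definitions exposed by the managed domain.
Get-AzMetricDefinition -ResourceId $resourceId

# Retrieve the last hour of Total Processor Time, averaged over 5-minute intervals.
Get-AzMetric -ResourceId $resourceId -MetricName "Total Processor Time" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -TimeGrain 00:05:00 -AggregationType Average
```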
+
+## Metrics definitions and descriptions
+
+You can select a metric for more details about the data collection.
++
+The following table describes the metrics that are available for Azure AD DS.
+
+| Metric | Description |
+|--|-|
+|DNS - Total Query Received/sec |The average number of queries received by the DNS server each second. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+|Total Response Sent/sec |The average number of responses sent by the DNS server each second. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+|NTDS - LDAP Successful Binds/sec|The number of successful LDAP binds per second for the NTDS object. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+|% Committed Bytes In Use |The ratio of Memory\Committed Bytes to the Memory\Commit Limit. Committed memory is the physical memory in use for which space has been reserved in the paging file should it need to be written to disk. The commit limit is determined by the size of the paging file. If the paging file is enlarged, the commit limit increases, and the ratio is reduced. This counter displays the current percentage value only; it isn't an average. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+|Total Processor Time |The percentage of elapsed time that the processor spends executing a non-idle thread. It's calculated by measuring the percentage of time that the processor spends executing the idle thread and then subtracting that value from 100%. (Each processor has an idle thread that consumes cycles when no other threads are ready to run.) This counter is the primary indicator of processor activity, and displays the average percentage of busy time observed during the sample interval. The accounting of whether the processor is idle is performed at an internal sampling interval of the system clock (10 ms), so on today's fast processors % Processor Time can underestimate processor utilization, because the processor may spend much of its time servicing threads between system clock samples. Workload-based timer applications are one type of application that is more likely to be measured inaccurately, because timers are signaled just after the sample is taken. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+|Kerberos Authentications |The number of times that clients use a ticket to authenticate to this computer per second. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+|NTLM Authentications|The number of NTLM authentications processed per second for the Active Directory on this domain controller or for local accounts on this member server. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+|% Processor Time (dns)|The percentage of elapsed time that all threads of the dns process used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+|% Processor Time (lsass)|The percentage of elapsed time that all threads of the lsass process used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+|NTDS - LDAP Searches/sec |The average number of searches per second for the NTDS object. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.|
+
+## Azure Monitor alert
+
+You can configure metric alerts for Azure AD DS to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](/azure/azure-monitor/alerts/alerts-overview).
+
+To view and manage Azure Monitor alerts, a user needs to be assigned the appropriate [Azure Monitor roles](/azure/azure-monitor/roles-permissions-security).
+
+In Azure Monitor or Azure AD DS Metrics, click **New alert** and configure an Azure AD DS instance as the scope. Then choose the metrics you want to measure from the list of available signals:
+
+ :::image type="content" border="true" source="media/fleet-metrics/available-alerts.png" alt-text="Screenshot of available alerts.":::
+
+The following screenshot shows how to define a metric alert with a threshold for **Total Processor Time**:
++
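A metric alert like the one shown can also be created with the Az.Monitor PowerShell cmdlets. This is a minimal sketch only: the resource ID, action group ID, and the 70% threshold are placeholder assumptions, and the metric name should match what the portal lists for your managed domain.

```powershell
# Placeholder IDs for the managed domain and an existing action group.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AAD/domainServices/<managed-domain-name>"
$actionGroupId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group-name>"

# Alert when the average Total Processor Time exceeds 70% over a 5-minute window.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Total Processor Time" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 70

Add-AzMetricAlertRuleV2 -Name "aadds-high-processor-time" -ResourceGroupName "<resource-group>" `
    -TargetResourceId $resourceId -Condition $criteria `
    -WindowSize 00:05:00 -Frequency 00:05:00 -Severity 2 -ActionGroupId $actionGroupId
```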
+You can also configure an alert notification, which can be email, SMS, or voice call:
++
+The following screenshot shows a metrics alert triggered for **Total Processor Time**:
++
+In this case, an email notification is sent after an alert activation:
++
+Another email notification is sent after deactivation of the alert:
++
+## Select multiple resources
+
+You can upvote the feature request to enable multiple resource selection, which would make it possible to correlate data between resource types.
++
+## Next steps
+
+- [Check the health of an Azure Active Directory Domain Services managed domain](check-health.md)
active-directory-domain-services How To Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/how-to-data-retrieval.md
Title: Instructions for data retrieval from Azure Active Directory Domain Servic
description: Learn how to retrieve data from Azure Active Directory Domain Services (Azure AD DS). -+
active-directory-domain-services Join Centos Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-centos-linux-vm.md
Title: Join a CentOS VM to Azure AD Domain Services | Microsoft Docs
description: Learn how to configure and join a CentOS Linux virtual machine to an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 16100caa-f209-4cb0-86d3-9e218aeb51c6
active-directory-domain-services Join Coreos Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-coreos-linux-vm.md
Title: Join a CoreOS VM to Azure AD Domain Services | Microsoft Docs
description: Learn how to configure and join a CoreOS virtual machine to an Azure AD Domain Services managed domain. -+ ms.assetid: 5db65f30-bf69-4ea3-9ea5-add1db83fdb8
active-directory-domain-services Join Rhel Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-rhel-linux-vm.md
Title: Join a RHEL VM to Azure AD Domain Services | Microsoft Docs
description: Learn how to configure and join a Red Hat Enterprise Linux virtual machine to an Azure AD Domain Services managed domain. -+ ms.assetid: 16100caa-f209-4cb0-86d3-9e218aeb51c6
active-directory-domain-services Join Suse Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-suse-linux-vm.md
Title: Join a SLE VM to Azure AD Domain Services | Microsoft Docs
description: Learn how to configure and join a SUSE Linux Enterprise virtual machine to an Azure AD Domain Services managed domain. -+
active-directory-domain-services Join Ubuntu Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-ubuntu-linux-vm.md
Title: Join an Ubuntu VM to Azure AD Domain Services | Microsoft Docs
description: Learn how to configure and join an Ubuntu Linux virtual machine to an Azure AD Domain Services managed domain. -+ ms.assetid: 804438c4-51a1-497d-8ccc-5be775980203
active-directory-domain-services Join Windows Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm-template.md
Title: Use a template to join a Windows VM to Azure AD DS | Microsoft Docs
description: Learn how to use Azure Resource Manager templates to join a new or existing Windows Server VM to an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 4eabfd8e-5509-4acd-86b5-1318147fddb5
active-directory-domain-services Join Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm.md
Title: Join a Windows Server VM to an Azure AD Domain Services managed domain | Microsoft Docs description: In this tutorial, learn how to join a Windows Server virtual machine to an Azure Active Directory Domain Services managed domain. -+
active-directory-domain-services Manage Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-dns.md
Title: Manage DNS for Azure AD Domain Services | Microsoft Docs description: Learn how to install the DNS Server Tools to manage DNS and create conditional forwarders for an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d
active-directory-domain-services Manage Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-group-policy.md
Title: Create and manage group policy in Azure AD Domain Services | Microsoft Docs description: Learn how to edit the built-in group policy objects (GPOs) and create your own custom policies in an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d
active-directory-domain-services Migrate From Classic Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/migrate-from-classic-vnet.md
Title: Migrate Azure AD Domain Services from a Classic virtual network | Microsoft Docs description: Learn how to migrate an existing Azure AD Domain Services managed domain from the Classic virtual network model to a Resource Manager-based virtual network. -+
active-directory-domain-services Mismatched Tenant Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/mismatched-tenant-error.md
Title: Fix mismatched directory errors in Azure AD Domain Services | Microsoft D
description: Learn what a mismatched directory error means and how to resolve it in Azure AD Domain Services -+ ms.assetid: 40eb75b7-827e-4d30-af6c-ca3c2af915c7
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md
Title: Network planning and connections for Azure AD Domain Services | Microsoft
description: Learn about some of the virtual network design considerations and resources used for connectivity when you run Azure Active Directory Domain Services. -+
active-directory-domain-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/notifications.md
Title: Email notifications for Azure AD Domain Services | Microsoft Docs
description: Learn how to configure email notifications to alert you about issues in an Azure Active Directory Domain Services managed domain -+ ms.assetid: b9af1792-0b7f-4f3e-827a-9426cdb33ba6
active-directory-domain-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/overview.md
Title: Overview of Azure Active Directory Domain Services | Microsoft Docs
description: In this overview, learn what Azure Active Directory Domain Services provides and how to use it in your organization to provide identity services to applications and services in the cloud. -+
active-directory-domain-services Password Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/password-policy.md
Title: Create and use password policies in Azure AD Domain Services | Microsoft
description: Learn how and why to use fine-grained password policies to secure and control account passwords in an Azure AD DS managed domain. -+ ms.assetid: 1a14637e-b3d0-4fd9-ba7a-576b8df62ff2
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
Title: Enable Azure AD Domain Services using PowerShell | Microsoft Docs
description: Learn how to configure and enable Azure Active Directory Domain Services using Azure AD PowerShell and Azure PowerShell. -+ ms.assetid: d4bc5583-6537-4cd9-bc4b-7712fdd9272a
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md
Title: Scoped synchronization using PowerShell for Azure AD Domain Services | Mi
description: Learn how to use Azure AD PowerShell to configure scoped synchronization from Azure AD to an Azure Active Directory Domain Services managed domain -+
active-directory-domain-services Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scenarios.md
Title: Common deployment scenarios for Azure AD Domain Services | Microsoft Docs
description: Learn about some of the common scenarios and use-cases for Azure Active Directory Domain Services to provide value and meet business needs. -+ ms.assetid: c5216ec9-4c4f-4b7e-830b-9d70cf176b20
active-directory-domain-services Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scoped-synchronization.md
Title: Scoped synchronization for Azure AD Domain Services | Microsoft Docs
description: Learn how to use the Azure portal to configure scoped synchronization from Azure AD to an Azure Active Directory Domain Services managed domain -+ ms.assetid: 9389cf0f-0036-4b17-95da-80838edd2225
active-directory-domain-services Secure Remote Vm Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-remote-vm-access.md
Title: Secure remote VM access in Azure AD Domain Services | Microsoft Docs
description: Learn how to secure remote access to VMs using Network Policy Server (NPS) and Azure AD Multi-Factor Authentication with a Remote Desktop Services deployment in an Azure Active Directory Domain Services managed domain. -+
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md
Title: Secure Azure AD Domain Services | Microsoft Docs
description: Learn how to disable weak ciphers, old protocols, and NTLM password hash synchronization for an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 6b4665b5-4324-42ab-82c5-d36c01192c2a
active-directory-domain-services Security Audit Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/security-audit-events.md
Title: Enable security audits for Azure AD Domain Services | Microsoft Docs
description: Learn how to enable security audits to centralize the logging of events for analysis and alerts in Azure AD Domain Services -+ ms.assetid: 662362c3-1a5e-4e94-ae09-8e4254443697
active-directory-domain-services Suspension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/suspension.md
Title: Suspended domains in Azure AD Domain Services | Microsoft Docs
description: Learn about the different health states for an Azure AD DS managed domain and how to restore a suspended domain. -+ ms.assetid: 95e1d8da-60c7-4fc1-987d-f48fde56a8cb
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md
Title: How synchronization works in Azure AD Domain Services | Microsoft Docs
description: Learn how the synchronization process works for objects and credentials from an Azure AD tenant or on-premises Active Directory Domain Services environment to an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 57cbf436-fc1d-4bab-b991-7d25b6e987ef
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
Title: Enable Azure AD Domain Services using a template | Microsoft Docs
description: Learn how to configure and enable Azure Active Directory Domain Services using an Azure Resource Manager template -+
active-directory-domain-services Troubleshoot Account Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-account-lockout.md
Title: Troubleshoot account lockout in Azure AD Domain Services | Microsoft Docs
description: Learn how to troubleshoot common problems that cause user accounts to be locked out in Azure Active Directory Domain Services. -+
active-directory-domain-services Troubleshoot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-alerts.md
Title: Common alerts and resolutions in Azure AD Domain Services | Microsoft Doc
description: Learn how to resolve common alerts generated as part of the health status for Azure Active Directory Domain Services -+ ms.assetid: 54319292-6aa0-4a08-846b-e3c53ecca483
active-directory-domain-services Troubleshoot Domain Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-domain-join.md
Title: Troubleshoot domain-join with Azure AD Domain Services | Microsoft Docs
description: Learn how to troubleshoot common problems when you try to domain-join a VM or connect an application to Azure Active Directory Domain Services and you can't connect or authenticate to the managed domain. -+
active-directory-domain-services Troubleshoot Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-sign-in.md
Title: Troubleshoot sign in problems in Azure AD Domain Services | Microsoft Doc
description: Learn how to troubleshoot common user sign-in problems and errors in Azure Active Directory Domain Services. -+
active-directory-domain-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md
Title: Azure Active Directory Domain Services troubleshooting | Microsoft Docs
description: Learn how to troubleshoot common errors when you create or manage Azure Active Directory Domain Services -+ ms.assetid: 4bc8c604-f57c-4f28-9dac-8b9164a0cf0b
active-directory-domain-services Tshoot Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tshoot-ldaps.md
Title: Troubleshoot secure LDAP in Azure AD Domain Services | Microsoft Docs
description: Learn how to troubleshoot secure LDAP (LDAPS) for an Azure Active Directory Domain Services managed domain -+ ms.assetid: 445c60da-e115-447b-841d-96739975bdf6
active-directory-domain-services Tutorial Configure Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-ldaps.md
Title: Tutorial - Configure LDAPS for Azure Active Directory Domain Services | Microsoft Docs description: In this tutorial, you learn how to configure secure lightweight directory access protocol (LDAPS) for an Azure Active Directory Domain Services managed domain. -+
active-directory-domain-services Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-networking.md
Title: Tutorial - Configure virtual networking for Azure AD Domain Services | Microsoft Docs description: In this tutorial, you learn how to create and configure an Azure virtual network subnet or network peering for an Azure Active Directory Domain Services managed domain using the Azure portal. -+
active-directory-domain-services Tutorial Configure Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-password-hash-sync.md
Title: Enable password hash sync for Azure AD Domain Services | Microsoft Docs description: In this tutorial, learn how to enable password hash synchronization using Azure AD Connect to an Azure Active Directory Domain Services managed domain. -+
active-directory-domain-services Tutorial Create Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-forest-trust.md
Title: Tutorial - Create a forest trust in Azure AD Domain Services | Microsoft
description: Learn how to create a one-way outbound forest to an on-premises AD DS domain in the Azure portal for Azure AD Domain Services -+
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
Title: Tutorial - Create a customized Azure Active Directory Domain Services managed domain | Microsoft Docs description: In this tutorial, you learn how to create and configure a customized Azure Active Directory Domain Services managed domain and specify advanced configuration options using the Azure portal. -+
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Title: Tutorial - Create an Azure Active Directory Domain Services managed domain | Microsoft Docs description: In this tutorial, you learn how to create and configure an Azure Active Directory Domain Services managed domain using the Azure portal. -+
active-directory-domain-services Tutorial Create Management Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-management-vm.md
Title: Tutorial - Create a management VM for Azure Active Directory Domain Services | Microsoft Docs description: In this tutorial, you learn how to create and configure a Windows virtual machine that you use to administer Azure Active Directory Domain Services managed domain. -+
active-directory-domain-services Tutorial Create Replica Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-replica-set.md
Title: Tutorial - Create a replica set in Azure AD Domain Services | Microsoft D
description: Learn how to create and use replica sets in the Azure portal for service resiliency with Azure AD Domain Services -+
active-directory-domain-services Tutorial Perform Disaster Recovery Drill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-perform-disaster-recovery-drill.md
Title: Tutorial - Perform a disaster recovery drill in Azure AD Domain Services
description: Learn how to perform a disaster recovery drill using replica sets in Azure AD Domain Services -+
active-directory-domain-services Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/use-azure-monitor-workbooks.md
Title: Use Azure Monitor Workbooks with Azure AD Domain Services | Microsoft Docs description: Learn how to use Azure Monitor Workbooks to review security audits and understand issues in an Azure Active Directory Domain Services managed domain. -+
active-directory Concept Password Ban Bad On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-on-premises.md
Previously updated : 07/17/2020 Last updated : 08/22/2022
Azure AD Password Protection is designed with the following principles in mind:
* Domain controllers (DCs) never have to communicate directly with the internet. * No new network ports are opened on DCs. * No AD DS schema changes are required. The software uses the existing AD DS *container* and *serviceConnectionPoint* schema objects.
-* No minimum AD DS domain or forest functional level (DFL/FFL) is required.
+* Any supported AD DS domain or forest functional level can be used.
* The software doesn't create or require accounts in the AD DS domains that it protects. * User clear-text passwords never leave the domain controller, either during password validation operations or at any other time. * The software isn't dependent on other Azure AD features. For example, Azure AD password hash sync (PHS) isn't related or required for Azure AD Password Protection.
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
Previously updated : 02/22/2022 Last updated : 08/22/2022
Run the following steps in each domain and forest in your organization that cont
1. Run the following PowerShell commands to create a new Azure AD Kerberos Server object both in your on-premises Active Directory domain and in your Azure Active Directory tenant. ### Example 1 prompt for all credentials
- > [!NOTE]
- > Replace `contoso.corp.com` in the following example with your on-premises Active Directory domain name.
```powershell # Specify the on-premises Active Directory domain. A new Azure AD # Kerberos Server object will be created in this Active Directory domain.
- $domain = "contoso.corp.com"
+ $domain = $env:USERDNSDOMAIN
# Enter an Azure Active Directory global administrator username and password. $cloudCred = Get-Credential -Message 'An Active Directory user who is a member of the Global Administrators group for Azure AD.'
Run the following steps in each domain and forest in your organization that cont
```powershell # Specify the on-premises Active Directory domain. A new Azure AD # Kerberos Server object will be created in this Active Directory domain.
- $domain = "contoso.corp.com"
+ $domain = $env:USERDNSDOMAIN
# Enter an Azure Active Directory global administrator username and password. $cloudCred = Get-Credential
Run the following steps in each domain and forest in your organization that cont
```powershell # Specify the on-premises Active Directory domain. A new Azure AD # Kerberos Server object will be created in this Active Directory domain.
- $domain = "contoso.corp.com"
+ $domain = $env:USERDNSDOMAIN
# Enter a UPN of an Azure Active Directory global administrator $userPrincipalName = "administrator@contoso.onmicrosoft.com"
Run the following steps in each domain and forest in your organization that cont
### Example 4 prompt for cloud credentials using modern authentication > [!NOTE] > If you are working on a domain-joined machine with an account that has domain administrator privileges and your organization protects password-based sign-in and enforces modern authentication methods such as multifactor authentication, FIDO2, or smart card technology, you must use the `-UserPrincipalName` parameter with the User Principal Name (UPN) of a global administrator. And you can skip the "-DomainCredential" parameter.
- > - Replace `contoso.corp.com` in the following example with your on-premises Active Directory domain name.
- > - Replace `administrator@contoso.onmicrosoft.com` in the following example with the UPN of a global administrator.
+ > - Replace `administrator@contoso.onmicrosoft.com` in the following example with the UPN of a global administrator.
```powershell # Specify the on-premises Active Directory domain. A new Azure AD # Kerberos Server object will be created in this Active Directory domain.
- $domain = "contoso.corp.com"
+ $domain = $env:USERDNSDOMAIN
# Enter a UPN of an Azure Active Directory global administrator $userPrincipalName = "administrator@contoso.onmicrosoft.com"
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
Previously updated : 08/17/2022 Last updated : 08/22/2022
The following core requirements apply:
The following requirements apply to the Azure AD Password Protection DC agent:
-* All machines where the Azure AD Password Protection DC agent software will be installed must run Windows Server 2012 or later, including Windows Server Core editions.
- * The Active Directory domain or forest doesn't need to be at Windows Server 2012 domain functional level (DFL) or forest functional level (FFL). As mentioned in [Design Principles](concept-password-ban-bad-on-premises.md#design-principles), there's no minimum DFL or FFL required for either the DC agent or proxy software to run.
+* Machines where the Azure AD Password Protection DC agent software will be installed can run any supported version of Windows Server, including Windows Server Core editions.
+ * The Active Directory domain or forest can be any supported functional level.
* All machines where the Azure AD Password Protection DC agent will be installed must have .NET 4.7.2 installed. * If .NET 4.7.2 is not already installed, download and run the installer found at [The .NET Framework 4.7.2 offline installer for Windows](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2). * Any Active Directory domain that runs the Azure AD Password Protection DC agent service must use Distributed File System Replication (DFSR) for sysvol replication.
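As a quick pre-install check, the .NET Framework release value can be read from the registry on each target server; a release value of 461808 or higher indicates .NET Framework 4.7.2 or later. This is a hedged sketch using the standard registry location for the 4.x runtime.

```powershell
# Read the .NET Framework 4.x release value from the registry on the target server.
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release

if ($release -ge 461808) {
    Write-Output ".NET Framework 4.7.2 or later is present (release $release)."
} else {
    Write-Output "Install .NET Framework 4.7.2 before deploying the Azure AD Password Protection DC agent."
}
```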
active-directory Tutorial Enable Cloud Sync Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md
Previously updated : 05/31/2022 Last updated : 08/22/2022
Permissions for cloud sync are configured by default. If permissions need to be
### Enable password writeback in Azure AD Connect cloud sync
-For public preview, you need to enable password writeback in Azure AD Connect cloud sync by using the Set-AADCloudSyncPasswordWritebackConfiguration cmdlet on the servers with the provisioning agents. You will need global administrator credentials:
+For public preview, you need to enable password writeback in Azure AD Connect cloud sync by running `Set-AADCloudSyncPasswordWritebackConfiguration` on any server with the provisioning agent. You will need global administrator credentials:
```powershell Import-Module 'C:\\Program Files\\Microsoft Azure AD Connect Provisioning Agent\\Microsoft.CloudSync.Powershell.dll'
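# The lines below sketch how the cmdlet is typically invoked once the module is imported.
# The -Enable and -Credential parameter names are an assumption; confirm them with
# Get-Help Set-AADCloudSyncPasswordWritebackConfiguration.
$globalAdminCred = Get-Credential   # Global Administrator credentials
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential $globalAdminCred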
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
# Onboard a Microsoft Azure subscription
-This article describes how to onboard a Microsoft Azure subscription or subscriptions on Permissions Management (Permissions Management). Onboarding a subscription creates a new authorization system to represent the Azure subscription in Permissions Management.
+This article describes how to onboard a Microsoft Azure subscription or subscriptions on Permissions Management. Onboarding a subscription creates a new authorization system to represent the Azure subscription in Permissions Management.
> [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
To add Permissions Management to your Azure AD tenant:
1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
- - In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+ - In the Permissions Management home page, select **Settings** (the gear icon, top right), and then select the **Data Collectors** subtab.
1. On the **Data Collectors** dashboard, select **Azure**, and then select **Create Configuration**.
Choose from 3 options to manage Azure subscriptions.
#### Option 1: Automatically manage
-This option allows subscriptions to be automatically detected and monitored without additional configuration. Steps to detect list of subscriptions and onboard for collection:
+This option allows subscriptions to be automatically detected and monitored without extra configuration. A key benefit of automatic management is that any current or future subscriptions found get onboarded automatically. Steps to detect the list of subscriptions and onboard them for collection:
-- Grant Reader role to Cloud Infrastructure Entitlement Management application at management group or subscription scope.
+- First, grant the Reader role to the Cloud Infrastructure Entitlement Management application at the management group or subscription scope.
-Any current or future subscriptions found get onboarded automatically.
-
- To view status of onboarding after saving the configuration:
-
-1. In the MEPM portal, click the cog on the top right-hand side.
-1. Navigate to data collectors tab.
+1. In the EPM portal, click the cog on the top right-hand side.
+1. Navigate to data collectors tab
+1. Ensure 'Azure' is selected
1. Click 'Create Configuration' 1. For onboarding mode, select 'Automatically Manage'
-1. Click 'Verify Now & Save'
+
+The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlement Management application. This can be done manually in the Entra console, or programmatically with PowerShell or the Azure CLI.
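If you take the PowerShell route, a minimal sketch with the Az modules looks like the following; the service principal display name and the management group ID are assumptions to verify in your tenant.

```powershell
# Find the service principal that Permissions Management uses for data collection
# (the display name below is an assumption; confirm it in your tenant).
$sp = Get-AzADServicePrincipal -DisplayName "Cloud Infrastructure Entitlement Management"

# Grant the Reader role at management group scope; swap in a subscription scope if preferred.
New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Reader" `
    -Scope "/providers/Microsoft.Management/managementGroups/<management-group-id>"
```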
+
+Finally, click 'Verify Now & Save'.
+
+To view status of onboarding after saving the configuration:
+ 1. Collectors will now be listed and change through status types. For each collector listed with a status of "Collected Inventory", click on that status to view further information. 1. You can then view subscriptions on the In Progress page
Any current or future subscriptions found get onboarded automatically.
You have the ability to specify only certain subscriptions to manage and monitor with MEPM (up to 10 per collector). Follow the steps below to configure these subscriptions to be monitored: 1. For each subscription you wish to manage, ensure that the 'Reader' role has been granted to the Cloud Infrastructure Entitlement Management application for this subscription.
-1. In the MEPM portal, click the cog on the top right-hand side.
+1. In the EPM portal, click the cog on the top right-hand side.
1. Navigate to data collectors tab
+1. Ensure 'Azure' is selected
1. Click 'Create Configuration' 1. Select 'Enter Authorization Systems' 1. Under the Subscription IDs section, enter a desired subscription ID into the input box. Click the "+" up to 9 additional times, putting a single subscription ID into each respective input box.
To view status of onboarding after saving the configuration:
This option detects all subscriptions that are accessible by the Cloud Infrastructure Entitlement Management application.
-1. Grant Reader role to Cloud Infrastructure Entitlement Management application at management group or subscription(s) scope.
-1. Click Verify and Save.
+- First, grant the Reader role to the Cloud Infrastructure Entitlement Management application at the management group or subscription scope.
+
+1. In the EPM portal, click the cog on the top right-hand side.
+1. Navigate to data collectors tab
+1. Ensure 'Azure' is selected
+1. Click 'Create Configuration'
+1. For onboarding mode, select 'Automatically Manage'
+
+The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlement Management application. You can do this manually in the Entra console, or programmatically with PowerShell or the Azure CLI.
+
+Finally, click 'Verify Now & Save'.
+
+To view status of onboarding after saving the configuration:
+ 1. Navigate to the newly created Data Collector row under Azure data collectors. 1. Click on the Status column when the row has "Pending" status 1. To onboard and start collection, choose specific subscriptions from the detected list and consent for collection.
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
Choose from 3 options to manage GCP projects.
This option allows projects to be automatically detected and monitored without additional configuration. Steps to detect list of projects and onboard for collection: -- Grant Viewer and Security Reviewer role to service account created in previous step at organization, folder or project scope.
+First, grant the Viewer and Security Reviewer roles to the service account created in the previous step at the organization, folder, or project scope.
+
+Once done, the steps listed on the screen show how to do this manually in the GCP console, or programmatically with the gcloud CLI.
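If you use the gcloud CLI, a minimal sketch of the organization-scope bindings looks like the following; the service account email and organization ID are placeholders, and the commands differ slightly for folder or project scope.

```powershell
# Service account created in the previous step and the target organization (placeholder values).
$member = "serviceAccount:<service-account-name>@<project-id>.iam.gserviceaccount.com"
$orgId = "<organization-id>"

# Grant Viewer and Security Reviewer at the organization scope.
gcloud organizations add-iam-policy-binding $orgId --member=$member --role="roles/viewer"
gcloud organizations add-iam-policy-binding $orgId --member=$member --role="roles/iam.securityReviewer"
```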
+
+Once this has been configured, click Next, then 'Verify Now & Save'.
Any current or future projects found get onboarded automatically. To view status of onboarding after saving the configuration: -- Navigate to data collectors tab. -- Click on the status of the data collector.
+- Navigate to data collectors tab
+- Click on the status of the data collector
- View projects on the In Progress page #### Option 2: Enter authorization systems
To view status of onboarding after saving the configuration:
This option detects all projects that are accessible by the Cloud Infrastructure Entitlement Management application. -- Grant Viewer and Security Reviewer role to service account created in previous step at organization, folder or project scope. -- Click Verify and Save. -- Navigate to newly create Data Collector row under GCP data collectors.
+- First, grant the Viewer and Security Reviewer roles to the service account created in the previous step at the organization, folder, or project scope
+- Once done, the steps listed on the screen show how to do this manually in the GCP console, or programmatically with the gcloud CLI
+- Click Next
+- Click 'Verify Now & Save'
+- Navigate to the newly created Data Collector row under GCP data collectors
- Click on Status column when the row has "Pending" status -- To onboard and start collection, choose specific ones from the detected list and consent for collection
+- To onboard and start collection, choose specific ones from the detected list and consent for collection
### 3. Set up GCP member projects.
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
Organizations have to consider permissions management as a central piece of thei
Permissions Management allows customers to address three key use cases: *discover*, *remediate*, and *monitor*.
+Permissions Management is designed with the expectation that your organization steps through each of the following phases in sequence to gain insight into permissions across the organization. This is because you generally can't act on what hasn't yet been discovered, and likewise you can't continually evaluate what hasn't yet been remediated.
++ ### Discover Customers can assess permission risks by evaluating the gap between permissions granted and permissions used.
Permissions Management deepens Zero Trust security strategies by augmenting the
- Automate least privilege access: Use access analytics to ensure identities have the right permissions, at the right time. - Unify access policies across infrastructure as a service (IaaS) platforms: Implement consistent security policies across your cloud infrastructure. -
+Once your organization has explored and implemented the discover, remediate, and monitor phases, it has established one of the core pillars of a modern Zero Trust security strategy.
## Next steps
active-directory Entitlement Management Access Package Auto Assignment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md
During this preview, you can have at most one automatic assignment policy in an
This article describes how to create an access package automatic assignment policy for an existing access package.
+## Before you begin
+
+You'll need to have attributes populated on the users who will be in scope for being assigned access. The attributes you can use in the rules criteria of an access package assignment policy are those attributes listed in [supported properties](../enterprise-users/groups-dynamic-membership.md#supported-properties), along with [extension attributes and custom extension properties](../enterprise-users/groups-dynamic-membership.md#extension-properties-and-custom-extension-properties). These attributes can be brought into Azure AD from [Graph](/graph/api/resources/user?view=graph-rest-beta), an HR system such as [SuccessFactors](../app-provisioning/sap-successfactors-integration-reference.md), [Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md) or [Azure AD Connect sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md).
+ ## Create an automatic assignment policy (Preview) To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new policy for an access package.
To create a policy for an access package, you need to start from the access pack
1. Provide a dynamic membership rule, using the [membership rule builder](../enterprise-users/groups-dynamic-membership.md) or by clicking **Edit** on the rule syntax text box. > [!NOTE]
- > The rule builder might not be able to display some rules constructed in the text box. For more information, see [rule builder in the Azure portal](/enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal).
+ > The rule builder might not be able to display some rules constructed in the text box, and validating a rule currently requires you to be in the Global administrator role. For more information, see [rule builder in the Azure portal](/enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal).
![Screenshot of an access package automatic assignment policy rule configuration.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-rule-configuration.png)
active-directory Manage Guest Access With Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-guest-access-with-access-reviews.md
description: Manage guest users as members of a group or assigned to an applicat
documentationcenter: '' -+ editor: markwahl-msft na Previously updated : 4/16/2021 Last updated : 08/23/2021
For more information, [License requirements](access-reviews-overview.md#license-
First, you must be assigned one of the following roles: - global administrator - User administrator-- (Preview) M365 or AAD Security Group owner of the group to be reviewed
+- (Preview) Microsoft 365 or Azure AD Security Group owner of the group to be reviewed
Then, go to the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) to ensure that access reviews is ready for your organization.
In some organizations, guests might not be aware of their group memberships.
4. After the reviewers give input, stop the access review. For more information, see [Complete an access review of groups or applications](complete-access-review.md).
-5. Remove guest access for guests who were denied, didn't complete the review, or didn't previously accept their invitation. If some of the guests are contacts who were selected to participate in the review or they didn't previously accept an invitation, you can disable their accounts by using the Azure portal or PowerShell. If the guest no longer needs access and isn't a contact, you can remove their user object from your directory by using the Azure portal or PowerShell to delete the guest user object.
+5. You can automatically delete guest users' Azure AD B2B accounts as part of an access review when you configure an access review for **Select Teams + Groups**. This option is not available for **All Microsoft 365 groups with guest users**.
+
+![Screenshot showing page to create access review.](media/manage-guest-access-with-access-reviews/new-access-review.png)
+
+To do so, select **Auto apply results to resource**, as this automatically removes the user from the resource. **If reviewers don't respond** should be set to **Remove access**, and **Action to apply on denied guest users** should be set to **Block from signing in for 30 days then remove user from the tenant**.
+
+This will immediately block sign-in to the guest user account and then automatically delete their Azure AD B2B account after 30 days.
## Next steps
active-directory Concept Identity Protection B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-b2b.md
Previously updated : 05/03/2021 Last updated : 08/22/2022
# Identity Protection and B2B users
-Identity Protection detects compromised credentials for Azure AD users. If your credential is detected as compromised, it means that someone else may have your password and be using it illegitimately. To prevent further risk to your account, it is important to securely reset your password so that the bad actor can no longer use your compromised password. Identity Protection marks accounts that may be compromised as "at risk."
+Identity Protection detects compromised credentials for Azure AD users. If your credential is detected as compromised, it means that someone else may have your password and be using it illegitimately. To prevent further risk to your account, it's important to securely reset your password so that the bad actor can no longer use your compromised password. Identity Protection marks accounts that may be compromised as "at risk."
-You can use your organizational credentials to sign-in to another organization as a guest. This process is referred to [business-to-business or B2B collaboration](../external-identities/what-is-b2b.md). Organizations can configure policies to block users from signing-in if their credentials are considered [at risk](concept-identity-protection-risks.md). If your account is at risk and you are blocked from signing-in to another organization as a guest, you may be able to self-remediate your account using the following steps. If your organization has not enabled self-service password reset, your administrator will need to manually remediate your account.
+You can use your organizational credentials to sign-in to another organization as a guest. This process is referred to [business-to-business or B2B collaboration](../external-identities/what-is-b2b.md). Organizations can configure policies to block users from signing-in if their credentials are considered [at risk](concept-identity-protection-risks.md). If your account is at risk and you're blocked from signing-in to another organization as a guest, you may be able to self-remediate your account using the following steps. If your organization hasn't enabled self-service password reset, your administrator will need to manually remediate your account.
## How to unblock your account
-If you are attempting to sign-in to another organization as a guest and are blocked due to risk, you will see the following block message: "Your account is blocked. We've detected suspicious activity on your account."
+If you're attempting to sign-in to another organization as a guest and are blocked due to risk, you'll see the following block message: "Your account is blocked. We've detected suspicious activity on your account."
![Guest account blocked, contact your organization's administrator](./media/concept-identity-protection-b2b/risky-guest-user-blocked.png) If your organization enables it, you can use self-service password reset to unblock your account and get your credentials back to a safe state.
-1. Go to the [Password reset portal](https://passwordreset.microsoftonline.com/) and initiate the password reset. If self-service password reset is not enabled for your account and you cannot proceed, reach out to your IT administrator with the information [below](#how-to-remediate-a-users-risk-as-an-administrator).
-2. If self-service password reset is enabled for your account, you will be prompted to verify your identity using security methods prior to changing your password. For assistance, see the [Reset your work or school password](https://support.microsoft.com/account-billing/reset-your-work-or-school-password-using-security-info-23dde81f-08bb-4776-ba72-e6b72b9dda9e) article.
+1. Go to the [Password reset portal](https://passwordreset.microsoftonline.com/) and initiate the password reset. If self-service password reset isn't enabled for your account and you can't proceed, reach out to your IT administrator with the information [below](#how-to-remediate-a-users-risk-as-an-administrator).
+2. If self-service password reset is enabled for your account, you'll be prompted to verify your identity using security methods prior to changing your password. For assistance, see the [Reset your work or school password](https://support.microsoft.com/account-billing/reset-your-work-or-school-password-using-security-info-23dde81f-08bb-4776-ba72-e6b72b9dda9e) article.
3. Once you have successfully and securely reset your password, your user risk will be remediated. You can now try again to sign in as a guest user.
-If after resetting your password you are still blocked as a guest due to risk, reach out to your organization's IT administrator.
+If after resetting your password you're still blocked as a guest due to risk, reach out to your organization's IT administrator.
## How to remediate a user's risk as an administrator
-Identity Protection automatically detects risky users for Azure AD tenants. If you have not previously checked the Identity Protection reports, there may be a large number of users with risk. Since resource tenants can apply user risk policies to guest users, your users can be blocked due to risk even if they were previously unaware of their risky state. If your user reports they have been blocked as a guest user in another tenant due to risk, it is important to remediate the user to protect their account and enable collaboration.
+Identity Protection automatically detects risky users for Azure AD tenants. If you haven't previously checked the Identity Protection reports, there may be a large number of users with risk. Since resource tenants can apply user risk policies to guest users, your users can be blocked due to risk even if they were previously unaware of their risky state. If your user reports they've been blocked as a guest user in another tenant due to risk, it's important to remediate the user to protect their account and enable collaboration.
### Reset the user's password
-From the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu, search for the impacted user using the 'User' filter. Select the impacted user in the report and click "Reset password" in the top toolbar. The user will be assigned a temporary password that must be changed on the next sign in. This process will remediate their user risk and bring their credentials back to a safe state.
+From the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu, search for the impacted user using the 'User' filter. Select the impacted user in the report and select "Reset password" in the top toolbar. The user will be assigned a temporary password that must be changed on the next sign-in. This process will remediate their user risk and bring their credentials back to a safe state.
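If you prefer scripting over the portal, a hedged Microsoft Graph PowerShell sketch of a comparable reset is below. The UPN and temporary password are placeholders, and it assumes the Microsoft.Graph.Users module with sufficient admin consent; the portal's "Reset password" action also handles risk remediation for you, so treat this only as an illustration.

```powershell
# Minimal sketch: assign a temporary password that must be changed at next sign-in.
# Assumes Microsoft.Graph.Users and an admin role allowed to reset the target user's password.
Connect-MgGraph -Scopes "User.ReadWrite.All"

Update-MgUser -UserId "user@contoso.com" -PasswordProfile @{
    Password                      = "TempP@ssw0rd!2345"   # placeholder temporary password
    ForceChangePasswordNextSignIn = $true
}
```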
### Manually dismiss user's risk
-If password reset is not an option for you from the Azure AD portal, you can choose to manually dismiss user risk. Dismissing user risk does not have any impact on the user's existing password, but this process will change the user's Risk State from At Risk to Dismissed. It is important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state.
+If password reset isn't an option for you from the Azure AD portal, you can choose to manually dismiss user risk. Dismissing user risk doesn't have any impact on the user's existing password, but this process will change the user's Risk State from At Risk to Dismissed. It's important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state.
-To dismiss user risk, go to the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu. Search for the impacted user using the 'User' filter and click on the user. Click on "dismiss user risk" option from the top toolbar. This action may take a few minutes to complete and update the user risk state in the report.
+To dismiss user risk, go to the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu. Search for the impacted user using the 'User' filter and select the user. Select the "dismiss user risk" option from the top toolbar. This action may take a few minutes to complete and update the user risk state in the report.
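The same dismissal can also be scripted. Here's a minimal sketch assuming the Microsoft.Graph.Identity.SignIns module and IdentityRiskyUser.ReadWrite.All consent; the user object ID is a placeholder.

```powershell
# Minimal sketch: dismiss a user's risk (Risk state moves from 'At risk' to 'Dismissed').
# Assumes Microsoft.Graph.Identity.SignIns and IdentityRiskyUser.ReadWrite.All consent.
Connect-MgGraph -Scopes "IdentityRiskyUser.ReadWrite.All"

# Placeholder object ID of the at-risk user (same as the Azure AD user object ID).
$riskyUserId = "11111111-2222-3333-4444-555555555555"

# Dismiss the user's risk; remember to change the user's password through another channel.
Invoke-MgDismissRiskyUser -UserIds @($riskyUserId)
```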
To learn more about Identity Protection, see [What is Identity Protection](overview-identity-protection.md).
To learn more about Identity Protection, see [What is Identity Protection](overv
The user risk for B2B collaboration users is evaluated at their home directory. The real-time sign-in risk for these users is evaluated at the resource directory when they try to access the resource. With Azure AD B2B collaboration, organizations can enforce risk-based policies for B2B users using Identity Protection. These policies can be configured in two ways: -- Administrators can configure the built-in Identity Protection risk-based policies, that apply to all apps, that include guest users.-- Administrators can configure their Conditional Access policies, using sign-in risk as a condition, that includes guest users.
+- Administrators can configure the built-in Identity Protection risk-based policies, that apply to all apps, and include guest users.
+- Administrators can configure their Conditional Access policies, using sign-in risk as a condition, and include guest users.
## Limitations of Identity Protection for B2B collaboration users
-There are limitations in the implementation of Identity Protection for B2B collaboration users in a resource directory due to their identity existing in their home directory. The main limitations are as follows:
+There are limitations in the implementation of Identity Protection for B2B collaboration users in a resource directory, due to their identity existing in their home directory. The main limitations are as follows:
- If a guest user triggers the Identity Protection user risk policy to force password reset, **they will be blocked**. This block is due to the inability to reset passwords in the resource directory. - **Guest users do not appear in the risky users report**. This limitation is due to the risk evaluation occurring in the B2B user's home directory.
There are limitations in the implementation of Identity Protection for B2B colla
### Why can't I remediate risky B2B collaboration users in my directory?
-The risk evaluation and remediation for B2B users occurs in their home directory. Due to this fact, the guest users do not appear in the risky users report in the resource directory and admins in the resource directory cannot force a secure password reset for these users.
+The risk evaluation and remediation for B2B users occurs in their home directory. Due to this fact, the guest users don't appear in the risky users report in the resource directory and admins in the resource directory can't force a secure password reset for these users.
### What do I do if a B2B collaboration user was blocked due to a risk-based policy in my organization?
-If a risky B2B user in your directory is blocked by your risk-based policy, the user will need to remediate that risk in their home directory. Users can remediate their risk by performing a secure password reset in their home directory [as outlined above](#how-to-unblock-your-account). If they do not have self-service password reset enabled in their home directory, they will need to contact their own organization's IT Staff to have an administrator manually dismiss their risk or reset their password.
+If a risky B2B user in your directory is blocked by your risk-based policy, the user will need to remediate that risk in their home directory. Users can remediate their risk by performing a secure password reset in their home directory [as outlined above](#how-to-unblock-your-account). If they don't have self-service password reset enabled in their home directory, they'll need to contact their own organization's IT Staff to have an administrator manually dismiss their risk or reset their password.
### How do I prevent B2B collaboration users from being impacted by risk-based policies?
active-directory Concept Identity Protection Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-policies.md
Previously updated : 05/20/2020 Last updated : 08/22/2022
Azure Active Directory Identity Protection includes three default policies that
## Azure AD MFA registration policy
-Identity Protection can help organizations roll out Azure AD Multi-Factor Authentication (MFA) using a Conditional Access policy requiring registration at sign-in. Enabling this policy is a great way to ensure new users in your organization have registered for MFA on their first day. Multi-factor authentication is one of the self-remediation methods for risk events within Identity Protection. Self-remediation allows your users to take action on their own to reduce helpdesk call volume.
+Identity Protection can help organizations roll out Azure AD Multifactor Authentication (MFA) using a Conditional Access policy requiring registration at sign-in. Enabling this policy is a great way to ensure new users in your organization have registered for MFA on their first day. Multifactor authentication is one of the self-remediation methods for risk events within Identity Protection. Self-remediation allows your users to take action on their own to reduce helpdesk call volume.
-More information about Azure AD Multi-Factor Authentication can be found in the article, [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md).
+More information about Azure AD Multifactor Authentication can be found in the article, [How it works: Azure AD Multifactor Authentication](../authentication/concept-mfa-howitworks.md).
## Sign-in risk policy
-Identity Protection analyzes signals from each sign-in, both real-time and offline, and calculates a risk score based on the probability that the sign-in wasn't performed by the user. Administrators can make a decision based on this risk score signal to enforce organizational requirements. Administrators can choose to block access, allow access, or allow access but require multi-factor authentication.
+Identity Protection analyzes signals from each sign-in, both real-time and offline, and calculates a risk score based on the probability that the sign-in wasn't really the user. Administrators can make a decision based on this risk score signal to enforce organizational requirements like:
-If risk is detected, users can perform multi-factor authentication to self-remediate and close the risky sign-in event to prevent unnecessary noise for administrators.
+- Block access
+- Allow access
+- Require multifactor authentication
+
+If risk is detected, users can perform multifactor authentication to self-remediate and close the risky sign-in event to prevent unnecessary noise for administrators.
> [!NOTE]
-> Users must have previously registered for Azure AD Multi-Factor Authentication before triggering the sign-in risk policy.
+> Users must have previously registered for Azure AD Multifactor Authentication before triggering the sign-in risk policy.
### Custom Conditional Access policy
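As one possible illustration of such a policy, the sketch below creates a sign-in-risk-based Conditional Access policy in report-only mode using Microsoft Graph PowerShell. It's a hedged example rather than the documented procedure: the display name is made up, and it assumes the Microsoft.Graph.Identity.SignIns module and Policy.ReadWrite.ConditionalAccess consent.

```powershell
# Minimal sketch: Conditional Access policy requiring MFA for medium/high sign-in risk,
# created in report-only mode so its impact can be evaluated before enforcement.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$params = @{
    displayName   = "Sign-in risk - require MFA (sketch)"
    state         = "enabledForReportingButNotEnforced"
    conditions    = @{
        applications     = @{ includeApplications = @("All") }
        users            = @{ includeUsers = @("All") }
        signInRiskLevels = @("high", "medium")
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("mfa")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```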
If risk is detected, users can perform self-service password reset to self-remed
## Next steps - [Enable Azure AD self-service password reset](../authentication/howto-sspr-deployment.md)--- [Enable Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)--- [Enable Azure AD Multi-Factor Authentication registration policy](howto-identity-protection-configure-mfa-policy.md)-
+- [Enable Azure AD Multifactor Authentication](../authentication/howto-mfa-getstarted.md)
+- [Enable Azure AD Multifactor Authentication registration policy](howto-identity-protection-configure-mfa-policy.md)
- [Enable sign-in and user risk policies](howto-identity-protection-configure-risk-policies.md)
active-directory Concept Identity Protection Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-security-overview.md
Previously updated : 07/02/2020 Last updated : 08/22/2022
The 'Security overview' is broadly divided into two sections:
- Tiles, on the right, highlight the key ongoing issues in your organization and suggest how to quickly take action. :::image type="content" source="./media/concept-identity-protection-security-overview/01.png" alt-text="Screenshot of the Azure portal Security overview. Bar charts show the count of risks over time. Tiles summarize information on users and sign-ins." border="false":::
-
+
+You can find the security overview page in the **Azure portal** > **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**.
+ ## Trends ### New risky users detected
-This chart shows the number of new risky users that were detected over the chosen time period. You can filter the view of this chart by user risk level (low, medium, high). Hover over the UTC date increments to see the number of risky users detected for that day. A click on this chart will bring you to the ΓÇÿRisky usersΓÇÖ report. To remediate users that are at risk, consider changing their password.
+This chart shows the number of new risky users that were detected over the chosen time period. You can filter the view of this chart by user risk level (low, medium, high). Hover over the UTC date increments to see the number of risky users detected for that day. Selecting this chart will bring you to the 'Risky users' report. To remediate users that are at risk, consider changing their password.
### New risky sign-ins detected
-This chart shows the number of risky sign-ins detected over the chosen time period. You can filter the view of this chart by the sign-in risk type (real-time or aggregate) and the sign-in risk level (low, medium, high). Unprotected sign-ins are successful real-time risk sign-ins that were not MFA challenged. (Note: Sign-ins that are risky because of offline detections cannot be protected in real-time by sign-in risk policies). Hover over the UTC date increments to see the number of sign-ins detected at risk for that day. A click on this chart will bring you to the ΓÇÿRisky sign-insΓÇÖ report.
+This chart shows the number of risky sign-ins detected over the chosen time period. You can filter the view of this chart by the sign-in risk type (real-time or aggregate) and the sign-in risk level (low, medium, high). Unprotected sign-ins are successful real-time risk sign-ins that weren't MFA challenged. (Note: Sign-ins that are risky because of offline detections can't be protected in real-time by sign-in risk policies). Hover over the UTC date increments to see the number of sign-ins detected at risk for that day. Selecting this chart will bring you to the 'Risky sign-ins' report.
## Tiles ### High risk users
-The ΓÇÿHigh risk usersΓÇÖ tile shows the latest count of users with high probability of identity compromise. These should be a top priority for investigation. A click on the ΓÇÿHigh risk usersΓÇÖ tile will redirect to a filtered view of the ΓÇÿRisky usersΓÇÖ report showing only users with a risk level of high. Using this report, you can learn more and remediate these users with a password reset.
+The 'High risk users' tile shows the latest count of users with high probability of identity compromise. These users should be a top priority for investigation. Selecting the 'High risk users' tile will redirect to a filtered view of the 'Risky users' report showing only users with a risk level of high. Using this report, you can learn more and remediate these users with a password reset.
:::image type="content" source="./media/concept-identity-protection-security-overview/02.png" alt-text="Screenshot of the Azure portal Security overview, with tiles visible for high-risk and medium-risk users and other risk factors." border="false"::: ### Medium risk users
-The ΓÇÿMedium risk usersΓÇÖ tile shows the latest count of users with medium probability of identity compromise. A click on ΓÇÿMedium risk usersΓÇÖ tile will redirect to a filtered view of the ΓÇÿRisky usersΓÇÖ report showing only users with a risk level of medium. Using this report, you can further investigate and remediate these users.
+The 'Medium risk users' tile shows the latest count of users with medium probability of identity compromise. Selecting the 'Medium risk users' tile will take you to a view of the 'Risky users' report showing only users with a risk level of medium. Using this report, you can further investigate and remediate these users.
### Unprotected risky sign-ins
-The ΓÇÿUnprotected risky sign-ins' tile shows the last weekΓÇÖs count of successful, real-time risky sign-ins that were not blocked or MFA challenged by a Conditional Access policy, Identity Protection risk policy, or per-user MFA. These are potentially compromised logins that were successful and not MFA challenged. To protect such sign-ins in future, apply a sign-in risk policy. A click on ΓÇÿUnprotected risky sign-ins' tile will redirect to the sign-in risk policy configuration blade where you can configure the sign-in risk policy to require MFA on a sign-in with a specified risk level.
+The 'Unprotected risky sign-ins' tile shows the last week's count of successful, real-time risky sign-ins that weren't blocked or MFA challenged by a Conditional Access policy, Identity Protection risk policy, or per-user MFA. These successful sign-ins are potentially compromised and not challenged for MFA. To protect such sign-ins in future, apply a sign-in risk policy. Selecting the 'Unprotected risky sign-ins' tile will take you to the sign-in risk policy configuration blade where you can configure the sign-in risk policy.
### Legacy authentication
-The ΓÇÿLegacy authenticationΓÇÖ tile shows the last weekΓÇÖs count of legacy authentications with risk present in your organization. Legacy authentication protocols do not support modern security methods such as an MFA. To prevent legacy authentication, you can apply a Conditional Access policy. A click on ΓÇÿLegacy authenticationΓÇÖ tile will redirect you to the ΓÇÿIdentity Secure ScoreΓÇÖ.
+The 'Legacy authentication' tile shows the last week's count of legacy authentications with risk present in your organization. Legacy authentication protocols don't support modern security methods such as MFA. To prevent legacy authentication, you can apply a Conditional Access policy. Selecting the 'Legacy authentication' tile will redirect you to the 'Identity Secure Score'.
### Identity Secure Score
-The Identity Secure Score measures and compares your security posture to industry patterns. If you click on ΓÇÿIdentity Secure Score (Preview)ΓÇÖ tile, it will redirect to the ΓÇÿIdentity Secure ScoreΓÇÖ blade where you can learn more about improving your security posture.
+The Identity Secure Score measures and compares your security posture to industry patterns. If you select the **Identity Secure Score** tile, it will redirect to [Identity Secure Score](../fundamentals/identity-secure-score.md) where you can learn more about improving your security posture.
## Next steps - [What is risk](concept-identity-protection-risks.md)- - [Policies available to mitigate risks](concept-identity-protection-policies.md)
active-directory Howto Identity Protection Configure Mfa Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md
Title: Configure the MFA registration policy - Azure Active Directory Identity Protection
-description: Learn how to configure the Azure AD Identity Protection multi-factor authentication registration policy.
+description: Learn how to configure the Azure AD Identity Protection multifactor authentication registration policy.
Previously updated : 06/05/2020 Last updated : 08/22/2022
-# How To: Configure the Azure AD Multi-Factor Authentication registration policy
+# How To: Configure the Azure AD Multifactor Authentication registration policy
-Azure AD Identity Protection helps you manage the roll-out of Azure AD Multi-Factor Authentication (MFA) registration by configuring a Conditional Access policy to require MFA registration no matter what modern authentication app you are signing in to.
+Azure Active Directory (Azure AD) Identity Protection helps you manage the roll-out of Azure AD Multifactor Authentication (MFA) registration by configuring a Conditional Access policy to require MFA registration no matter what modern authentication app you're signing in to.
-## What is the Azure AD Multi-Factor Authentication registration policy?
+## What is the Azure AD Multifactor Authentication registration policy?
-Azure AD Multi-Factor Authentication provides a means to verify who you are using more than just a username and password. It provides a second layer of security to user sign-ins. In order for users to be able to respond to MFA prompts, they must first register for Azure AD Multi-Factor Authentication.
+Azure AD Multifactor Authentication provides a means to verify who you are using more than just a username and password. It provides a second layer of security to user sign-ins. In order for users to be able to respond to MFA prompts, they must first register for Azure AD Multifactor Authentication.
-We recommend that you require Azure AD Multi-Factor Authentication for user sign-ins because it:
+We recommend that you require Azure AD Multifactor Authentication for user sign-ins because it:
- Delivers strong authentication through a range of verification options. - Plays a key role in preparing your organization to self-remediate from risk detections in Identity Protection.
-For more information on Azure AD Multi-Factor Authentication, see [What is Azure AD Multi-Factor Authentication?](../authentication/howto-mfa-getstarted.md)
+For more information on Azure AD Multifactor Authentication, see [What is Azure AD Multifactor Authentication?](../authentication/howto-mfa-getstarted.md)
## Policy configuration
For more information on Azure AD Multi-Factor Authentication, see [What is Azure
1. Under **Assignments** 1. **Users** - Choose **All users** or **Select individuals and groups** if limiting your rollout. 1. Optionally you can choose to exclude users from the policy.
- 1. **Enforce Policy** - **On**
- 1. **Save**
+1. **Enforce Policy** - **On**
+1. **Save**
## User experience
-Azure Active Directory Identity Protection will prompt your users to register the next time they sign in interactively and they will have 14 days to complete registration. During this 14-day period, they can bypass registration if MFA is not required as a condition, but at the end of the period they will be required to register before they can complete the sign-in process.
+Azure AD Identity Protection will prompt your users to register the next time they sign in interactively and they'll have 14 days to complete registration. During this 14-day period, they can bypass registration if MFA isn't required as a condition, but at the end of the period they'll be required to register before they can complete the sign-in process.
For an overview of the related user experience, see:
For an overview of the related user experience, see:
- [Enable Azure AD self-service password reset](../authentication/howto-sspr-deployment.md) -- [Enable Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
+- [Enable Azure AD Multifactor Authentication](../authentication/howto-mfa-getstarted.md)
active-directory Howto Identity Protection Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-notifications.md
Previously updated : 09/23/2021 Last updated : 08/22/2022
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Previously updated : 03/18/2022 Last updated : 08/23/2022
Before organizations enable remediation policies, they may want to [investigate]
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users or workload identities**..
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**.
Before organizations enable remediation policies, they may want to [investigate]
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users or workload identities**..
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**.
Before organizations enable remediation policies, they may want to [investigate]
## Next steps - [Enable Azure AD Multi-Factor Authentication registration policy](howto-identity-protection-configure-mfa-policy.md)- - [What is risk](concept-identity-protection-risks.md)- - [Investigate risk detections](howto-identity-protection-investigate-risk.md)- - [Simulate risk detections](howto-identity-protection-simulate-risk.md)
active-directory Howto Identity Protection Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-graph-api.md
Title: Microsoft Graph PowerShell SDK and Azure Active Directory Identity Protection
-description: Learn how to query Microsoft Graph risk detections and associated information from Azure Active Directory
+description: Query Microsoft Graph risk detections and associated information from Azure Active Directory
Previously updated : 01/25/2021 Last updated : 08/23/2022
Microsoft Graph is the Microsoft unified API endpoint and the home of [Azure Active Directory Identity Protection](./overview-identity-protection.md) APIs. This article will show you how to use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started) to get risky user details using PowerShell. Organizations that want to query the Microsoft Graph APIs directly can use the article, [Tutorial: Identify and remediate risks using Microsoft Graph APIs](/graph/tutorial-riskdetection-api) to begin that journey. - ## Connect to Microsoft Graph There are four steps to accessing Identity Protection data through Microsoft Graph: -- [Create a certificate](#create-a-certificate)-- [Create a new app registration](#create-a-new-app-registration)-- [Configure API permissions](#configure-api-permissions)-- [Configure a valid credential](#configure-a-valid-credential)
+1. [Create a certificate](#create-a-certificate)
+1. [Create a new app registration](#create-a-new-app-registration)
+1. [Configure API permissions](#configure-api-permissions)
+1. [Configure a valid credential](#configure-a-valid-credential)
### Create a certificate
-In a production environment you would use a certificate from your production Certificate Authority, but in this sample we will use a self-signed certificate. Create and export the certificate using the following PowerShell commands.
+In a production environment you would use a certificate from your production Certificate Authority, but in this sample we'll use a self-signed certificate. Create and export the certificate using the following PowerShell commands.
```powershell
$cert = New-SelfSignedCertificate -Subject "CN=MSGraph_ReportingAPI" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256
Export-Certificate -Cert $cert -FilePath "C:\Reporting\MSGraph_ReportingAPI.cer"
1. In the **Name** textbox, type a name for your application (for example: Azure AD Risk Detection API). 1. Under **Supported account types**, select the type of accounts that will use the APIs. 1. Select **Register**.
-1. Take note of the **Application (client) ID** and **Directory (tenant) ID** as you will need these items later.
+1. Take note of the **Application (client) ID** and **Directory (tenant) ID** as you'll need these items later.
### Configure API permissions
In this example, we configure application permissions allowing this sample to be
1. Under **certificates**, select **Upload certificate**. 1. Select the previously exported certificate from the window that opens. 1. Select **Add**.
-1. Take note of the **Thumbprint** of the certificate as you will need this information in the next step.
+1. Take note of the **Thumbprint** of the certificate as you'll need this information in the next step.
## List risky users using PowerShell
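The body of this section isn't shown in this excerpt, but a minimal sketch of what the app-only connection and query could look like follows. The client ID, tenant ID, and thumbprint are placeholders for the values recorded in the earlier steps, and the query assumes the application permission configured earlier allows reading risky users.

```powershell
# Minimal sketch: connect with the app registration's certificate and list high-risk users.
# The IDs and thumbprint below are placeholders for the values recorded earlier.
Connect-MgGraph -ClientId "<application-client-id>" `
                -TenantId "<directory-tenant-id>" `
                -CertificateThumbprint "<certificate-thumbprint>"

# List users currently at high risk.
Get-MgRiskyUser -Filter "riskLevel eq 'high'" |
    Select-Object UserDisplayName, UserPrincipalName, RiskLevel, RiskState, RiskLastUpdatedDateTime
```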
active-directory Howto Identity Protection Risk Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-risk-feedback.md
Previously updated : 06/05/2020 Last updated : 08/23/2022
Azure AD Identity Protection allows you to give feedback on its risk assessment.
An Identity Protection detection is an indicator of suspicious activity from an identity risk perspective. These suspicious activities are called risk detections. These identity-based detections can be based on heuristics, machine learning, or can come from partner products. These detections are used to determine sign-in risk and user risk:
-* Sign-in risk represents the probability a sign-in is compromised (for example, the sign-in is not authorized by the identity owner).
+* Sign-in risk represents the probability a sign-in is compromised (for example, the sign-in isn't authorized by the identity owner).
## Why should I give risk feedback to Azure AD's risk assessments?
Here are the scenarios and mechanisms to give risk feedback to Azure AD.
| Scenario | How to give feedback? | What happens under the hood? | Notes | | | | | |
-| **Sign-in not compromised (False positive)** <br> ΓÇÿRisky sign-insΓÇÖ report shows an at-risk sign-in [Risk state = At risk] but that sign-in was not compromised. | Select the sign-in and click on ΓÇÿConfirm sign-in safeΓÇÖ. | Azure AD will move the sign-inΓÇÖs aggregate risk to none [Risk state = Confirmed safe; Risk level (Aggregate) = -] and will reverse its impact on the user risk. | Currently, the ΓÇÿConfirm sign-in safeΓÇÖ option is only available in ΓÇÿRisky sign-insΓÇÖ report. |
-| **Sign-in compromised (True positive)** <br> ΓÇÿRisky sign-insΓÇÖ report shows an at-risk sign-in [Risk state = At risk] with low risk [Risk level (Aggregate) = Low] and that sign-in was indeed compromised. | Select the sign-in and click on ΓÇÿConfirm sign-in compromisedΓÇÖ. | Azure AD will move the sign-inΓÇÖs aggregate risk and the user risk to High [Risk state = Confirmed compromised; Risk level = High]. | Currently, the ΓÇÿConfirm sign-in compromisedΓÇÖ option is only available in ΓÇÿRisky sign-insΓÇÖ report. |
-| **User compromised (True positive)** <br> ΓÇÿRisky usersΓÇÖ report shows an at-risk user [Risk state = At risk] with low risk [Risk level = Low] and that user was indeed compromised. | Select the user and click on ΓÇÿConfirm user compromisedΓÇÖ. | Azure AD will move the user risk to High [Risk state = Confirmed compromised; Risk level = High] and will add a new detection ΓÇÿAdmin confirmed user compromisedΓÇÖ. | Currently, the ΓÇÿConfirm user compromisedΓÇÖ option is only available in ΓÇÿRisky usersΓÇÖ report. <br> The detection ΓÇÿAdmin confirmed user compromisedΓÇÖ is shown in the tab ΓÇÿRisk detections not linked to a sign-inΓÇÖ in the ΓÇÿRisky usersΓÇÖ report. |
-| **User remediated outside of Azure AD Identity Protection (True positive + Remediated)** <br> ΓÇÿRisky usersΓÇÖ report shows an at-risk user and I have subsequently remediated the user outside of Azure AD Identity Protection. | 1. Select the user and click ΓÇÿConfirm user compromisedΓÇÖ. (This process confirms to Azure AD that the user was indeed compromised.) <br> 2. Wait for the userΓÇÖs ΓÇÿRisk levelΓÇÖ to go to High. (This time gives Azure AD the needed time to take the above feedback to the risk engine.) <br> 3. Select the user and click ΓÇÿDismiss user riskΓÇÖ. (This process confirms to Azure AD that the user is no longer compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -] and closes the risk on all existing sign-ins having active risk. | Clicking ΓÇÿDismiss user riskΓÇÖ will close all risk on the user and past sign-ins. This action cannot be undone. |
-| **User not compromised (False positive)** <br> ΓÇÿRisky usersΓÇÖ report shows at at-risk user but the user is not compromised. | Select the user and click ΓÇÿDismiss user riskΓÇÖ. (This process confirms to Azure AD that the user is not compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -]. | Clicking ΓÇÿDismiss user riskΓÇÖ will close all risk on the user and past sign-ins. This action cannot be undone. |
-| I want to close the user risk but I am not sure whether the user is compromised / safe. | Select the user and click ΓÇÿDismiss user riskΓÇÖ. (This process confirms to Azure AD that the user is no longer compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -]. | Clicking ΓÇÿDismiss user riskΓÇÖ will close all risk on the user and past sign-ins. This action cannot be undone. We recommend you remediate the user by clicking on ΓÇÿReset passwordΓÇÖ or request the user to securely reset/change their credentials. |
+| **Sign-in not compromised (False positive)** <br> 'Risky sign-ins' report shows an at-risk sign-in [Risk state = At risk] but that sign-in wasn't compromised. | Select the sign-in and then 'Confirm sign-in safe'. | Azure AD will move the sign-in's aggregate risk to none [Risk state = Confirmed safe; Risk level (Aggregate) = -] and will reverse its impact on the user risk. | Currently, the 'Confirm sign-in safe' option is only available in 'Risky sign-ins' report. |
+| **Sign-in compromised (True positive)** <br> 'Risky sign-ins' report shows an at-risk sign-in [Risk state = At risk] with low risk [Risk level (Aggregate) = Low] and that sign-in was indeed compromised. | Select the sign-in and then 'Confirm sign-in compromised'. | Azure AD will move the sign-in's aggregate risk and the user risk to High [Risk state = Confirmed compromised; Risk level = High]. | Currently, the 'Confirm sign-in compromised' option is only available in 'Risky sign-ins' report. |
+| **User compromised (True positive)** <br> 'Risky users' report shows an at-risk user [Risk state = At risk] with low risk [Risk level = Low] and that user was indeed compromised. | Select the user and then 'Confirm user compromised'. | Azure AD will move the user risk to High [Risk state = Confirmed compromised; Risk level = High] and will add a new detection 'Admin confirmed user compromised'. | Currently, the 'Confirm user compromised' option is only available in 'Risky users' report. <br> The detection 'Admin confirmed user compromised' is shown in the tab 'Risk detections not linked to a sign-in' in the 'Risky users' report. |
+| **User remediated outside of Azure AD Identity Protection (True positive + Remediated)** <br> 'Risky users' report shows an at-risk user and I've then remediated the user outside of Azure AD Identity Protection. | 1. Select the user and then 'Confirm user compromised'. (This process confirms to Azure AD that the user was indeed compromised.) <br> 2. Wait for the user's 'Risk level' to go to High. (This time gives Azure AD the needed time to take the above feedback to the risk engine.) <br> 3. Select the user and then 'Dismiss user risk'. (This process confirms to Azure AD that the user is no longer compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -] and closes the risk on all existing sign-ins having active risk. | Clicking 'Dismiss user risk' will close all risk on the user and past sign-ins. This action can't be undone. |
+| **User not compromised (False positive)** <br> 'Risky users' report shows an at-risk user but the user isn't compromised. | Select the user and then 'Dismiss user risk'. (This process confirms to Azure AD that the user isn't compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -]. | Clicking 'Dismiss user risk' will close all risk on the user and past sign-ins. This action can't be undone. |
+| I want to close the user risk but I'm not sure whether the user is compromised / safe. | Select the user and then 'Dismiss user risk'. (This process confirms to Azure AD that the user is no longer compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -]. | Clicking 'Dismiss user risk' will close all risk on the user and past sign-ins. This action can't be undone. We recommend you remediate the user by clicking on 'Reset password' or request the user to securely reset/change their credentials. |
Feedback on user risk detections in Identity Protection is processed offline and may take some time to update. The risk processing state column will provide the current state of feedback processing.
active-directory Howto Identity Protection Simulate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md
Previously updated : 06/05/2020 Last updated : 08/22/2022
This article provides you with steps for simulating the following risk detection
- Atypical travel (difficult) - Leaked credentials in GitHub for workload identities (moderate)
-Other risk detections cannot be simulated in a secure manner.
+Other risk detections can't be simulated in a secure manner.
More information about each risk detection can be found in the article, What is risk for [user](concept-identity-protection-risks.md) and [workload identity](concept-workload-identity-risk.md).
More information about each risk detection can be found in the article, What is
Completing the following procedure requires you to use: - The [Tor Browser](https://www.torproject.org/projects/torbrowser.html.en) to simulate anonymous IP addresses. You might need to use a virtual machine if your organization restricts using the Tor browser.-- A test account that is not yet registered for Azure AD Multi-Factor Authentication.
+- A test account that isn't yet registered for Azure AD Multi-Factor Authentication.
**To simulate a sign-in from an anonymous IP, perform the following steps**:
The sign-in shows up on the Identity Protection dashboard within 10 - 15 minutes
## Unfamiliar sign-in properties
-To simulate unfamiliar locations, you have to sign in from a location and device your test account has not signed in from before.
+To simulate unfamiliar locations, you have to sign in from a location and device your test account hasn't signed in from before.
The procedure below uses a newly created:
The sign-in shows up on the Identity Protection dashboard within 10 - 15 minutes
## Atypical travel
-Simulating the atypical travel condition is difficult because the algorithm uses machine learning to weed out false-positives such as atypical travel from familiar devices, or sign-ins from VPNs that are used by other users in the directory. Additionally, the algorithm requires a sign-in history of 14 days and 10 logins of the user before it begins generating risk detections. Because of the complex machine learning models and above rules, there is a chance that the following steps will not lead to a risk detection. You might want to replicate these steps for multiple Azure AD accounts to simulate this detection.
+Simulating the atypical travel condition is difficult because the algorithm uses machine learning to weed out false-positives such as atypical travel from familiar devices, or sign-ins from VPNs that are used by other users in the directory. Additionally, the algorithm requires a sign-in history of 14 days and 10 logins of the user before it begins generating risk detections. Because of the complex machine learning models and above rules, there's a chance that the following steps won't lead to a risk detection. You might want to replicate these steps for multiple Azure AD accounts to simulate this detection.
**To simulate an atypical travel risk detection, perform the following steps**:
This risk detection indicates that the application's valid credentials have been
**To simulate Leaked Credentials in GitHub for Workload Identities, perform the following steps**: 1. Navigate to the [Azure portal](https://portal.azure.com). 2. Browse to **Azure Active Directory** > **App registrations**.
-3. Select **New registration** to register a new application or reuse an exsiting stale application.
-4. Select **Certificates & Secrets** > **New client Secret** , add a description of your client secret and set an expiration for the secret or specify a custom lifetime and click **Add**. Record the secret's value for later use for your GitHub Commit.
+3. Select **New registration** to register a new application or reuse an existing stale application.
+4. Select **Certificates & secrets** > **New client secret**, add a description of your client secret, set an expiration for the secret or specify a custom lifetime, and select **Add**. Record the secret's value for later use in your GitHub commit.
> [!NOTE] > **You can't retrieve the secret again after you leave this page**.
This risk detection indicates that the application's valid credentials have been
"AadTenantDomain": "XXXX.onmicrosoft.com", "AadTenantId": "99d4947b-XXX-XXXX-9ace-abceab54bcd4", ```
-7. In about 8 hours, you will be able to view a leaked credentail detection under **Azure Active Directory** > **Security** > **Risk Detection** > **Workload identity detections** where the additional info will contain your the URL of your GitHub commit.
+7. In about 8 hours, you'll be able to view a leaked credential detection under **Azure Active Directory** > **Security** > **Risk Detection** > **Workload identity detections** where the additional info will contain the URL of your GitHub commit.
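If you'd rather script the app registration and client secret (steps 3 and 4 above) than use the portal, a hedged Microsoft Graph PowerShell sketch is below. The application display name and secret description are made up, and it assumes the Microsoft.Graph.Applications module and Application.ReadWrite.All consent.

```powershell
# Minimal sketch: register a throwaway application and add a client secret for the simulation.
# Assumes Microsoft.Graph.Applications and Application.ReadWrite.All consent.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$app = New-MgApplication -DisplayName "Leaked Credential Simulation"   # placeholder name

# Add a client secret; capture SecretText now, because it can't be retrieved again later.
$secret = Add-MgApplicationPassword -ApplicationId $app.Id -PasswordCredential @{
    DisplayName = "simulation-secret"
}
$secret.SecretText
```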
## Testing risk policies
To test a user risk security policy, perform the following steps:
### Sign-in risk security policy
-To test a sign in risk policy, perform the following steps:
+To test a sign-in risk policy, perform the following steps:
1. Navigate to the [Azure portal](https://portal.azure.com). 1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**.
active-directory Azure Pim Resource Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-pim-resource-rbac.md
You may have a compliance requirement where you must provide a complete list of
1. Select the resource you want to export role assignments for, such as a subscription.
-1. Select **Members**.
+1. Select **Assignments**.
1. Select **Export** to open the Export membership pane.
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md
Follow these steps to open the settings for an Azure privileged access group rol
1. Open **Azure AD Privileged Identity Management**. 1. Select **Privileged access (Preview)**.
+ >[!NOTE]
+ > The approver doesn't have to be a member of the group, an owner of the group, or have an Azure AD role assigned.
1. Select the group that you want to manage.
active-directory Pim Resource Roles Configure Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md
Follow these steps to open the settings for an Azure resource role.
1. Open **Azure AD Privileged Identity Management**. 1. Select **Azure resources**.
+ >[!NOTE]
+ > The approver doesn't have to have any Azure or Azure AD role assigned.
1. Select the resource you want to manage, such as a subscription or management group.
active-directory Ideagen Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideagen-cloud-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Ideagen Cloud](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Ideagen Cloud to support provisioning with Azure AD
-1. Login to [Ideagen Home](https://cktenant-homev2-scimtest1.ideagenhomedev.com). Click on the **Administration** icon to show the left hand side menu.
+1. Log in to Ideagen. Click the **Administration** icon to show the left-hand side menu.
![Screenshot of administration menu.](media\ideagen-cloud-provisioning-tutorial\admin.png)
advisor Advisor Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-performance-recommendations.md
Azure Premium Storage delivers high-performance, low-latency disk support for vi
## Remove data skew on your Azure Synapse Analytics tables to increase query performance
-Data skew can cause unnecessary data movement or resource bottlenecks when you run your workload. Advisor detects distribution data skew of greater than 15%. It recommends that you redistribute your data and revisit your table distribution key selections. To learn more about identifying and removing skew, see [troubleshooting skew](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md#how-to-tell-if-your-distribution-column-is-a-good-choice).
+Data skew can cause unnecessary data movement or resource bottlenecks when you run your workload. Advisor detects distribution data skew of greater than 15%. It recommends that you redistribute your data and revisit your table distribution key selections. To learn more about identifying and removing skew, see [troubleshooting skew](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md#how-to-tell-if-your-distribution-is-a-good-choice).
## Create or update outdated table statistics in your Azure Synapse Analytics tables to increase query performance
Learn more about [Azure Communication Services](../communication-services/overvi
1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard).
-2. On the Advisor dashboard, select the **Performance** tab.
+2. On the Advisor dashboard, select the **Performance** tab.
## Next steps
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Microsoft provides technical support for the following examples:
* Connectivity to other Azure services and applications * Ingress controllers and ingress or load balancer configurations * Network performance and latency
- * [Network policies](use-network-policies.md#differences-between-azure-and-calico-policies-and-their-capabilities)
-
+ * [Network policies](use-network-policies.md#differences-between-azure-npm-and-calico-network-policy-and-their-capabilities)
> [!NOTE] > Any cluster actions taken by Microsoft/AKS are made with user consent under a built-in Kubernetes role `aks-service` and built-in role binding `aks-service-rolebinding`. This role enables AKS to troubleshoot and diagnose cluster issues, but can't modify permissions nor create roles or role bindings, or other high privilege actions. Role access is only enabled under active support tickets with just-in-time (JIT) access.
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
Last updated 06/24/2022
When you run modern, microservices-based applications in Kubernetes, you often want to control which components can communicate with each other. The principle of least privilege should be applied to how traffic can flow between pods in an Azure Kubernetes Service (AKS) cluster. For example, you likely want to block traffic directly to back-end applications. The *Network Policy* feature in Kubernetes lets you define rules for ingress and egress traffic between pods in a cluster.
-This article shows you how to install the network policy engine and create Kubernetes network policies to control the flow of traffic between pods in AKS. Network policy should only be used for Linux-based nodes and pods in AKS.
+This article shows you how to install the Network Policy engine and create Kubernetes network policies to control the flow of traffic between pods in AKS. Network Policy can be used for Linux-based or Windows-based nodes and pods in AKS.
## Before you begin You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-## Overview of network policy
+## Overview of Network Policy
All pods in an AKS cluster can send and receive traffic without limitations, by default. To improve security, you can define rules that control the flow of traffic. Back-end applications are often only exposed to required front-end services, for example. Or, database components are only accessible to the application tiers that connect to them.
-Network Policy is a Kubernetes specification that defines access policies for communication between Pods. Using Network Policies, you define an ordered set of rules to send and receive traffic and apply them to a collection of pods that match one or more label selectors.
+Network Policy is a Kubernetes specification that defines access policies for communication between Pods. Using network policies, you define an ordered set of rules to send and receive traffic and apply them to a collection of pods that match one or more label selectors.
-These network policy rules are defined as YAML manifests. Network policies can be included as part of a wider manifest that also creates a deployment or service.
+These Network Policy rules are defined as YAML manifests. Network policies can be included as part of a wider manifest that also creates a deployment or service.
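To make the manifest idea concrete, here's a small hedged sketch: a deny-all-ingress policy for pods labeled `app: backend`, applied from PowerShell by piping the YAML to `kubectl`. The policy name, namespace, and label are placeholders, and it assumes `kubectl` is installed and the current context points at your cluster.

```powershell
# Minimal sketch: deny all ingress traffic to pods labeled app=backend in the default namespace.
# Assumes kubectl is installed and the current context points at your AKS cluster.
$manifest = @"
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-deny-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
"@

# Pipe the manifest to kubectl so no separate file is needed.
$manifest | kubectl apply -f -
```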
-### Network policy options in AKS
+## Network policy options in AKS
-Azure provides two ways to implement network policy. You choose a network policy option when you create an AKS cluster. The policy option can't be changed after the cluster is created:
+Azure provides two ways to implement Network Policy. You choose a Network Policy option when you create an AKS cluster. The policy option can't be changed after the cluster is created:
-* Azure's own implementation, called *Azure Network Policies*.
+* Azure's own implementation, called *Azure Network Policy Manager (NPM)*.
* *Calico Network Policies*, an open-source network and network security solution founded by [Tigera][tigera].
-Both implementations use Linux *IPTables* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable filter rules.
+Azure NPM for Linux uses Linux *IPTables* and Azure NPM for Windows uses *Host Network Service (HNS) ACLPolicies* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable/HNS ACLPolicy filter rules.
-### Differences between Azure and Calico policies and their capabilities
+## Differences between Azure NPM and Calico Network Policy and their capabilities
-| Capability | Azure | Calico |
+| Capability | Azure NPM | Calico Network Policy |
||-|--|
-| Supported platforms | Linux | Linux, Windows Server 2019 and 2022 |
+| Supported platforms | Linux, Windows Server 2022 | Linux, Windows Server 2019 and 2022 |
| Supported networking options | Azure CNI | Azure CNI (Linux, Windows Server 2019 and 2022) and kubenet (Linux) | | Compliance with Kubernetes specification | All policy types supported | All policy types supported | | Additional features | None | Extended policy model consisting of Global Network Policy, Global Network Set, and Host Endpoint. For more information on using the `calicoctl` CLI to manage these extended features, see [calicoctl user reference][calicoctl]. | | Support | Supported by Azure support and Engineering team | Calico community support. For more information on additional paid support, see [Project Calico support options][calico-support]. |
-| Logging | Rules added / deleted in IPTables are logged on every host under */var/log/azure-npm.log* | For more information, see [Calico component logs][calico-logs] |
+| Logging | Logs are available with the **kubectl logs -n kube-system <network-policy-pod>** command | For more information, see [Calico component logs][calico-logs] |
-## Create an AKS cluster and enable network policy
+## Limitations
-To see network policies in action, let's create and then expand on a policy that defines traffic flow:
+Azure Network Policy Manager (NPM) does not support IPv6. Otherwise, Azure NPM fully supports the network policy spec in Linux.
+* In Windows, Azure NPM does not support the following:
+ * named ports
+ * SCTP protocol
+ * negative match label or namespace selectors (e.g. all labels except "debug=true")
+ * "except" CIDR blocks (a CIDR with exceptions)
-* Deny all traffic to pod.
-* Allow traffic based on pod labels.
-* Allow traffic based on namespace.
+>[!NOTE]
+> * Azure NPM pod logs will record an error if an unsupported policy is created.
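For example, the following sketch (the policy name, namespace, labels, and CIDR ranges are hypothetical) uses an "except" CIDR block, one of the constructs listed above; Azure NPM on Linux can enforce it, but on Windows the policy isn't supported and the NPM pod logs would record an error:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-subnet-with-exception   # hypothetical name
  namespace: demo                     # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16
        except:
        - 10.0.1.0/24                 # an "except" CIDR block; not supported by Azure NPM on Windows
```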
-First, let's create an AKS cluster that supports network policy.
+## Create an AKS cluster and enable Network Policy
+
+To see network policies in action, let's create an AKS cluster that supports network policy and then work on adding policies.
> [!IMPORTANT]
>
> The network policy feature can only be enabled when the cluster is created. You can't enable network policy on an existing AKS cluster.
-To use Azure Network Policy, you must use the [Azure CNI plug-in][azure-cni]. Calico Network Policy could be used with either this same Azure CNI plug-in or with the Kubenet CNI plug-in.
+To use Azure NPM, you must use the [Azure CNI plug-in][azure-cni]. Calico Network Policy could be used with either this same Azure CNI plug-in or with the Kubenet CNI plug-in.
The following example script:
-* Creates an AKS cluster with system-assigned identity and enables network policy.
- * The _Azure Network_ policy option is used. To use Calico as the network policy option instead, use the `--network-policy calico` parameter. Note: Calico could be used with either `--network-plugin azure` or `--network-plugin kubenet`.
+* Creates an AKS cluster with system-assigned identity and enables Network Policy.
+ * The _Azure NPM_ option is used. To use Calico as the Network Policy option instead, use the `--network-policy calico` parameter. Note: Calico could be used with either `--network-plugin azure` or `--network-plugin kubenet`.
Instead of using a system-assigned identity, you can also use a user-assigned identity. For more information, see [Use managed identities](use-managed-identity.md).
-### Create an AKS cluster for Azure network policies
+### Create an AKS cluster with Azure NPM enabled - Linux only
+
+In this section, we'll create a cluster with Linux node pools and Azure NPM enabled.
-You can replace the *RESOURCE_GROUP_NAME* and *CLUSTER_NAME* variables:
+To begin, replace the values of the *$RESOURCE_GROUP_NAME* and *$CLUSTER_NAME* variables as needed.
```azurecli-interactive
-RESOURCE_GROUP_NAME=myResourceGroup-NP
-CLUSTER_NAME=myAKSCluster
-LOCATION=canadaeast
+RESOURCE_GROUP_NAME=myResourceGroup-NP
+CLUSTER_NAME=myAKSCluster
+LOCATION=canadaeast
+```
-Create the AKS cluster and specify *azure* for the network plugin and network policy.
+Create the AKS cluster and specify *azure* for the `network-plugin` and `network-policy`.
+Use the following command to create a cluster:
```azurecli
az aks create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME \
    --network-plugin azure \
    --network-policy azure
```
-It takes a few minutes to create the cluster. When the cluster is ready, configure `kubectl` to connect to your Kubernetes cluster by using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them:
+### Create an AKS cluster with Azure NPM enabled - Windows Server 2022 (Preview)
-```azurecli-interactive
-az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME
-```
+In this section, we'll create a cluster with Windows node pools and Azure NPM enabled.
-### Create an AKS cluster for Calico network policies
+Run the following commands before creating the cluster:
-Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the network policy. Using *calico* as the network policy enables Calico networking on both Linux and Windows node pools.
-
-If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with that meet the [Windows Server password requirements][windows-server-password].
+```azurecli
+ az extension add --name aks-preview
+ az extension update --name aks-preview
+ az feature register --namespace Microsoft.ContainerService --name AKSWindows2022Preview
+ az feature register --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview
+ az provider register -n Microsoft.ContainerService
+```
-> [!IMPORTANT]
-> At this time, using Calico network policies with Windows nodes is available on new clusters using Kubernetes version 1.20 or later with Calico 3.17.2 and requires using Azure CNI networking. Windows nodes on AKS clusters with Calico enabled also have [Direct Server Return (DSR)][dsr] enabled by default.
+> [!NOTE]
+> At this time, Azure NPM with Windows nodes is available on Windows Server 2022 only
>
-> For clusters with only Linux node pools running Kubernetes 1.20 with earlier versions of Calico, the Calico version will automatically be upgraded to 3.17.2.
-Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following commands prompt you for a username and set it WINDOWS_USERNAME for use in a later command (remember that the commands in this article are entered into a BASH shell).
+Now, replace the values of the *$RESOURCE_GROUP_NAME*, *$CLUSTER_NAME*, and *$WINDOWS_USERNAME* variables as needed.
+
+```azurecli-interactive
+RESOURCE_GROUP_NAME=myResourceGroup-NP
+CLUSTER_NAME=myAKSCluster
+WINDOWS_USERNAME=myWindowsUserName
+LOCATION=canadaeast
+```
+
+Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following command prompts you for a username and sets it as `$WINDOWS_USERNAME` (remember that the commands in this article are entered into a BASH shell).
```azurecli-interactive
echo "Please enter the username to use as administrator credentials for Windows Server containers on your cluster: " && read WINDOWS_USERNAME
```
+Use the following command to create a cluster:
```azurecli
az aks create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME \
    --node-count 1 \
    --windows-admin-username $WINDOWS_USERNAME \
    --network-plugin azure \
- --network-policy calico
+ --network-policy azure
```

It takes a few minutes to create the cluster. By default, your cluster is created with only a Linux node pool. If you would like to use Windows node pools, you can add one. For example:

```azurecli
az aks nodepool add \
    --resource-group $RESOURCE_GROUP_NAME \
    --cluster-name $CLUSTER_NAME \
    --os-type Windows \
    --name npwin \
    --node-count 1
```
-When the cluster is ready, configure `kubectl` to connect to your Kubernetes cluster by using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them:
-
-```azurecli-interactive
-az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME
-```
-
-## Deny all inbound traffic to a pod
-
-Before you define rules to allow specific network traffic, first create a network policy to deny all traffic. This policy gives you a starting point to begin to create an allowlist for only the desired traffic. You can also clearly see that traffic is dropped when the network policy is applied.
-
-For the sample application environment and traffic rules, let's first create a namespace called *development* to run the example pods:
-
-```console
-kubectl create namespace development
-kubectl label namespace/development purpose=development
-```
-
-Create an example back-end pod that runs NGINX. This back-end pod can be used to simulate a sample back-end web-based application. Create this pod in the *development* namespace, and open port *80* to serve web traffic. Label the pod with *app=webapp,role=backend* so that we can target it with a network policy in the next section:
-
-```console
-kubectl run backend --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --labels app=webapp,role=backend --namespace development --expose --port 80
-```
-
-Create another pod and attach a terminal session to test that you can successfully reach the default NGINX webpage:
-
-```console
-kubectl run --rm -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 network-policy --namespace development
-```
-
-Install `wget`:
-
-```console
-apt-get update && apt-get install -y wget
-```
-
-At the shell prompt, use `wget` to confirm that you can access the default NGINX webpage:
-
-```console
-wget -qO- http://backend
-```
-
-The following sample output shows that the default NGINX webpage returned:
-```output
-<!DOCTYPE html>
-<html>
-<head>
-<title>Welcome to nginx!</title>
-[...]
-```
-
-Exit out of the attached terminal session. The test pod is automatically deleted.
-
-```console
-exit
-```
-
-### Create and apply a network policy
-
-Now that you've confirmed you can use the basic NGINX webpage on the sample back-end pod, create a network policy to deny all traffic. Create a file named `backend-policy.yaml` and paste the following YAML manifest. This manifest uses a *podSelector* to attach the policy to pods that have the *app:webapp,role:backend* label, like your sample NGINX pod. No rules are defined under *ingress*, so all inbound traffic to the pod is denied:
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
- name: backend-policy
- namespace: development
-spec:
- podSelector:
- matchLabels:
- app: webapp
- role: backend
- ingress: []
-```
-
-Go to [https://shell.azure.com](https://shell.azure.com) to open Azure Cloud Shell in your browser.
-
-Apply the network policy by using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
-```console
-kubectl apply -f backend-policy.yaml
-```
-
-### Test the network policy
-
-Let's see if you can use the NGINX webpage on the back-end pod again. Create another test pod and attach a terminal session:
-
-```console
-kubectl run --rm -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 network-policy --namespace development
-```
-
-Install `wget`:
-
-```console
-apt-get update && apt-get install -y wget
-```
+### Create an AKS cluster for Calico network policies
-At the shell prompt, use `wget` to see if you can access the default NGINX webpage. This time, set a timeout value to *2* seconds. The network policy now blocks all inbound traffic, so the page can't be loaded, as shown in the following example:
+Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the Network Policy. Using *calico* as the Network Policy enables Calico networking on both Linux and Windows node pools.
-```console
-wget -O- --timeout=2 --tries=1 http://backend
-```
+If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with values that meet the [Windows Server password requirements][windows-server-password].
-```output
-wget: download timed out
-```
+> [!IMPORTANT]
+> At this time, using Calico network policies with Windows nodes is available on new clusters using Kubernetes version 1.20 or later with Calico 3.17.2 and requires using Azure CNI networking. Windows nodes on AKS clusters with Calico enabled also have [Direct Server Return (DSR)][dsr] enabled by default.
+>
+> For clusters with only Linux node pools running Kubernetes 1.20 with earlier versions of Calico, the Calico version will automatically be upgraded to 3.17.2.
-Exit out of the attached terminal session. The test pod is automatically deleted.
+Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following command prompts you for a username and sets it as `$WINDOWS_USERNAME` (remember that the commands in this article are entered into a BASH shell).
-```console
-exit
+```azurecli-interactive
+echo "Please enter the username to use as administrator credentials for Windows Server containers on your cluster: " && read WINDOWS_USERNAME
```
-## Allow inbound traffic based on a pod label
-
-In the previous section, a back-end NGINX pod was scheduled, and a network policy was created to deny all traffic. Let's create a front-end pod and update the network policy to allow traffic from front-end pods.
-
-Update the network policy to allow traffic from pods with the labels *app:webapp,role:frontend* and in any namespace. Edit the previous *backend-policy.yaml* file, and add *matchLabels* ingress rules so that your manifest looks like the following example:
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
- name: backend-policy
- namespace: development
-spec:
- podSelector:
- matchLabels:
- app: webapp
- role: backend
- ingress:
- - from:
- - namespaceSelector: {}
- podSelector:
- matchLabels:
- app: webapp
- role: frontend
+```azurecli
+az aks create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $CLUSTER_NAME \
+ --node-count 1 \
+ --windows-admin-username $WINDOWS_USERNAME \
+ --network-plugin azure \
+ --network-policy calico
```
-> [!NOTE]
-> This network policy uses a *namespaceSelector* and a *podSelector* element for the ingress rule. The YAML syntax is important for the ingress rules to be additive. In this example, both elements must match for the ingress rule to be applied. Kubernetes versions prior to *1.12* might not interpret these elements correctly and restrict the network traffic as you expect. For more about this behavior, see [Behavior of to and from selectors][policy-rules].
-
-Apply the updated network policy by using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+It takes a few minutes to create the cluster. By default, your cluster is created with only a Linux node pool. If you would like to use Windows node pools, you can add one. For example:
-```console
-kubectl apply -f backend-policy.yaml
+```azurecli
+az aks nodepool add \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --cluster-name $CLUSTER_NAME \
+ --os-type Windows \
+ --name npwin \
+ --node-count 1
```
-Schedule a pod that is labeled as *app=webapp,role=frontend* and attach a terminal session:
+## Verify Network Policy setup
-```console
-kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace development
-```
-
-Install `wget`:
+When the cluster is ready, configure `kubectl` to connect to your Kubernetes cluster by using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them:
-```console
-apt-get update && apt-get install -y wget
+```azurecli-interactive
+az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME
```
+To verify the Network Policy setup, we'll create a sample application and set traffic rules.
-At the shell prompt, use `wget` to see if you can access the default NGINX webpage:
+First, let's create a namespace called *demo* to run the example pods:
```console
-wget -qO- http://backend
+kubectl create namespace demo
```
-Because the ingress rule allows traffic with pods that have the labels *app: webapp,role: frontend*, the traffic from the front-end pod is allowed. The following example output shows the default NGINX webpage returned:
+We will now create two pods in the cluster named *client* and *server*.
-```output
-<!DOCTYPE html>
-<html>
-<head>
-<title>Welcome to nginx!</title>
-[...]
-```
+>[!NOTE]
> If you want to schedule the *client* or *server* pod on a particular node, add the following before the *--command* argument in the pod creation [kubectl run][kubectl-run] command:
-Exit out of the attached terminal session. The pod is automatically deleted.
+> ```console
+> --overrides='{"spec": { "nodeSelector": {"kubernetes.io/os": "linux|windows"}}}'
+> ```
-```console
-exit
-```
-
-### Test a pod without a matching label
-
-The network policy allows traffic from pods labeled *app: webapp,role: frontend*, but should deny all other traffic. Let's test to see whether another pod without those labels can access the back-end NGINX pod. Create another test pod and attach a terminal session:
+Create a *server* pod. This pod will serve on TCP port 80:
```console
-kubectl run --rm -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 network-policy --namespace development
+kubectl run server -n demo --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --labels="app=server" --port=80 --command -- /agnhost serve-hostname --tcp --http=false --port "80"
```
-Install `wget`:
+Create a *client* pod. The following command runs Bash on the *client* pod:
```console
-apt-get update && apt-get install -y wget
+kubectl run -it client -n demo --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --command -- bash
```
-At the shell prompt, use `wget` to see if you can access the default NGINX webpage. The network policy blocks the inbound traffic, so the page can't be loaded, as shown in the following example:
-
+Now, in a separate window, run the following command to get the server IP:
```console
-wget -O- --timeout=2 --tries=1 http://backend
+kubectl get pod -n demo --output=wide
```
+The output should look like:
```output
-wget: download timed out
-```
-
-Exit out of the attached terminal session. The test pod is automatically deleted.
-
-```console
-exit
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+server 1/1 Running 0 30s 10.224.0.72 akswin22000001 <none> <none>
```
-## Allow traffic only from within a defined namespace
-
-In the previous examples, you created a network policy that denied all traffic, and then updated the policy to allow traffic from pods with a specific label. Another common need is to limit traffic to only within a given namespace. If the previous examples were for traffic in a *development* namespace, create a network policy that prevents traffic from another namespace, such as *production*, from reaching the pods.
+### Test Connectivity without Network Policy
-First, create a new namespace to simulate a production namespace:
+In the client's shell, verify connectivity with the server by executing the following command. Replace *server-ip* with the IP address found in the output of the previous command. If the connection is successful, there's no output:
```console
-kubectl create namespace production
-kubectl label namespace/production purpose=production
+/agnhost connect <server-ip>:80 --timeout=3s --protocol=tcp
```
-Schedule a test pod in the *production* namespace that is labeled as *app=webapp,role=frontend*. Attach a terminal session:
-
-```console
-kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace production
-```
+### Test Connectivity with Network Policy
-Install `wget`:
-
-```console
-apt-get update && apt-get install -y wget
-```
-
-At the shell prompt, use `wget` to confirm that you can access the default NGINX webpage:
-
-```console
-wget -qO- http://backend.development
-```
-
-Because the labels for the pod match what is currently permitted in the network policy, the traffic is allowed. The network policy doesn't look at the namespaces, only the pod labels. The following example output shows the default NGINX webpage returned:
-
-```output
-<!DOCTYPE html>
-<html>
-<head>
-<title>Welcome to nginx!</title>
-[...]
-```
-
-Exit out of the attached terminal session. The test pod is automatically deleted.
-
-```console
-exit
-```
-
-### Update the network policy
-
-Let's update the ingress rule *namespaceSelector* section to only allow traffic from within the *development* namespace. Edit the *backend-policy.yaml* manifest file as shown in the following example:
+Create a file named *demo-policy.yaml* and paste the following YAML manifest to add a network policy:
```yaml
-kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
metadata:
- name: backend-policy
- namespace: development
+ name: demo-policy
+ namespace: demo
spec:
  podSelector:
    matchLabels:
- app: webapp
- role: backend
+ app: server
  ingress:
  - from:
- - namespaceSelector:
- matchLabels:
- purpose: development
- podSelector:
+ - podSelector:
matchLabels:
- app: webapp
- role: frontend
+ app: client
+ ports:
+ - port: 80
+ protocol: TCP
```
-In more complex examples, you could define multiple ingress rules, like a *namespaceSelector* and then a *podSelector*.
-
-Apply the updated network policy by using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
-```console
-kubectl apply -f backend-policy.yaml
-```
-
-### Test the updated network policy
-
-Schedule another pod in the *production* namespace and attach a terminal session:
-
-```console
-kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace production
-```
-
-Install `wget`:
-
-```console
-apt-get update && apt-get install -y wget
-```
-
-At the shell prompt, use `wget` to see that the network policy now denies traffic:
-
-```console
-wget -O- --timeout=2 --tries=1 http://backend.development
-```
-
-```output
-wget: download timed out
-```
-
-Exit out of the test pod:
-
-```console
-exit
-```
-
-With traffic denied from the *production* namespace, schedule a test pod back in the *development* namespace and attach a terminal session:
-
-```console
-kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace development
-```
-
-Install `wget`:
+Specify the name of your YAML manifest and apply it using [kubectl apply][kubectl-apply]:
```console
-apt-get update && apt-get install -y wget
+kubectl apply -f demo-policy.yaml
```
-At the shell prompt, use `wget` to see that the network policy allows the traffic:
+Now, in the client's shell, verify connectivity with the server by executing the following `/agnhost` command:
```console
-wget -qO- http://backend
+/agnhost connect <server-ip>:80 --timeout=3s --protocol=tcp
```
-Traffic is allowed because the pod is scheduled in the namespace that matches what's permitted in the network policy. The following sample output shows the default NGINX webpage returned:
+Traffic will be blocked because the server pod is labeled with *app=server*, but the client pod doesn't carry the *app=client* label required by the policy. The connect command above will yield this output:
```output
-<!DOCTYPE html>
-<html>
-<head>
-<title>Welcome to nginx!</title>
-[...]
+TIMEOUT
```
-Exit out of the attached terminal session. The test pod is automatically deleted.
+Run the following command to label the *client* pod, and then verify connectivity with the server again (the connect command should return no output).
```console
-exit
+kubectl label pod client -n demo app=client
```

## Clean up resources
-In this article, we created two namespaces and applied a network policy. To clean up these resources, use the [kubectl delete][kubectl-delete] command and specify the resource names:
+In this article, we created a namespace and two pods and applied a Network Policy. To clean up these resources, use the [kubectl delete][kubectl-delete] command and specify the resource name:
```console
-kubectl delete namespace production
-kubectl delete namespace development
+kubectl delete namespace demo
```

## Next steps
To learn more about policies, see [Kubernetes network policies][kubernetes-network-policies].
<!-- LINKS - external -->
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
+[kubectl-run]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
[kubernetes-network-policies]: https://kubernetes.io/docs/concepts/services-networking/network-policies/
[azure-cni]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
[policy-rules]: https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors
api-management Api Management Transformation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-transformation-policies.md
or
```

> [!NOTE]
-> Backend entities can be managed via [Azure portal](how-to-configure-service-fabric-backend.md), management [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement).
+> Backend entities can be managed via [Azure portal](how-to-configure-service-fabric-backend.md), management [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement). Currently, if you define a base `set-backend-service` policy using the `backend-id` attribute and inherit the base policy using `<base />` within the scope, then it can be only overridden with a policy using the `backend-id` attribute, not the `base-url` attribute.
### Example
OriginalUrl.
- **Policy scopes:** all scopes
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md
description: Connect privately to a Web App using Azure Private Endpoint
ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 03/04/2022 Last updated : 08/23/2022
From a security perspective:
- By default, when you enable Private Endpoints to your Web App, you disable all public access.
- You can enable multiple Private Endpoints in other VNets and Subnets, including VNets in other regions.
- The IP address of the Private Endpoint NIC must be dynamic, but will remain the same until you delete the Private Endpoint.
-- The NIC of the Private Endpoint can't have an NSG associated.
- The Subnet that hosts the Private Endpoint can have an NSG associated, but you must disable the network policies enforcement for the Private Endpoint: see [Disable network policies for private endpoints][disablesecuritype]. As a result, you can't filter access to your Private Endpoint with any NSG.
- By default, when you enable Private Endpoint to your Web App, the [access restrictions][accessrestrictions] configuration of the Web App isn't evaluated.
- You can eliminate the data exfiltration risk from the VNet by removing all NSG rules whose destination is the Internet or Azure services tag. When you deploy a Private Endpoint for a Web App, you can only reach this specific Web App through the Private Endpoint. If you have another Web App, you must deploy another dedicated Private Endpoint for this other Web App.
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure' description: Get started with Azure App Service by deploying your first Python app to Azure App Service. Previously updated : 03/22/2022 Last updated : 08/23/2022 ms.devlang: python
To complete this quickstart, you need:
1. An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). 1. <a href="https://www.python.org/downloads/" target="_blank">Python 3.9 or higher</a> installed locally.
+>**Note**: This article contains current instructions on deploying a Python web app using Azure App Service. Python on Windows is no longer supported.
+
## 1 - Sample application

This quickstart can be completed using either Flask or Django. A sample application in each framework is provided to help you follow along with this quickstart. Download or clone the sample application to your local workstation.
automation Disable Local Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/disable-local-authentication.md
Disabling local authentication doesn't take effect immediately. Allow a few minu
>[!NOTE]
> Currently, PowerShell support for the new API version (2021-06-22) or the `DisableLocalAuth` flag is not available. However, you can use the REST API with this API version to update the flag.
-To allow list and enroll your subscription for this feature in your respective regions, follow the steps in [how to create an Azure support request - Azure supportability | Microsoft Docs](../azure-portal/supportability/how-to-create-azure-support-request.md).
## Re-enable local authentication
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md
description: Learn about regions and availability zones and how they work to hel
Previously updated : 06/21/2022 Last updated : 08/23/2022
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
description: Learn what services are supported by availability zones and underst
Previously updated : 08/18/2022 Last updated : 08/23/2022
azure-arc Manage Vm Extensions Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-powershell.md
The following example enables the Key Vault VM extension on an Azure Arc-enabled
$location = "regionName" # Start the deployment
- New-AzConnectedMachineExtension -ResourceGroupName $resourceGRoup -Location $location -MachineName $machineName -Name "KeyVaultForWindows or KeyVaultforLinux" -Publisher "Microsoft.Azure.KeyVault" -ExtensionType "KeyVaultforWindows or KeyVaultforLinux" -Setting (ConvertTo-Json $settings)
+ New-AzConnectedMachineExtension -ResourceGroupName $resourceGroup -Location $location -MachineName $machineName -Name "KeyVaultForWindows or KeyVaultforLinux" -Publisher "Microsoft.Azure.KeyVault" -ExtensionType "KeyVaultforWindows or KeyVaultforLinux" -Setting $settings
``` ## List extensions installed
azure-arc Manage Vmware Vms In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md
To perform guest OS operations on Arc-enabled VMs, you must enable guest managem
|-|-|--|
|Custom Script extension |Microsoft.Compute | CustomScriptExtension |
|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |MicrosoftMonitoringAgent |
+|Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute | HybridWorkerForWindows|
+ ### Linux extensions
To perform guest OS operations on Arc-enabled VMs, you must enable guest managem
|-|-|--|
|Custom Script extension |Microsoft.Azure.Extensions |CustomScript |
|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |OmsAgentForLinux |
+|Azure Automation Hybrid Runbook Worker extension (preview) | Microsoft.Compute | HybridWorkerForLinux|
## Enable guest management
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
The following scenarios are supported in Azure Arc-enabled VMware vSphere (previ
- App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart). -- App teams and administrators can install extensions such as the Log Analytics agent, Custom Script Extension, and Dependency Agent, on the virtual machines and do operations supported by the extensions.
+- App teams and administrators can install extensions such as the Log Analytics agent, Custom Script Extension, Dependency Agent, and Azure Automation Hybrid Runbook Worker extension on the virtual machines and do operations supported by the extensions.
## Supported regions
azure-fluid-relay Container Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-recovery.md
We aren't recovering (rolling back) existing container. `copyContainer` will giv
### New Container is detached

A new container is initially in the `detached` state. We can continue working with the detached container, or attach it immediately. After calling `attach`, we'll get back a unique container ID that represents the newly created instance.
+ ## Post-recovery considerations
+
+When it comes to building use cases around post-recovery scenarios, here are a couple of considerations on what an application might want to do to get its remote collaborators all working on the same container again.
+
+If you are modeling your application data solely using Fluid containers, the communication "link" is effectively broken when the container is corrupted. A similar real-world example might be a video call where the original author has shared the link with participants and that link no longer works. With that perspective in mind, one option is to limit recovery permissions to the original author and let them share the new container link in the same way they shared the original link, after recovering the copy of the original container.
+
+Alternatively, if you are using the Fluid Framework for transient data only, you can always use your own source-of-truth data and supporting services to manage more autonomous recovery workflows. For example, multiple clients may kick off the recovery process until your app has a first recovered copy. Your app can then notify all participating clients to transition to the new container. This can be useful because any currently active client can unblock the participating group to proceed with collaboration. One consideration here is the incurred cost of redundancy.
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
adobe-target-content: ./create-first-function-vs-code-csharp-ieux
In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. This article creates an HTTP triggered function that runs on .NET 6.0. There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
-By default, this article shows you how to create C# functions that runs on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in preview) [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](create-first-function-vs-code-csharp.md?tabs=isolated-process).
+By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in preview) [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](create-first-function-vs-code-csharp.md?tabs=isolated-process).
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Completing this quickstart incurs a small cost of a few USD cents or less in you
Before you get started, make sure you have the following requirements in place:
-+ [.NET 6.0 SDK](https://dotnet.microsoft.com/download/dotnet/6.0)
-
-+ [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 4.x.
-
-+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-
-+ [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-
-+ [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
-
-You also need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Create First Function Vs Code Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-java.md
Completing this quickstart incurs a small cost of a few USD cents or less in you
Before you get started, make sure you have the following requirements in place:
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-
-+ The [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 11 or 8.
-
-+ [Apache Maven](https://maven.apache.org), version 3.0 or above.
-
-+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-
-+ The [Java extension pack](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-pack)
-
-+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md
There's also a [CLI-based version](create-first-function-cli-node.md) of this ar
Before you get started, make sure you have the following requirements in place:
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-
-+ [Node.js 14.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/). Use the `node --version` command to check your version.
-
-+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-
-+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
-
-+ [Azure Functions Core Tools 4.x](functions-run-local.md#install-the-azure-functions-core-tools).
## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Create First Function Vs Code Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-powershell.md
There's also a [CLI-based version](create-first-function-cli-powershell.md) of t
Before you get started, make sure you have the following requirements in place:
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 4.x.
-
-+ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows)
-
-+ [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download/dotnet)
-
-+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-
-+ The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
-
-+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
There's also a [CLI-based version](create-first-function-cli-python.md) of this
Before you begin, make sure that you have the following requirements in place:
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 3.x.
-
-+ Python versions that are [supported by Azure Functions](supported-languages.md#languages-by-runtime-version). For more information, see [How to install Python](https://wiki.python.org/moin/BeginnersGuide/Download).
-
-+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-
-+ The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
-
-+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Durable Functions Event Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-event-publishing.md
The following list explains the lifecycle events schema:
## How to test locally
-To test locally, read [Azure Function Event Grid Trigger Local Debugging](../functions-debug-event-grid-trigger-local.md).
+To test locally, read [Local testing with viewer web app](../event-grid-how-tos.md#local-testing-with-viewer-web-app). You can also use the *ngrok* utility as shown in [this tutorial](../functions-event-grid-blob-trigger.md#start-local-debugging).
## Next steps
azure-functions Event Grid How Tos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-grid-how-tos.md
Azure Functions provides built-in integration with Azure Event Grid by using [triggers and bindings](functions-triggers-bindings.md). This article shows you how to configure and locally evaluate your Event Grid trigger and bindings. For more information about Event Grid trigger and output binding definitions and examples, see one of the following reference articles:
-+ [Azure Event Grid bindings for Azure Functions](functions-bindings-event-grid.md)
++ [Azure Event Grid bindings Overview](functions-bindings-event-grid.md)
+ [Azure Event Grid trigger for Azure Functions](functions-bindings-event-grid-trigger.md)
+ [Azure Event Grid output binding for Azure Functions](functions-bindings-event-grid-output.md)
-## Create a subscription
-
-To start receiving Event Grid HTTP requests, create an Event Grid subscription that specifies the endpoint URL that invokes the function.
-
-### Azure portal
-
-For functions that you develop in the Azure portal with the Event Grid trigger, select **Integration** then choose the **Event Grid Trigger** and select **Create Event Grid subscription**.
+## Event subscriptions
+To start receiving Event Grid HTTP requests, you need a subscription to events raised by Event Grid. Event subscriptions specify the endpoint URL that invokes the function. When you create an event subscription from your function's **Integration** tab in the [Azure portal](https://portal.azure.com), the URL is supplied for you. When you programmatically create an event subscription or when you create the event subscription from Event Grid, you'll need to provide the endpoint. The endpoint URL contains a system key, which you must obtain from Functions administrator REST APIs.
-When you select this link, the portal opens the **Create Event Subscription** page with the current trigger endpoint already defined.
--
-For more information about how to create subscriptions by using the Azure portal, see [Create custom event - Azure portal](../event-grid/custom-event-quickstart-portal.md) in the Event Grid documentation.
+### Webhook endpoint URL
-### Azure CLI
-
-To create a subscription by using [the Azure CLI](/cli/azure/get-started-with-azure-cli), use the [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) command.
-
-The command requires the endpoint URL that invokes the function, and the endpoint varies between version 1.x of the Functions runtime and later versions. The following example shows the version-specific URL pattern:
+The URL endpoint for your Event Grid triggered function depends on the version of the Functions runtime. The following example shows the version-specific URL pattern:
# [v2.x+](#tab/v2)
https://{functionappname}.azurewebsites.net/admin/extensions/EventGridExtensionC
```
-The system key is an authorization key that has to be included in the endpoint URL for an Event Grid trigger. The following section explains how to get the system key.
-
-Here's an example that subscribes to a blob storage account (with a placeholder for the system key):
-
-# [Bash](#tab/bash/v2)
-
-```azurecli
-az eventgrid resource event-subscription create -g myResourceGroup \
- --provider-namespace Microsoft.Storage --resource-type storageAccounts \
- --resource-name myblobstorage12345 --name myFuncSub \
- --included-event-types Microsoft.Storage.BlobCreated \
- --subject-begins-with /blobServices/default/containers/images/blobs/ \
- --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key>
-```
-
-# [Cmd](#tab/cmd/v2)
+### System key
-```azurecli
-az eventgrid resource event-subscription create -g myResourceGroup ^
- --provider-namespace Microsoft.Storage --resource-type storageAccounts ^
- --resource-name myblobstorage12345 --name myFuncSub ^
- --included-event-types Microsoft.Storage.BlobCreated ^
- --subject-begins-with /blobServices/default/containers/images/blobs/ ^
- --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key>
-```
-
-# [Bash](#tab/bash/v1)
-
-```azurecli
-az eventgrid resource event-subscription create -g myResourceGroup \
- --provider-namespace Microsoft.Storage --resource-type storageAccounts \
- --resource-name myblobstorage12345 --name myFuncSub \
- --included-event-types Microsoft.Storage.BlobCreated \
- --subject-begins-with /blobServices/default/containers/images/blobs/ \
- --endpoint https://mystoragetriggeredfunction.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName=imageresizefunc&code=<key>
-```
-
-# [Cmd](#tab/cmd/v1)
-
-```azurecli
-az eventgrid resource event-subscription create -g myResourceGroup ^
- --provider-namespace Microsoft.Storage --resource-type storageAccounts ^
- --resource-name myblobstorage12345 --name myFuncSub ^
- --included-event-types Microsoft.Storage.BlobCreated ^
- --subject-begins-with /blobServices/default/containers/images/blobs/ ^
- --endpoint https://mystoragetriggeredfunction.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName=imageresizefunc&code=<key>
-```
---
-For more information about how to create a subscription, see [the blob storage quickstart](../storage/blobs/storage-blob-event-quickstart.md#subscribe-to-your-storage-account) or the other Event Grid quickstarts.
-
-### Get the system key
+The URL endpoint you construct includes the system key value. The system key is an authorization key that has to be included in the endpoint URL for an Event Grid trigger. The following section explains how to get the system key.
You can get the system key by using the following API (HTTP GET):
http://{functionappname}.azurewebsites.net/admin/host/systemkeys/eventgridextens
-This REST API is an administrator API, so it requires your function app [master key](functions-bindings-http-webhook-trigger.md#authorization-keys). Don't confuse the system key (for invoking an Event Grid trigger function) with the master key (for performing administrative tasks on the function app). When you subscribe to an event grid topic, be sure to use the system key.
+This REST API is an administrator API, so it requires your function app [master key](functions-bindings-http-webhook-trigger.md#authorization-keys). Don't confuse the system key (for invoking an Event Grid trigger function) with the master key (for performing administrative tasks on the function app). When you subscribe to an Event Grid topic, be sure to use the system key.
Here's an example of the response that provides the system key:
You can get the master key for your function app from the **Function app setting
For more information, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys) in the HTTP trigger reference article.
+### <a name="create-a-subscription"></a>Create an event subscription
+
+You can create an event subscription either from the [Azure portal](https://portal.azure.com) or by using the Azure CLI.
+
+# [Portal](#tab/portal)
+
+For functions that you develop in the Azure portal with the Event Grid trigger, select **Integration** then choose the **Event Grid Trigger** and select **Create Event Grid subscription**.
++
+When you select this link, the portal opens the **Create Event Subscription** page with the current trigger endpoint already defined.
++
+For more information about how to create subscriptions by using the Azure portal, see [Create custom event - Azure portal](../event-grid/custom-event-quickstart-portal.md) in the Event Grid documentation.
+
+# [Azure CLI](#tab/azure-cli)
+
+To create a subscription by using [the Azure CLI](/cli/azure/get-started-with-azure-cli), use the [`az eventgrid event-subscription create`](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) command. Examples use the v2.x+ version of the URL and are written to run in [Azure Cloud Shell](../cloud-shell/overview.md). You'll need to modify the examples to run from a Windows command prompt.
+
+This example creates a subscription to a blob storage account, with a placeholder for the [system key](#system-key):
+
+```azurecli-interactive
+az eventgrid resource event-subscription create -g myResourceGroup \
+ --provider-namespace Microsoft.Storage --resource-type storageAccounts \
+ --resource-name myblobstorage12345 --name myFuncSub \
+ --included-event-types Microsoft.Storage.BlobCreated \
+ --subject-begins-with /blobServices/default/containers/images/blobs/ \
+ --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key>
+```
+++
+For more information about how to create a subscription, see [the blob storage quickstart](../storage/blobs/storage-blob-event-quickstart.md#subscribe-to-your-storage-account) or the other Event Grid quickstarts.
+ ## Local testing with viewer web app To test an Event Grid trigger locally, you have to get Event Grid HTTP requests delivered from their origin in the cloud to your local machine. One way to do that is by capturing requests online and manually resending them on your local machine:
To test an Event Grid trigger locally, you have to get Event Grid HTTP requests
1. [Generate a request](#generate-a-request) and copy the request body from the viewer app. 1. [Manually post the request](#manually-post-the-request) to the localhost URL of your Event Grid trigger function.
-When you're done testing, you can use the same subscription for production by updating the endpoint. Use the [az eventgrid event-subscription update](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-update) Azure CLI command.
+When you're done testing, you can use the same subscription for production by updating the endpoint. Use the [`az eventgrid event-subscription update`](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-update) Azure CLI command.
+
+You can also use the *ngrok* utility to forward remote requests to your locally running functions. For more information, see [this tutorial](./functions-event-grid-blob-trigger.md#start-local-debugging).
### Create a viewer web app
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
Title: Azure Event Grid trigger for Azure Functions description: Learn to run code when Event Grid events in Azure Functions are dispatched.- Last updated 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Event Grid trigger for Azure Functions
-Use the function trigger to respond to an event sent to an event grid topic. To learn how to work with the Event Grid trigger.
--
-For information on setup and configuration details, see the [overview](./functions-bindings-event-grid.md).
+Use the function trigger to respond to an event sent by an [Event Grid source](../event-grid/overview.md). You must have an event subscription to the source to receive events. To learn how to create an event subscription, see [Create a subscription](event-grid-how-tos.md#create-a-subscription). For information on binding setup and configuration, see the [overview](./functions-bindings-event-grid.md).
> [!NOTE] > Event Grid triggers aren't natively supported in an internal load balancer App Service Environment (ASE). The trigger uses an HTTP request that can't reach the function app without a gateway into the virtual network.
Upon arrival, the event's JSON payload is de-serialized into the ```EventSchema`
} ```
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `EventGridTrigger` annotation on parameters whose value would come from EventGrid. Parameters with these annotations cause the function to run when an event arrives. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `EventGridTrigger` annotation on parameters whose value would come from Event Grid. Parameters with these annotations cause the function to run when an event arrives. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
::: zone-end ::: zone pivot="programming-language-javascript" The following example shows a trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding.
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
The following table lists the currently available versions of the default *Micro
<sup>1</sup> Version 3.x of the extension bundle currently doesn't include the [Table Storage bindings](./functions-bindings-storage-table.md). If your app requires Table Storage, you'll need to continue using the 2.x version for now.

> [!NOTE]
-> While you can a specify custom version range in host.json, we recommend you use a version value from this table.
+> Even though host.json supports custom ranges for `version`, you should use a version value from this table.
## Explicitly install extensions
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
The Blob storage trigger starts a function when a new or updated blob is detected. The blob contents are provided as [input to the function](./functions-bindings-storage-blob-input.md).
-The Azure Blob storage trigger requires a general-purpose storage account. Storage V2 accounts with [hierarchical namespaces](../storage/blobs/data-lake-storage-namespace.md) are also supported. To use a blob-only account, or if your application has specialized needs, review the alternatives to using this trigger.
+There are several ways to execute your function code based on changes to blobs in a storage container. Use the following table to determine which function trigger best fits your needs:
-For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
+| | Blob Storage (standard) | Blob Storage (event-based) | Queue Storage | Event Grid |
+| -- | -- | -- | -- | - |
+| Latency | High (up to 10 min) | Low | Medium | Low |
+| [Storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts) limitations | Blob-only accounts not supported¹ | general purpose v1 not supported | none | general purpose v1 not supported |
+| Extension version |Any | Storage v5.x+ |Any |Any |
+| Processes existing blobs | Yes | No | No | No |
+| Filters | [Blob name pattern](#blob-name-patterns) | [Event filters](../storage/blobs/storage-blob-event-overview.md#filtering-events) | n/a | [Event filters](../storage/blobs/storage-blob-event-overview.md#filtering-events) |
+| Requires [event subscription](../event-grid/concepts.md#event-subscriptions) | No | Yes | No | Yes |
+| Supports high-scale² | No | Yes | Yes | Yes |
+| Description | Default trigger behavior, which relies on polling the container for updates. For more information, see the [examples in this article](#example). | Consumes blob storage events from an event subscription. Requires a `Source` parameter value of `EventGrid`. For more information, see [Tutorial: Trigger Azure Functions on blob containers using an event subscription](./functions-event-grid-blob-trigger.md). | Blob name string is manually added to a storage queue when a blob is added to the container. This value is passed directly by a Queue Storage trigger to a Blob Storage input binding on the same function (see the sketch following this table). | Provides the flexibility of triggering on events besides those coming from a storage container. Use when you need to also have non-storage events trigger your function. For more information, see [How to work with Event Grid triggers and bindings in Azure Functions](event-grid-how-tos.md). |
+
+¹ Blob Storage input and output bindings support blob-only accounts.
+² High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.
+
+For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
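The Queue Storage approach described in the table can be sketched roughly as follows: a queue message carries the blob name, and the `{queueTrigger}` binding expression resolves to that message text so the matching blob is bound as an input on the same function. The queue name, container name, and connection setting shown here are placeholder assumptions, not fixed values.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessBlobFromQueue
{
    [FunctionName("ProcessBlobFromQueue")]
    public static void Run(
        // Placeholder queue name; the queue message text is the name of the blob to process.
        [QueueTrigger("blob-workitems", Connection = "AzureWebJobsStorage")] string blobName,
        // {queueTrigger} resolves to the queue message, binding the named blob as an input.
        [Blob("samples-workitems/{queueTrigger}", FileAccess.Read, Connection = "AzureWebJobsStorage")] Stream blobContents,
        ILogger log)
    {
        log.LogInformation($"Processing blob {blobName} ({blobContents.Length} bytes) named in a queue message");
    }
}
```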
## Example
To look for curly braces in file names, escape the braces by using two braces. T
If the blob is named *{20140101}-soundfile.mp3*, the `name` variable value in the function code is *soundfile.mp3*.
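For instance, a blob trigger that uses this escaped pattern could look like the following sketch; the container name and connection setting are assumptions for illustration only:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class LiteralBracePattern
{
    [FunctionName("LiteralBracePattern")]
    public static void Run(
        // Double braces escape a literal brace, so this matches blobs such as
        // "{20140101}-soundfile.mp3" and binds "soundfile.mp3" to {name}.
        // "samples-workitems" is an assumed container name.
        [BlobTrigger("samples-workitems/{{20140101}}-{name}", Connection = "AzureWebJobsStorage")] Stream blob,
        string name,
        ILogger log)
    {
        log.LogInformation($"Matched blob suffix: {name} ({blob.Length} bytes)");
    }
}
```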
+## Polling and latency
-
-## Polling
-
-Polling works as a hybrid between inspecting logs and running periodic container scans. Blobs are scanned in groups of 10,000 at a time with a continuation token used between intervals.
+Polling works as a hybrid between inspecting logs and running periodic container scans. Blobs are scanned in groups of 10,000 at a time with a continuation token used between intervals. If your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs if a function app has gone idle.
> [!WARNING]
-> In addition, [storage logs are created on a "best effort"](/rest/api/storageservices/About-Storage-Analytics-Logging) basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed.
->
-> If you require faster or more reliable blob processing, consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob. Another option is to use Event Grid; see the tutorial [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md).
->
-
-## Alternatives
-
-### Event Grid trigger
-
-> [!NOTE]
-> When using Storage Extensions 5.x and higher, the Blob trigger has built-in support for an Event Grid based Blob trigger. For more information, see the [Storage extension 5.x and higher](#storage-extension-5x-and-higher) section below.
-
-The [Event Grid trigger](functions-bindings-event-grid.md) also has built-in support for [blob events](../storage/blobs/storage-blob-event-overview.md). Use Event Grid instead of the Blob storage trigger for the following scenarios:
--- **Blob-only storage accounts**: [Blob-only storage accounts](../storage/common/storage-account-overview.md#types-of-storage-accounts) are supported for blob input and output bindings but not for blob triggers.--- **High-scale**: High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.--- **Existing Blobs**: The blob trigger will process all existing blobs in the container when you set up the trigger. If you have a container with many existing blobs and only want to trigger for new blobs, use the Event Grid trigger.--- **Minimizing latency**: If your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs if a function app has gone idle. To avoid this latency, you can switch to an App Service plan with Always On enabled. You can also use an [Event Grid trigger](functions-bindings-event-grid.md) with your Blob storage account. For an example, see the [Event Grid tutorial](../event-grid/resize-images-on-storage-blob-upload-event.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json).-
-See the [Image resize with Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md) tutorial of an Event Grid example.
-
-#### Storage Extension 5.x and higher
-
-When using the storage extension, there is built-in support for Event Grid in the Blob trigger, which requires setting the `source` parameter to Event Grid in your existing Blob trigger.
-
-For more information on how to use the Blob Trigger based on Event Grid, refer to the [Event Grid Blob Trigger guide](./functions-event-grid-blob-trigger.md).
+> [Storage logs are created on a "best effort"](/rest/api/storageservices/About-Storage-Analytics-Logging) basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed.
-### Queue storage trigger
+If you require faster or more reliable blob processing, you should instead implement one of the following strategies:
-Another approach to processing blobs is to write queue messages that correspond to blobs being created or modified and then use a [Queue storage trigger](./functions-bindings-storage-queue.md) to begin processing.
++ Change your binding definition to consume [blob events](../storage/blobs/storage-blob-event-overview.md) instead of polling the container. You can do this in one of two ways:
+ + Add the `source` parameter with a value of `EventGrid` to your binding definition and create an event subscription on the same container. For more information, see [Tutorial: Trigger Azure Functions on blob containers using an event subscription](./functions-event-grid-blob-trigger.md).
+ + Replace the Blob Storage trigger with an [Event Grid trigger](functions-bindings-event-grid-trigger.md) using an event subscription on the same container. For more information, see the [Image resize with Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md) tutorial.
++ Consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob.++ Switch your hosting to use an App Service plan with Always On enabled, which may result in increased costs. ## Blob receipts
azure-functions Functions Debug Event Grid Trigger Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-debug-event-grid-trigger-local.md
- Title: Azure Functions Event Grid local debugging
-description: Learn to locally debug Azure Functions triggered by an Event Grid event
-- Previously updated : 10/18/2018--
-# Azure Function Event Grid Trigger Local Debugging
-
-This article demonstrates how to debug a local function that handles an Azure Event Grid event raised by a storage account.
-
-## Prerequisites
--- Create or use an existing function app-- Create or use an existing storage account. Event Grid notification subscription can be set on Azure Storage accounts for `BlobStorage`, `StorageV2`, or [Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md).-- Download [ngrok](https://ngrok.com/) to allow Azure to call your local function-
-## Create a new function
-
-Open your function app in Visual Studio and, right-click on the project name in the Solution Explorer and click **Add > New Azure Function**.
-
-In the *New Azure Function* window, select **Event Grid trigger** and click **OK**.
-
-![Create new function](./media/functions-debug-event-grid-trigger-local/functions-debug-event-grid-trigger-local-add-function.png)
-
-Once the function is created, open the code file and copy the URL commented out at the top of the file. This location is used when configuring the Event Grid trigger.
-
-![Copy location](./media/functions-debug-event-grid-trigger-local/functions-debug-event-grid-trigger-local-copy-location.png)
-
-Then, set a breakpoint on the line that begins with `log.LogInformation`.
-
-![Set breakpoint](./media/functions-debug-event-grid-trigger-local/functions-debug-event-grid-trigger-local-set-breakpoint.png)
--
-Next, **press F5** to start a debugging session.
--
-## Debug the function
-
-Once the Event Grid recognizes a new file is uploaded to the storage container, the break point is hit in your local function.
-
-![Start ngrok](./media/functions-debug-event-grid-trigger-local/functions-debug-event-grid-trigger-local-breakpoint.png)
-
-## Clean up resources
-
-To clean up the resources created in this article, delete the **test** container in your storage account.
-
-## Next steps
--- [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md)-- [Event Grid trigger for Azure Functions](./functions-bindings-event-grid.md)
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
The following steps publish your project to a new function app created with adva
| | -- | | Enter a globally unique name for the new function app. | Type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are `a-z`, `0-9`, and `-`. | | Select a runtime stack. | Choose the language version on which you've been running locally. |
- | Select an OS. | Choose either Linux or Windows. Python apps must run on Linux |
+ | Select an OS. | Choose either Linux or Windows. Python apps must run on Linux. |
| Select a resource group for new resources. | Choose **Create new resource group** and type a resource group name, like `myResourceGroup`, and then select enter. You can also select an existing resource group. | | Select a location for new resources. | Select a location in a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. | | Select a hosting plan. | Choose **Consumption** for serverless [Consumption plan hosting](consumption-plan.md), where you're only charged when your functions run. |
azure-functions Functions Event Grid Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-grid-blob-trigger.md
Title: Azure Functions Event Grid Blob Trigger
-description: Learn to setup and debug with the Event Grid Blob Trigger
+ Title: 'Tutorial: Trigger Azure Functions on blob containers using an event subscription'
+description: In this tutorial, you learn how to use an Event Grid event subscription to create a low-latency, event-driven trigger on an Azure Blob Storage container.
--+ Last updated 3/1/2021
+zone_pivot_groups: programming-languages-set-functions-lang-workers
+#Customer intent: As an Azure Functions developer, I want learn how to create an Event Grid-based trigger on a Blob Storage container so that I can get a more rapid response to changes in the container.
-# Azure Function Event Grid Blob Trigger
+# Tutorial: Trigger Azure Functions on blob containers using an event subscription
-This article demonstrates how to debug and deploy a local Event Grid Blob triggered function that handles events raised by a storage account.
+Earlier versions of the Blob Storage trigger for Azure Functions polled the container for updates, which often resulted in delayed execution. By using the latest version of the extension, you can reduce latency by instead triggering on an event subscription to the same blob container. The event subscription uses Event Grid to forward changes in the blob container as events for your function to consume. This article demonstrates how to use Visual Studio Code to locally develop a function that runs based on events raised when a blob is added to a container. You'll verify the function locally before deploying your project to Azure.
-> [!NOTE]
-> The Event Grid Blob trigger is in preview.
+> [!div class="checklist"]
+> * Create a general-purpose v2 storage account in Azure Storage.
+> * Create a container in blob storage.
+> * Create an event-driven Blob Storage triggered function.
+> * Create an event subscription to a blob container.
+> * Debug locally using ngrok by uploading files.
+> * Deploy to Azure and create a filtered event subscription.
## Prerequisites -- Create or use an existing function app-- Create or use an existing storage account-- Have version 5.0+ of the [Microsoft.Azure.WebJobs.Extensions.Storage extension](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0-beta.2) installed-- Download [ngrok](https://ngrok.com/) to allow Azure to call your local function++ The [ngrok](https://ngrok.com/) utility, which provides a way for Azure to call into your locally running function.+++ The [Azure Storage extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage) for Visual Studio Code.
-## Create a new function
+> [!NOTE]
+> The Storage Extension for Visual Studio Code is currently in preview.
-1. Open your function app in Visual Studio Code.
+## Create a storage account
-1. **Press F1** to create a new blob trigger function. Make sure to use the connection string for your storage account.
+Using an event subscription to Azure Storage requires you to use a general-purpose v2 storage account. With the Azure Storage extension installed, you can create this kind of storage account by default from your Visual Studio Code project.
-1. The default url for your event grid blob trigger is:
+1. In Visual Studio Code, open the command palette (press F1), type `Azure Storage: Create Storage Account...`, and then provide the following information at the prompts:
- # [C#](#tab/csharp)
+ |Prompt|Selection|
+ |--|--|
+ |**Enter the name of the new storage account**| Type a globally unique name. Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. We'll use the same name for the resource group and the function app name, to make it easier. |
+ |**Select a location for new resources**| For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.|
- ```http
- http://localhost:7071/runtime/webhooks/blobs?functionName={functionname}
- ```
+ The extension creates a new general-purpose v2 storage account with the name you provided. The same name is also used for the resource group in which the storage account is created.
- # [Python](#tab/python)
+1. After the storage account is created, open the command palette (press F1) and type `Azure Storage: Create Blob Container...`, and then provide the following information at the prompts:
- ```http
- http://localhost:7071/runtime/webhooks/blobs?functionName=Host.Functions.{functionname}
- ```
+ |Prompt|Selection|
+ |--|--|
+ |**Select a resource**| Choose the name of the storage account you created. |
+ |**Enter a name for the new blob container**| Type `samples-workitems`, which is the container name referenced in your code project.|
- # [Java](#tab/java)
+Now that you have the blob container, you can create both the function that triggers on this container and the event subscription that delivers events to your function.
- ```http
- http://localhost:7071/runtime/webhooks/blobs?functionName=Host.Functions.{functionname}
- ```
+## Create a Blob triggered function
-
+When you use Visual Studio Code to create a Blob Storage triggered function, you also create a new project. You'll then need to modify the function to consume an event subscription as the source instead of the regular polled container.
- Note your function app's name and that the trigger type is a blob trigger, which is indicated by `blobs` in the url. This will be needed when setting up endpoints later in the how to guide.
+1. Open your function app in Visual Studio Code.
-1. Once the function is created, add the Event Grid source parameter.
+1. Open the command palette (press F1) and type `Azure Functions: Create Function...` and select **Create new project**.
+
+1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
+
+1. Provide the following information at the prompts:
+
+ ::: zone pivot="programming-language-csharp"
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language**|Choose `C#`.|
+ |**Select a .NET runtime**| Choose `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated process. |
+ |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
+ |**Provide a function name**|Type `BlobTriggerEventGrid`.|
+ |**Provide a namespace** | Type `My.Functions`. |
+ |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.|
+ |**Select a storage account**|Choose the storage account you created from the list. |
+ |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. |
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ ::: zone-end
+ ::: zone pivot="programming-language-python"
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language**|Choose `Python`.|
+ |**Select a Python interpreter to create a virtual environment**| Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.|
+ |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
+ |**Provide a function name**|Type `BlobTriggerEventGrid`.|
+ |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.|
+ |**Select a storage account**|Choose the storage account you created from the list. |
+ |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. |
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ ::: zone-end
+ ::: zone pivot="programming-language-java"
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language**|Choose `Java`.|
+ |**Select a version of Java**| Choose `Java 11` or `Java 8`, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally. |
+ | **Provide a group ID** | Choose `com.function`. |
+ | **Provide an artifact ID** | Choose `BlobTriggerEventGrid`. |
+ | **Provide a version** | Choose `1.0-SNAPSHOT`. |
+ | **Provide a package name** | Choose `com.function`. |
+ | **Provide an app name** | Accept the generated name starting with `BlobTriggerEventGrid`. |
+ | **Select the build tool for Java project** | Choose `Maven`. |
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ ::: zone-end
+ ::: zone pivot="programming-language-javascript"
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language for your function project**|Choose `JavaScript`.|
+ |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
+ |**Provide a function name**|Type `BlobTriggerEventGrid`.|
+ |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.|
+ |**Select a storage account**|Choose the storage account you created from the list. |
+ |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. |
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ ::: zone-end
+ ::: zone pivot="programming-language-powershell"
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language for your function project**|Choose `PowerShell`.|
+ |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
+ |**Provide a function name**|Type `BlobTriggerEventGrid`.|
+ |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.|
+ |**Select a storage account**|Choose the storage account you created from the list. |
+ |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. |
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ ::: zone-end
+
+1. When prompted, choose **Select storage account** and then **Add to workspace**.
+
+To simplify things, this tutorial reuses the same storage account with your function app. In production, you might want to use a separate storage account for your function app. For more information, see [Storage considerations for Azure Functions](storage-considerations.md).
+
+## Upgrade the Blob Storage extension
+
+To be able to use the Event Grid-based Blob Storage trigger, your function app needs to use version 5.x of the Blob Storage extension.
+
+To upgrade your project to use the latest extension, run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window.
+
+<!-- # [In-process](#tab/in-process) -->
+```bash
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.1
+```
+<!-- # [Isolated process](#tab/isolated-process)
+```bash
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage --version 5.0.0
+```
+
+-->
- # [C#](#tab/csharp)
- Add **Source = BlobTriggerSource.EventGrid** to the function parameters.
-
- ```csharp
- [FunctionName("BlobTriggerCSharp")]
- public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "connection")]Stream myBlob, string name, ILogger log)
- {
- log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
- }
+1. Open the host.json project file and inspect the `extensionBundle` element.
+
+1. If `extensionBundle.version` isn't at least `3.3.0`, replace `extensionBundle` with the following version:
+
+ ```json
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.0, 4.0.0)"
+ }
```
- # [Python](#tab/python)
- Add **"source": "EventGrid"** to the function.json binding data.
+
+## Update the function to use events
+
+Open the BlobTriggerEventGrid.cs file and add `Source = BlobTriggerSource.EventGrid` to the parameters for the blob trigger attribute, as shown in the following example:
- ```json
+```csharp
+[FunctionName("BlobTriggerCSharp")]
+public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "<NAMED_STORAGE_CONNECTION>")]Stream myBlob, string name, ILogger log)
+{
+ log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
+}
+```
+After the function is created add `"source": "EventGrid"` to the `myBlob` binding in the function.json configuration file, as shown in the following example:
+
+```json
+{
+ "scriptFile": "__init__.py",
+ "bindings": [
{
- "scriptFile": "__init__.py",
- "bindings": [
- {
- "name": "myblob",
- "type": "blobTrigger",
- "direction": "in",
- "path": "samples-workitems/{name}",
- "source": "EventGrid",
- "connection": "MyStorageAccountConnectionString"
+ "name": "myblob",
+ "type": "blobTrigger",
+ "direction": "in",
+ "path": "samples-workitems/{name}",
+ "source": "EventGrid",
+ "connection": "<NAMED_STORAGE_CONNECTION>"
+ }
+ ]
+}
+```
+1. Replace the contents of the generated `Function.java` file with the following code and rename the file to `BlobTriggerEventGrid.java`:
+
+ ```java
+ package com.function;
+
+ import com.microsoft.azure.functions.annotation.*;
+ import com.microsoft.azure.functions.*;
+
+ /**
+ * Azure Functions with Azure Blob trigger.
+ */
+ public class BlobTriggerEventGrid {
+ /**
+ * This function will be invoked when a new or updated blob is detected at the specified path. The blob contents are provided as input to this function.
+ */
+ @FunctionName("BlobTriggerEventGrid")
+ @StorageAccount("glengatesteventgridblob_STORAGE")
+ public void run(
+ @BlobTrigger(name = "content", path = "samples-workitems/{name}", dataType = "binary", source = "EventGrid" ) byte[] content,
+ @BindingName("name") String name,
+ final ExecutionContext context
+ ) {
+ context.getLogger().info("Java Blob trigger function processed a blob. Name: " + name + "\n Size: " + content.length + " Bytes");
}
- ]
} ```-
- # [Java](#tab/java)
- **Press F5** to build the function. Once the build is complete, add **"source": "EventGrid"** to the **function.json** binding data.
+2. Remove the associated unit test file, which is no longer relevant to the new trigger type.
+After the function is created, add `"source": "EventGrid"` to the `myBlob` binding in the function.json configuration file, as shown in the following example:
- ```json
+```json
+{
+ "bindings": [
{
- "scriptFile" : "../java-1.0-SNAPSHOT.jar",
- "entryPoint" : "com.function.{MyFunctionName}.run",
- "bindings" : [ {
- "type" : "blobTrigger",
- "direction" : "in",
- "name" : "content",
- "path" : "samples-workitems/{name}",
- "dataType" : "binary",
- "source": "EventGrid",
- "connection" : "MyStorageAccountConnectionString"
- } ]
+ "name": "myblob",
+ "type": "blobTrigger",
+ "direction": "in",
+ "path": "samples-workitems/{name}",
+ "source": "EventGrid",
+ "connection": "<NAMED_STORAGE_CONNECTION>"
}
+ ]
+}
+ ```
+
+## Start local debugging
+
+Event Grid validates the endpoint URL when you create an event subscription in the Azure portal. This validation means that before you can create an event subscription for local debugging, your function must be running locally with remote access enabled by the ngrok utility. If your local function code isn't running and accessible to Azure, you won't be able to create the event subscription.
+
+### Determine the blob trigger endpoint
+
+When your function runs locally, the default endpoint used for an event-driven blob storage trigger looks like the following URL:
+
+```http
+http://localhost:7071/runtime/webhooks/blobs?functionName=BlobTriggerEventGrid
+```
+```http
+http://localhost:7071/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid
+```
+
+Save this path, which you'll use later to create endpoint URLs for event subscriptions. If you used a different name for your Blob Storage triggered function, you need to change the `functionName` value in the query string.
+
+> [!NOTE]
+> Because the endpoint is handling events for a Blob Storage trigger, the endpoint path includes `blobs`. The endpoint URL for an Event Grid trigger would instead have `eventgrid` in the path.
+
+### Run ngrok
+
+To break into a function being debugged on your machine, you must provide a way for Azure Event Grid to communicate with functions running on your local computer.
+
+The [ngrok](https://ngrok.com/) utility forwards requests made to a randomly generated proxy server address through to a specific address and port on your local computer. This forwarding allows Event Grid to call the webhook endpoint of the function running on your machine.
+
+1. Start *ngrok* using the following command:
+
+ ```bash
+ ngrok.exe http http://localhost:7071
```
-
+ As the utility starts, the command window should look similar to the following screenshot:
-1. Set a breakpoint in your function on the line that handles logging.
+ ![Screenshot that shows the Command Prompt after starting the "ngrok" utility.](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-ngrok.png)
-1. Start a debugging session.
+1. Copy the **HTTPS** URL generated when *ngrok* is run. This value is used to determine the webhook endpoint on your computer exposed using ngrok.
+
+> [!IMPORTANT]
+> At this point, don't stop `ngrok`. Every time you start `ngrok`, the HTTPS URL is regenerated with a different value. Because the endpoint of an event subscription can't be modified, you have to create a new event subscription every time you run `ngrok`.
+>
+> Unless you create an ngrok account, the maximum ngrok session time is limited to two hours.
+
+### Build the endpoint URL
+
+The endpoint used in the event subscription is made up of three parts: a prefixed server name, a path, and a query string. The following table describes these parts:
- # [C#](#tab/csharp)
- **Press F5** to start a debugging session.
+| URL part | Description |
+| | |
+| Prefix and server name | When your function runs locally, the server name with an `https://` prefix comes from the **Forwarding** URL generated by *ngrok*. In the localhost URL, the *ngrok* URL replaces `http://localhost:7071`. When running in Azure, you'll instead use the published function app server, which is usually in the form `https://<FUNCTION_APP_NAME>.azurewebsites.net`. |
+| Path | The path portion of the endpoint URL comes from the localhost URL copied earlier, and looks like `/runtime/webhooks/blobs` for a Blob Storage trigger. The path for an Event Grid trigger would be `/runtime/webhooks/EventGrid`. |
+| Query string | The `functionName=BlobTriggerEventGrid` parameter in the query string sets the name of the function that handles the event. For functions other than C#, the function name is qualified by `Host.Functions.`. If you used a different name for your function, you'll need to change this value. An access key isn't required when running locally. When running in Azure, you'll also need to include a `code=` parameter in the URL, which contains a key that you can get from the portal. |
+
+The following screenshot shows an example of how the final endpoint URL should look when using a Blob Storage trigger named `BlobTriggerEventGrid`:
+
 ![Endpoint selection](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-event-subscription-endpoint-selection.png)
 ![Endpoint selection with a qualified function name](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-event-subscription-endpoint-selection-qualified.png)
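As a concrete illustration, with a hypothetical ngrok forwarding address of `https://abc123.ngrok.io` (yours will differ) and the C# function name used in this tutorial, the complete endpoint URL would look like this:

```http
https://abc123.ngrok.io/runtime/webhooks/blobs?functionName=BlobTriggerEventGrid
```

For non-C# languages, the function name in the query string is qualified as `Host.Functions.BlobTriggerEventGrid`.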
+
+### Start debugging
+
+With ngrok already running, start your local project as follows:
+
+1. Set a breakpoint in your function on the line that handles logging.
- # [Python](#tab/python)
- **Press F5** to start a debugging session.
+1. Start a debugging session.
- # [Java](#tab/java)
- Open a new terminal and run the below mvn command to start the debugging session.
+ ::: zone pivot="programming-language-java"
+ Open a new terminal and run the following `mvn` command to start the debugging session.
```bash mvn azure-functions:run ```
+ ::: zone-end
+ ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-csharp"
+ Press **F5** to start a debugging session.
+ ::: zone-end
-
+With your code running and ngrok forwarding requests, it's time to create an event subscription to the blob container.
+## Create the event subscription
-## Debug the function
-Once the Blob Trigger recognizes a new file is uploaded to the storage container, the break point is hit in your local function.
+An event subscription, powered by Azure Event Grid, raises events based on changes in the linked blob container. This event is then sent to the webhook endpoint on your function's trigger. After an event subscription is created, the endpoint URL can't be changed. This means that after you're done with local debugging (or if you restart ngrok), you'll need to delete and recreate the event subscription.
-## Deployment
+1. In Visual Studio Code, choose the Azure icon in the Activity bar. In **Resources**, expand your subscription, expand **Storage accounts**, right-click the storage account you created earlier, and select **Open in portal**.
-As you deploy the function app to Azure, update the webhook endpoint from your local endpoint to your deployed app endpoint. To update an endpoint, follow the steps in [Add a storage event](#add-a-storage-event) and use the below for the webhook URL in step 5. The `<BLOB-EXTENSION-KEY>` can be found in the **App Keys** section from the left menu of your **Function App**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and make a note of the **Resource group** for your storage account. You'll create your other resources in the same group to make it easier to clean up resources when you're done.
-# [C#](#tab/csharp)
+1. Select the **Events** option from the left menu.
-```http
-https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=<FUNCTION-NAME>&code=<BLOB-EXTENSION-KEY>
-```
+ ![Add storage account event](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-add-event.png)
-# [Python](#tab/python)
+1. In the **Events** window, select the **+ Event Subscription** button, and provide values from the following table into the **Basic** tab:
-```http
-https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.<FUNCTION-NAME>&code=<BLOB-EXTENSION-KEY>
-```
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Name** | *myBlobLocalNgrokEventSub* | Name that identifies the event subscription. You can use the name to quickly find the event subscription. |
+ | **Event Schema** | **Event Grid Schema** | Use the default schema for events. |
+ | **System Topic Name** | *samples-workitems-blobs* | Name for the topic, which represents the container. The topic is created with the first subscription, and you'll use it for future event subscriptions. |
+ | **Filter to Event Types** | *Blob Created*<br/>*Blob Deleted* | Only these blob event types are sent to your endpoint. |
+ | **Endpoint Type** | **Web Hook** | The blob storage trigger uses a web hook endpoint. You would use Azure Functions for an Event Grid trigger. |
+ | **Endpoint** | Your ngrok-based URL endpoint | Use the ngrok-based URL endpoint that you determined earlier. |
+
+1. Select **Confirm selection** to validate the endpoint URL.
+
+1. Select **Create** to create the event subscription.
+
+## Upload a file to the container
+
+With the event subscription in place and your code project and ngrok still running, you can now upload a file to your storage container to trigger your function. You can upload a file from your computer to your blob storage container using Visual Studio Code.
+
+1. In Visual Studio Code, open the command palette (press F1) and type `Azure Storage: Upload Files...`.
+
+1. In the **Open** dialog box, choose a file, preferably a binary image file that's not too large, and then select **Upload**.
+
+1. Provide the following information at the prompts:
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Select a resource** | Storage account name | Choose the name of the storage account you created in a previous step. |
+ | **Select a resource type** | **Blob Containers** | You're uploading to a blob container. |
+ | **Select Blob Container** | **samples-workitems** | This value is the name of the container you created in a previous step. |
+ | **Enter the destination directory of this upload** | default | Just accept the default value of `/`, which is the container root. |
+
+This command uploads a file from your computer to the storage container in Azure. At this point, your running ngrok instance should report that a request was forwarded. You'll also see in the func.exe output for your debugging session that your function has been started. Hopefully, at this point, your debug session is waiting for you where you set the breakpoint.
+
+## Publish the project to Azure
+
+Now that you've successfully validated your function code locally, it's time to publish the project to a new function app in Azure.
+
+### Create the function app
+
+The following steps create the resources you need in Azure and deploy your project files.
+
+1. In the command palette, enter **Azure Functions: Create function app in Azure...(Advanced)**.
+
+1. Following the prompts, provide this information:
+
+ | Prompt | Selection |
+ | | -- |
+ | **Enter a globally unique name for the new function app.** | Type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are `a-z`, `0-9`, and `-`. Write down this name; you'll need it later when building the new endpoint URL. |
+ | **Select a runtime stack.** | Choose the language version on which you've been running locally. |
+ | **Select an OS.** | Choose either Linux or Windows. Python apps must run on Linux. |
+ | **Select a resource group for new resources.** | Choose the name of the resource group you created with your storage account, which you previously noted in the portal. |
+ | **Select a location for new resources.** | Select a location in a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
+ | **Select a hosting plan.** | Choose **Consumption** for serverless [Consumption plan hosting](consumption-plan.md), where you're only charged when your functions run. |
+ | **Select a storage account.** | Choose the name of the existing storage account that you've been using. |
+ | **Select an Application Insights resource for your app.** | Choose **Create new Application Insights resource** and at the prompt, type a name for the instance used to store runtime data from your functions.|
+
+ A notification appears after your function app is created and the deployment package is applied. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created.
+
+### Deploy the function code
++
+### Publish application settings
+
+Because the local settings from local.settings.json aren't automatically published, you must upload them now so that your functions run correctly in Azure.
+
+In the command palette, enter **Azure Functions: Upload Local Settings...**, and at the **Select a resource** prompt, choose the name of your function app.
-# [Java](#tab/java)
+## Recreate the event subscription
+Now that the function app is running in Azure, you need to create a new event subscription. This new event subscription uses the endpoint of your function in Azure. You'll also add a filter to the event subscription so that the function is only triggered when JPEG (.jpg) files are added to the container. In Azure, the endpoint URL also contains an access key, which helps to block actors other than Event Grid from accessing the endpoint.
+
+### Get the blob extension key
+
+1. In Visual Studio Code, choose the Azure icon in the Activity bar. In **Resources**, expand your subscription, expand **Function App**, right-click the function app you created, and select **Open in portal**.
+
+1. Under **Functions** in the left menu, select **App keys**.
+
+1. Under **System keys** select the key named **blobs_extension**, and copy the key **Value**.
+
+You'll include this value in the query string of the new endpoint URL.
+
+### Build the endpoint URL
+
+Create a new endpoint URL for the Blob Storage trigger based on the following example:
+ ```http
-https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.<FUNCTION-NAME>&code=<BLOB-EXTENSION-KEY>
+https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=BlobTriggerEventGrid&code=<BLOB_EXTENSION_KEY>
```
+```http
+https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid&code=<BLOB_EXTENSION_KEY>
+```
-
+In this example, replace `<FUNCTION_APP_NAME>` with the name of your function app and replace `<BLOB_EXTENSION_KEY>` with the value you got from the portal. If you used a different name for your function, you'll also need to change the `functionName` query string as needed.
+
+### Create a filtered event subscription
+
+Because the endpoint URL of an event subscription can't be changed, you must create a new event subscription. You should also delete the old event subscription at this time, since it can't be reused.
+
+This time, you'll include the filter on the event subscription so that only JPEG files (*.jpg) trigger the function.
+
+1. In Visual Studio Code, choose the Azure icon in the Activity bar. In **Resources**, expand your subscription, expand **Storage accounts**, right-click the storage account you created earlier, and select **Open in portal**.
+
+1. In the [Azure portal](https://portal.azure.com), select the **Events** option from the left menu.
+
+1. In the **Events** window, select your old ngrok-based event subscription, select **Delete** > **Save**. This action removes the old event subscription.
+
+1. Select the **+ Event Subscription** button, and provide values from the following table into the **Basic** tab:
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Name** | *myBlobAzureEventSub* | Name that identifies the event subscription. You can use the name to quickly find the event subscription. |
+ | **Event Schema** | **Event Grid Schema** | Use the default schema for events. |
+ | **Filter to Event Types** | *Blob Created*<br/>*Blob Deleted* | Only these blob event types are sent to your endpoint. |
+ | **Endpoint Type** | **Web Hook** | The blob storage trigger uses a web hook endpoint. You would use Azure Functions for an Event Grid trigger. |
+ | **Endpoint** | Your new Azure-based URL endpoint | Use the URL endpoint that you built, which includes the key value. |
+
+1. Select **Confirm selection** to validate the endpoint URL.
+
+1. Select the **Filters** tab. Under **Subject filters**, check **Enable subject filtering** and type `.jpg` in **Subject ends with**. This setting filters events to only JPEG files.
+
+ ![Add filter](./media/functions-event-grid-blob-trigger/container_filter.png)
+
+1. Select **Create** to create the event subscription.
+
+## Verify the function in Azure
+
+With the entire topology now running in Azure, it's time to verify that everything is working correctly. Since you're already in the portal, it's easiest to just upload a file from there.
+
+1. In your storage account page in the portal, select **Containers** and select your **samples-workitems** container.
+
+1. Select the **Upload** button to open the upload page on the right, browse your local file system to find a `.jpg` file to upload, and then select the **Upload** button to upload the blob. Now, you can verify that your function ran based on the container upload event.
+
+1. In your storage account, return to the **Events** page, select **Event Subscriptions**, and you should see that an event was delivered.
+
+1. Back in your function app page in the portal, under **Functions**, select **Functions**, choose your function, and you should see a **Total Execution Count** of at least one.
-## Clean up resources
+1. Under **Developer**, select **Monitor**, and you should see traces written from your successful function executions. There might be up to a five-minute delay as events are processed by Application Insights.
-To clean up the resources created in this article, delete the event grid subscription you created in this tutorial.
## Next steps
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
Last updated 10/07/2020
-# Continuous delivery by using GitHub Action
+# Continuous delivery by using GitHub Actions
Use [GitHub Actions](https://github.com/features/actions) to define a workflow to automatically build and deploy code to your function app in Azure Functions.
jobs:
build-and-deploy: runs-on: ubuntu-latest steps:
- - name: 'Checkout GitHub Action'
+ - name: 'Checkout GitHub action'
uses: actions/checkout@v2 - name: Setup DotNet ${{ env.DOTNET_VERSION }} Environment
jobs:
pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}' dotnet build --configuration Release --output ./output popd
- - name: 'Run Azure Functions Action'
+ - name: 'Run Azure Functions action'
uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
jobs:
build-and-deploy: runs-on: windows-latest steps:
- - name: 'Checkout GitHub Action'
+ - name: 'Checkout GitHub action'
uses: actions/checkout@v2 - name: Setup DotNet ${{ env.DOTNET_VERSION }} Environment
jobs:
pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}' dotnet build --configuration Release --output ./output popd
- - name: 'Run Azure Functions Action'
+ - name: 'Run Azure Functions action'
uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
jobs:
build-and-deploy: runs-on: ubuntu-latest steps:
- - name: 'Checkout GitHub Action'
+ - name: 'Checkout GitHub action'
uses: actions/checkout@v2 - name: Setup Java Sdk ${{ env.JAVA_VERSION }}
jobs:
mvn clean package mvn azure-functions:package popd
- - name: 'Run Azure Functions Action'
+ - name: 'Run Azure Functions action'
uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
jobs:
build-and-deploy: runs-on: windows-latest steps:
- - name: 'Checkout GitHub Action'
+ - name: 'Checkout GitHub action'
uses: actions/checkout@v2 - name: Setup Java Sdk ${{ env.JAVA_VERSION }}
jobs:
mvn clean package mvn azure-functions:package popd
- - name: 'Run Azure Functions Action'
+ - name: 'Run Azure Functions action'
uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
jobs:
build-and-deploy: runs-on: ubuntu-latest steps:
- - name: 'Checkout GitHub Action'
+ - name: 'Checkout GitHub action'
uses: actions/checkout@v2 - name: Setup Node ${{ env.NODE_VERSION }} Environment
jobs:
npm run build --if-present npm run test --if-present popd
- - name: 'Run Azure Functions Action'
+ - name: 'Run Azure Functions action'
uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
jobs:
build-and-deploy: runs-on: windows-latest steps:
- - name: 'Checkout GitHub Action'
+ - name: 'Checkout GitHub action'
uses: actions/checkout@v2 - name: Setup Node ${{ env.NODE_VERSION }} Environment
jobs:
npm run build --if-present npm run test --if-present popd
- - name: 'Run Azure Functions Action'
+ - name: 'Run Azure Functions action'
uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
jobs:
build-and-deploy: runs-on: ubuntu-latest steps:
- - name: 'Checkout GitHub Action'
+ - name: 'Checkout GitHub action'
uses: actions/checkout@v2 - name: Setup Python ${{ env.PYTHON_VERSION }} Environment
jobs:
python -m pip install --upgrade pip pip install -r requirements.txt --target=".python_packages/lib/site-packages" popd
- - name: 'Run Azure Functions Action'
+ - name: 'Run Azure Functions action'
uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
azure-functions Functions Manually Run Non Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-manually-run-non-http.md
Open Postman and follow these steps:
## Next steps - [Strategies for testing your code in Azure Functions](./functions-test-a-function.md)-- [Azure Function Event Grid Trigger Local Debugging](./functions-debug-event-grid-trigger-local.md)
+- [Event Grid local testing with viewer web app](./event-grid-how-tos.md#local-testing-with-viewer-web-app)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Event Hub | | | X | | **Services and features supported** | | | | | | | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | |
-| | VM Insights | | X (Public preview) | |
-| | Azure Automation | | X | |
-| | Microsoft Defender for Cloud | | X | |
+| | VM Insights | X (Public preview) | X | |
+| | Microsoft Defender for Cloud | X (Public preview) | X | |
+| | Update Management | X (Public preview, independent of monitoring agents) | X | |
+| | Change Tracking | | X | |
### Linux agents
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Azure Storage | | | X | | | | Event Hub | | | X | | | **Services and features supported** | | | | | |
-| | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | | |
-| | VM Insights | X (Public preview) | X | | |
-| | Container Insights | X (Public preview) | X | | |
-| | Azure Automation | | X | | |
-| | Microsoft Defender for Cloud | | X | | |
+| | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | |
+| | VM Insights | X (Public preview) | X | |
+| | Microsoft Defender for Cloud | X (Public preview) | X | |
+| | Update Management | X (Public preview, independent of monitoring agents) | X | |
+| | Change Tracking | | X | |
<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
The following tables list the operating systems that Azure Monitor Agent and the
#### Linux
-| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>|
+| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>|
|:---|:---:|:---:|:---:|
-| AlmaLinux 8.* | X | X | |
+| AlmaLinux 8 | X | X | |
| Amazon Linux 2017.09 | | X | | | Amazon Linux 2 | | X | |
-| CentOS Linux 8 | X <sup>3</sup> | X | |
+| CentOS Linux 8 | X | X | |
| CentOS Linux 7 | X | X | X | | CentOS Linux 6 | | X | | | CentOS Linux 6.5+ | | X | X |
-| Debian 11 <sup>1</sup> | X | | |
-| Debian 10 <sup>1</sup> | X | X | |
+| Debian 11 | X | | |
+| Debian 10 | X | X | |
| Debian 9 | X | X | X | | Debian 8 | | X | | | Debian 7 | | | X | | OpenSUSE 13.1+ | | | X |
-| Oracle Linux 8 | X <sup>3</sup> | X | |
+| Oracle Linux 8 | X | X | |
| Oracle Linux 7 | X | X | X | | Oracle Linux 6 | | X | | | Oracle Linux 6.4+ | | X | X |
-| Red Hat Enterprise Linux Server 8.5, 8.6 | X | X | |
-| Red Hat Enterprise Linux Server 8, 8.1, 8.2, 8.3, 8.4 | X <sup>3</sup> | X | |
+| Red Hat Enterprise Linux Server 8 | X | X | |
| Red Hat Enterprise Linux Server 7 | X | X | X | | Red Hat Enterprise Linux Server 6 | | X | | | Red Hat Enterprise Linux Server 6.7+ | | X | X |
-| Rocky Linux 8.* | X | X | |
-| SUSE Linux Enterprise Server 15.2 | X <sup>3</sup> | | |
-| SUSE Linux Enterprise Server 15.1 | X <sup>3</sup> | X | |
+| Rocky Linux 8 | X | X | |
+| SUSE Linux Enterprise Server 15 SP2 | X | | |
| SUSE Linux Enterprise Server 15 SP1 | X | X | | | SUSE Linux Enterprise Server 15 | X | X | |
-| SUSE Linux Enterprise Server 12 SP5 | X | X | X |
| SUSE Linux Enterprise Server 12 | X | X | X | | Ubuntu 22.04 LTS | X | | | | Ubuntu 20.04 LTS | X | X | X |
The following tables list the operating systems that Azure Monitor Agent and the
| Ubuntu 14.04 LTS | | X | X | <sup>1</sup> Requires Python (2 or 3) to be installed on the machine.<br>
-<sup>2</sup> Known issue collecting Syslog events in versions prior to 1.9.0.<br>
-<sup>3</sup> Not all kernel versions are supported. For more information, see [Dependency Agent Linux support](../vm/vminsights-dependency-agent-maintenance.md#dependency-agent-linux-support).
+<sup>2</sup> Requires Python 2 to be installed on the machine and aliased to the `python` command.<br>
## Next steps
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
The custom table must be created before you can send data to it. When you create
Use the **Tables - Update** API to create the table with the PowerShell code below. This code creates a table called *MyTable_CL* with two columns. Modify this schema to collect a different table. > [!IMPORTANT]
-> Custom tables must use a suffix of *_CL* as in *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name created in the log Analytics workspace.
+> Custom tables have a suffix of *_CL*; for example, *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name in the Log Analytics workspace.
1. Click the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
See [Structure of a data collection rule in Azure Monitor (preview)](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the text log DCR. > [!IMPORTANT]
- > Custom tables must use a suffix of *_CL* as in *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name created in the log Analytics workspace.
+ > Custom tables have a suffix of *_CL*; for example, *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name in the Log Analytics workspace.
```json {
azure-monitor Activity Log Alerts Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/activity-log-alerts-webhook.md
Title: Understand the webhook schema used in activity log alerts
+ Title: Configure the webhook to get activity log alerts
description: Learn about the schema of the JSON that is posted to a webhook URL when an activity log alert activates. Last updated 03/31/2017
-# Webhooks for Azure activity log alerts
+# Webhooks for activity log alerts
As part of the definition of an action group, you can configure webhook endpoints to receive activity log alert notifications. With webhooks, you can route these notifications to other systems for post-processing or custom actions. This article shows what the payload for the HTTP POST to a webhook looks like. For more information on activity log alerts, see how to [create Azure activity log alerts](./activity-log-alerts.md).
azure-monitor Alerts Smart Detections Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smart-detections-migration.md
A new set of alert rules is created when migrating an Application Insights resou
<sup>(2)</sup> Name of new alert rule after migration <sup>(3)</sup> These smart detection capabilities aren't converted to alerts, because of low usage and reassessment of detection effectiveness. These detectors will no longer be supported for this resource once its migration is completed.
+ > [!NOTE]
+ > The **Failure Anomalies** smart detector is already created as an alert rule and therefore doesn't require migration, so it isn't covered in this document.
+
The migration doesn't change the algorithmic design and behavior of smart detection. The same detection performance is expected before and after the change. You need to apply the migration to each Application Insights resource separately. For resources that aren't explicitly migrated, smart detection will continue to work as before.
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md
description: Application Insights automatically collect and visualize dependenci
ms.devlang: csharp, java, javascript Previously updated : 05/06/2020 Last updated : 08/22/2022
Below is the currently supported list of dependency calls that are automatically
## Node.js
-| Communication libraries | Versions |
-| |-|
-| [HTTP](https://nodejs.org/api/http.html), [HTTPS](https://nodejs.org/api/https.html) | 0.10+ |
-| <b>Storage clients</b> | |
-| [Redis](https://www.npmjs.com/package/redis) | 2.x - 3.x |
-| [MongoDb](https://www.npmjs.com/package/mongodb); [MongoDb Core](https://www.npmjs.com/package/mongodb-core) | 2.x - 3.x |
-| [MySQL](https://www.npmjs.com/package/mysql) | 2.x |
-| [PostgreSql](https://www.npmjs.com/package/pg); | 6.x - 8.x |
-| [pg-pool](https://www.npmjs.com/package/pg-pool) | 1.x - 2.x |
-| <b>Logging libraries</b> | |
-| [console](https://nodejs.org/api/console.html) | 0.10+ |
-| [Bunyan](https://www.npmjs.com/package/bunyan) | 1.x |
-| [Winston](https://www.npmjs.com/package/winston) | 2.x - 3.x |
+The latest list of [currently supported modules](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers) is maintained in the node-diagnostic-channel GitHub repository.
## JavaScript
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn about the steps required to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 09/23/2020 Last updated : 08/22/2022
Once the migration is complete, you can use [diagnostic settings](../essentials/
- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource. > [!NOTE]
- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period, you may need to adjust your workspace retention settings.
+ > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period after migration, you will need to adjust your [workspace retention settings](https://docs.microsoft.com/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
> - If you've selected data retention greater than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until that data exceeds the retention period. > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
To migrate to diagnostic settings export:
> [!CAUTION] > If you want to store diagnostic logs in a Log Analytics workspace, there are two things to consider to avoid seeing duplicate data in Application Insights: > * The destination can't be the same Log Analytics workspace that your Application Insights resource is based on.
-> * The Application Insights user can't have access to both the Application Insights resource and the workspace created for diagnostic logs. This can be done with [Azure role-based access control (Azure RBAC)](./resources-roles-access-control.md).
+> * The Application Insights user can't have access to both workspaces. This can be done by setting the Log Analytics [Access control mode](/azure/azure-monitor/logs/log-analytics-workspace-overview#permissions) to **Requires workspace permissions** and ensuring through [Azure role-based access control (Azure RBAC)](./resources-roles-access-control.md) that the user only has access to the Log Analytics workspace the Application Insights resource is based on.
+>
+> These steps are necessary because Application Insights accesses telemetry across Application Insights resources (including Log Analytics workspaces) to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources containing the same data.
<!--Link references-->
azure-monitor Tutorial Asp Net Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-custom-metrics.md
+
+ Title: Application Insights custom metrics with .NET and .NET Core
+description: Learn how to use Application Insights to capture locally pre-aggregated metrics for .NET and .NET Core applications.
+ Last updated : 08/22/2022
+ms.devlang: csharp
+++
+# Capture Application Insights custom metrics with .NET and .NET Core
+
+In this article, you'll learn how to capture custom metrics with Application Insights in .NET and .NET Core apps.
+
+Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use.
+
+## ASP.NET Core applications
+
+### Prerequisites
+
+If you'd like to follow along with the guidance in this article, you'll need the following prerequisites.
+
+* Visual Studio 2022
+* Visual Studio Workloads: ASP.NET and web development, Data storage and processing, and Azure development
+* .NET 6.0
+* Azure subscription and user account (with the ability to create and delete resources)
+* Deploy the [completed sample application (`2 - Completed Application`)](./tutorial-asp-net-core.md) or an existing ASP.NET Core application with the [Application Insights for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) NuGet package installed and [configured to gather server-side telemetry](asp-net-core.md#enable-application-insights-server-side-telemetry-visual-studio).
+
+### Custom metrics overview
+
+The Application Insights .NET and .NET Core SDKs have two different methods of collecting custom metrics: `TrackMetric()` and `GetMetric()`. The key difference between these two methods is local aggregation: `TrackMetric()` lacks pre-aggregation, while `GetMetric()` has pre-aggregation. The recommended approach is to use aggregation, so `TrackMetric()` is no longer the preferred method of collecting custom metrics. This article walks you through using the `GetMetric()` method and some of the rationale behind how it works.
+
+#### Pre-aggregating vs. non-pre-aggregating API
+
+`TrackMetric()` sends raw telemetry denoting a metric. It's inefficient to send a single telemetry item for each value. `TrackMetric()` is also inefficient in terms of performance because every `TrackMetric(item)` call goes through the full SDK pipeline of telemetry initializers and processors. Unlike `TrackMetric()`, `GetMetric()` handles local pre-aggregation for you and then submits only an aggregated summary metric at a fixed interval of one minute. So if you need to closely monitor some custom metric at the second or even millisecond level, you can do so while incurring only the storage and network traffic cost of monitoring every minute. This behavior also greatly reduces the risk of throttling because the total number of telemetry items that need to be sent for an aggregated metric is greatly reduced.
+
+In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` aren't subject to [sampling](./sampling.md). Sampling important metrics can lead to scenarios where alerts you may have built around those metrics become unreliable. By never sampling your custom metrics, you can generally be confident that when your alert thresholds are breached, an alert will fire. But because custom metrics aren't sampled, there are some potential concerns.
+
+Tracking trends in a metric every second, or at an even more granular interval, can result in:
+
+- Increased data storage costs. There's a cost associated with how much data you send to Azure Monitor. (The more data you send the greater the overall cost of monitoring.)
+- Increased network traffic/performance overhead. (In some scenarios this overhead could have both a monetary and application performance cost.)
+- Risk of ingestion throttling. (The Azure Monitor service drops ("throttles") data points when your app sends a high rate of telemetry in a short time interval.)
+
+Throttling is a concern because it can lead to missed alerts. The condition to trigger an alert could occur locally and then be dropped at the ingestion endpoint because too much data was sent. We don't recommend using `TrackMetric()` for .NET and .NET Core unless you've implemented your own local aggregation logic. If you're trying to track every instance in which an event occurs over a given time period, you may find that [`TrackEvent()`](./api-custom-events-metrics.md#trackevent) is a better fit. Keep in mind that unlike custom metrics, custom events are subject to sampling. You can still use `TrackMetric()` without writing your own local pre-aggregation, but if you do so, be aware of the pitfalls.
+
+In summary, `GetMetric()` is the recommended approach because it performs pre-aggregation: it accumulates values from all the `TrackValue()` calls and sends a summary aggregate once every minute. `GetMetric()` can significantly reduce cost and performance overhead by sending fewer data points while still collecting all relevant information.
+
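To make the difference concrete, here's a minimal console sketch contrasting the two calls. It isn't part of the Azure Cafe sample used later in this article; the connection string placeholder and the metric name are illustrative assumptions.

```csharp
// Hypothetical sketch contrasting TrackMetric() and GetMetric().
// The connection string is a placeholder; supply your own resource's value.
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

class MetricApiContrast
{
    static void Main()
    {
        var config = TelemetryConfiguration.CreateDefault();
        config.ConnectionString = "<your-connection-string>";
        var client = new TelemetryClient(config);

        // TrackMetric(): every call produces its own telemetry item and
        // passes through the full SDK pipeline.
        client.TrackMetric("QueueLength", 42);

        // GetMetric(): values are pre-aggregated locally; a single summary
        // item is sent roughly once per minute.
        var queueLength = client.GetMetric("QueueLength");
        queueLength.TrackValue(42);

        client.Flush();
    }
}
```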
+## Getting a TelemetryClient instance
+
+Get an instance of `TelemetryClient` from the dependency injection container in **HomeController.cs**:
+
+```csharp
+//... additional code removed for brevity
+using Microsoft.ApplicationInsights;
+
+namespace AzureCafe.Controllers
+{
+ public class HomeController : Controller
+ {
+ private readonly ILogger<HomeController> _logger;
+ private AzureCafeContext _cafeContext;
+ private BlobContainerClient _blobContainerClient;
+ private TextAnalyticsClient _textAnalyticsClient;
+ private TelemetryClient _telemetryClient;
+
+ public HomeController(ILogger<HomeController> logger, AzureCafeContext context, BlobContainerClient blobContainerClient, TextAnalyticsClient textAnalyticsClient, TelemetryClient telemetryClient)
+ {
+ _logger = logger;
+ _cafeContext = context;
+ _blobContainerClient = blobContainerClient;
+ _textAnalyticsClient = textAnalyticsClient;
+ _telemetryClient = telemetryClient;
+ }
+
+ //... additional code removed for brevity
+ }
+}
+```
+
+`TelemetryClient` is thread safe.
+
+## TrackMetric
+
+Application Insights can chart metrics that aren't attached to particular events. For example, you could monitor a queue length at regular intervals. With metrics, the individual measurements are of less interest than the variations and trends, and so statistical charts are useful.
+
+To send metrics to Application Insights, you can use the `TrackMetric(..)` API. We'll cover the recommended way to send a metric:
+
+* **Aggregation**. When you work with metrics, every single measurement is rarely of interest. Instead, a summary of what happened during a particular time period is important. Such a summary is called _aggregation_.
+
+ For example, if you record two measurements whose values total `1` during a particular time period, the aggregate metric sum for that time period is `1` and the count of the metric values is `2`. When you use the aggregation approach, you invoke `TrackMetric` only once per time period and send the aggregate values, as illustrated in the sketch below. We recommend this approach because it can significantly reduce the cost and performance overhead by sending fewer data points to Application Insights, while still collecting all relevant information.
+
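As a rough illustration of that manual-aggregation approach, the following hypothetical sketch accumulates values locally and sends a single aggregate per period by passing a `MetricTelemetry` item to `TrackMetric()`. It isn't part of the Azure Cafe sample, and the class and member names are assumptions.

```csharp
// Hypothetical sketch: manual local aggregation before calling TrackMetric().
// Not thread-safe; shown only to illustrate "one TrackMetric per time period".
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class ReviewMetricAggregator
{
    private readonly TelemetryClient _telemetryClient;
    private double _sum;
    private int _count;

    public ReviewMetricAggregator(TelemetryClient telemetryClient) =>
        _telemetryClient = telemetryClient;

    // Record a single measurement locally; nothing is sent yet.
    public void Record(double rating)
    {
        _sum += rating;
        _count++;
    }

    // Call once per time period (for example, from a timer) to send the aggregate.
    public void FlushPeriod()
    {
        if (_count == 0)
        {
            return;
        }

        _telemetryClient.TrackMetric(new MetricTelemetry
        {
            Name = "ReviewPerformed",
            Sum = _sum,
            Count = _count
        });

        _sum = 0;
        _count = 0;
    }
}
```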
+### TrackMetric example
+
+1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file.
+
+2. Locate the `CreateReview` method and the following code.
+
+ ```csharp
+ if (model.Comments != null)
+ {
+ var response = _textAnalyticsClient.AnalyzeSentiment(model.Comments);
+ review.CommentsSentiment = response.Value.Sentiment.ToString();
+ }
+ ```
+
+3. Immediately following the previous code, insert the following to add a custom metric.
+
+ ```csharp
+ _telemetryClient.TrackMetric("ReviewPerformed", model.Rating);
+ ```
+
+4. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
+
+ ![Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted.](./media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png "Publish Web App")
+
+5. Select **Publish** to promote the new code to the Azure App Service.
+
+ ![Screenshot of the Azure Cafe publish profile screen with the Publish button highlighted.](./media/tutorial-asp-net-custom-metrics/publish-profile.png "Publish profile")
+
+6. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application.
+
+ ![Screenshot of the Azure Cafe web application.](./media/tutorial-asp-net-custom-metrics/azure-cafe-index.png "Azure Cafe web application")
+
+7. Perform various activities in the web application to generate some telemetry.
+
+ 1. Select **Details** next to a Cafe to view its menu and reviews.
+
+ ![Screenshot of a portion of the Azure Cafe list with the Details button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-details-button.png "Azure Cafe Details")
+
+ 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review.
+
+ ![Screenshot of the Cafe details screen with the Add review button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png "Add review")
+
+ 3. On the Create a review dialog, enter a name, rating, and comments, and upload a photo for the review. Once completed, select **Add review**.
+
+ ![Screenshot of the Create a review dialog.](./media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png "Create a review")
+
+ 4. Repeat adding reviews as desired to generate more telemetry.
+
+### View metrics in Application Insights
+
+1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="First screenshot of a resource group with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png":::
+
+2. From the left menu of the Application Insights resource, select **Logs** under the **Monitoring** section. In the **Tables** pane, double-click the **customMetrics** table, located under the **Application Insights** tree. Modify the query to retrieve metrics for the custom metric named **ReviewPerformed** as follows, and then select **Run** to filter the results.
+
+ ```kql
+ customMetrics
+ | where name == "ReviewPerformed"
+ ```
+
+3. Observe that the results display the rating value present in the review.
+
+## GetMetric
+
+As referenced before, `GetMetric(..)` is the preferred method for sending metrics. To make use of this method, we'll make some changes to the existing code.
+
+When running the sample code, you'll see that no telemetry is sent from the application right away. A single telemetry item is sent at around the 60-second mark.
+
+> [!NOTE]
+> `GetMetric()` doesn't support tracking the last value (that is, a "gauge") or tracking histograms/distributions.
+
+### GetMetric example
+
+1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file.
+
+2. Locate the `CreateReview` method and the code added in the previous [TrackMetric example](#trackmetric-example).
+
+3. Replace the previously added code in _Step 3_ with the following one.
+
+ ```csharp
+ var metric = _telemetryClient.GetMetric("ReviewPerformed");
+ metric.TrackValue(model.Rating);
+ ```
+
+4. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
+
+ ![Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted.](./media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png "Publish Web App")
+
+5. Select **Publish** to promote the new code to the Azure App Service.
+
+ ![Screenshot of the Azure Cafe publish profile with the Publish button highlighted.](./media/tutorial-asp-net-custom-metrics/publish-profile.png "Publish profile")
+
+6. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application.
+
+ ![Screenshot of the Azure Cafe web application.](./media/tutorial-asp-net-custom-metrics/azure-cafe-index.png "Azure Cafe web application")
+
+7. Perform various activities in the web application to generate some telemetry.
+
+ 1. Select **Details** next to a Cafe to view its menu and reviews.
+
+ ![Screenshot of a portion of the Azure Cafe list with the Details button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-details-button.png "Azure Cafe Details")
+
+ 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review.
+
+ ![Screenshot of the Cafe details with the Add review button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png "Add review")
+
+ 3. On the Create a review dialog, enter a name, rating, and comments, and upload a photo for the review. Once completed, select **Add review**.
+
+ ![Screenshot of the Create a review dialog displays.](./media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png "Create a review")
+
+ 4. Repeat adding reviews as desired to generate more telemetry.
+
+### View metrics in Application Insights
+
+1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+
+ ![Second screenshot of a resource group with the Application Insights resource highlighted.](./media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png "Resource Group")
+
+2. From the left menu of the Application Insights resource, select **Logs** under the **Monitoring** section. In the **Tables** pane, double-click the **customMetrics** table, located under the **Application Insights** tree. Modify the query to retrieve metrics for the custom metric named **ReviewPerformed** as follows, and then select **Run** to filter the results.
+
+ ```kql
+ customMetrics
+ | where name == "ReviewPerformed"
+ ```
+
+3. Observe that the results display the rating value present in the review and the aggregated values.
+
+## Multi-dimensional metrics
+
+The examples in the previous section show zero-dimensional metrics. Metrics can also be multi-dimensional. We currently support up to 10 dimensions.
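For illustration, a hypothetical sketch of a metric with two dimensions might look like the following. It isn't part of the Azure Cafe sample; the metric name, dimension names, and connection string placeholder are assumptions.

```csharp
// Hypothetical sketch of a two-dimensional custom metric.
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

class MultiDimensionalMetricSample
{
    static void Main()
    {
        var config = TelemetryConfiguration.CreateDefault();
        config.ConnectionString = "<your-connection-string>"; // placeholder
        var client = new TelemetryClient(config);

        // Each additional argument to GetMetric() declares another dimension.
        var requestDuration = client.GetMetric("RequestDuration", "Endpoint", "HttpMethod");

        // TrackValue() takes the metric value followed by one value per dimension.
        requestDuration.TrackValue(117.5, "/reviews", "POST");
        requestDuration.TrackValue(42.0, "/menu", "GET");

        client.Flush();
    }
}
```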
+
+By default, multi-dimensional metrics within the Metric explorer experience aren't turned on in Application Insights resources.
+
+>[!NOTE]
+> This is a preview feature and additional billing may apply in the future.
+
+### Enable multi-dimensional metrics
+
+To enable multi-dimensional metrics for an Application Insights resource, select **Usage and estimated costs** > **Custom Metrics** > **Send custom metrics to Azure Metric Store (With dimensions)** > **OK**.
+
+Once you've made that change and sent new multi-dimensional telemetry, you'll be able to **Apply splitting**.
+
+> [!NOTE]
+> Only newly sent metrics after the feature was turned on in the portal will have dimensions stored.
+
+### Multi-dimensional metrics example
+
+1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file.
+
+2. Locate the `CreateReview` method and the code added in the previous [GetMetric example](#getmetric-example).
+
+3. Replace the previously added code in _Step 3_ with the following one.
+
+ ```csharp
+ var metric = _telemetryClient.GetMetric("ReviewPerformed", "IncludesPhoto");
+ ```
+
+4. Still in the `CreateReview` method, change the code to match the following.
+
+ ```csharp
+ [HttpPost]
+ [ValidateAntiForgeryToken]
+ public ActionResult CreateReview(int id, CreateReviewModel model)
+ {
+ //... additional code removed for brevity
+ var metric = _telemetryClient.GetMetric("ReviewPerformed", "IncludesPhoto");
+
+ if ( model.ReviewPhoto != null )
+ {
+ using (Stream stream = model.ReviewPhoto.OpenReadStream())
+ {
+ //... additional code removed for brevity
+ }
+
+ metric.TrackValue(model.Rating, bool.TrueString);
+ }
+ else
+ {
+ metric.TrackValue(model.Rating, bool.FalseString);
+ }
+ //... additional code removed for brevity
+ }
+ ```
+
+5. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
+
+ ![Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted.](./media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png "Publish Web App")
+
+6. Select **Publish** to promote the new code to the Azure App Service.
+
+ ![Screenshot of the Azure Cafe publish profile with the Publish button highlighted.](./media/tutorial-asp-net-custom-metrics/publish-profile.png "Publish profile")
+
+7. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application.
+
+ ![Screenshot of the Azure Cafe web application.](./media/tutorial-asp-net-custom-metrics/azure-cafe-index.png "Azure Cafe web application")
+
+8. Perform various activities in the web application to generate some telemetry.
+
+ 1. Select **Details** next to a Cafe to view its menu and reviews.
+
+ ![Screenshot of a portion of the Azure Cafe list with the Details button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-details-button.png "Azure Cafe Details")
+
+ 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review.
+
+ ![Screenshot of the Cafe details screen with the Add review button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png "Add review")
+
+ 3. On the Create a review dialog, enter a name, rating, and comments, and upload a photo for the review. Once completed, select **Add review**.
+
+ ![Screenshot of the Create a review dialog.](./media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png "Create a review")
+
+ 4. Repeat adding reviews as desired to generate more telemetry.
+
+### View logs in Application Insights
+
+1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+
+ ![Third screenshot of a resource group with the Application Insights resource highlighted.](./media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png "Resource Group")
+
+2. From the left menu of the Application Insights resource, select **Logs** under the **Monitoring** section. In the **Tables** pane, double-click the **customMetrics** table, located under the **Application Insights** tree. Modify the query to retrieve metrics for the custom metric named **ReviewPerformed** as follows, and then select **Run** to filter the results.
+
+ ```kql
+ customMetrics
+ | where name == "ReviewPerformed"
+ ```
+
+3. Observe that the results display the rating value present in the review and the aggregated values.
+
+4. In order to better observe the **IncludesPhoto** dimension, we can extract it into a separate variable (column) by using the following query.
+
+ ```kql
+ customMetrics
+ | extend IncludesPhoto = tobool(customDimensions.IncludesPhoto)
+ | where name == "ReviewPerformed"
+ ```
+
+5. Since we reused the same custom metric name as before, results with and without the custom dimension will be displayed. To avoid that, we'll update the query to match the following one.
+
+ ```kql
+ customMetrics
+ | extend IncludesPhoto = tobool(customDimensions.IncludesPhoto)
+ | where name == "ReviewPerformed" and isnotnull(IncludesPhoto)
+ ```
+
+### View metrics in Application Insights
+
+1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+
+ ![Fourth screenshot of a resource group with the Application Insights resource highlighted.](./media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png "Resource Group")
+
+2. From the left menu of the Application Insights resource, select **Metrics** from beneath the **Monitoring** section.
+
+3. For **Metric Namespace**, select **azure.applicationinsights**.
+
+ ![Screenshot of metrics explorer with the Metric Namespace highlighted.](./media/tutorial-asp-net-custom-metrics/metrics-explorer-namespace.png "Metric Namespace")
+
+4. For **Metric**, select **ReviewPerformed**.
+
+ ![Screenshot of metrics explorer with the Metric highlighted.](./media/tutorial-asp-net-custom-metrics/metrics-explorer-metric.png "Metric")
+
+5. You'll notice that you aren't yet able to split the metric by your new custom dimension or view your custom dimension with the metrics view. Select **Apply Splitting**.
+
+ ![Screenshot of the Apply Splitting button.](./media/tutorial-asp-net-custom-metrics/apply-splitting.png "Splitting")
+
+6. For the custom dimension **Values** to use, select **IncludesPhoto**.
+
+ ![Screenshot illustrating splitting using a custom dimension](./media/tutorial-asp-net-custom-metrics/splitting-dimension.png "Splitting dimension")
+
+## Next steps
+
+* [Metric Explorer](../essentials/metrics-getting-started.md)
+* How to enable Application Insights for [ASP.NET Core Applications](./asp-net-core.md)
azure-monitor Best Practices Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md
Title: Azure Monitor best practices - Alerts and automated actions
+ Title: 'Azure Monitor best practices: Alerts and automated actions'
description: Recommendations for deployment of Azure Monitor alerts and automated actions.
-# Deploying Azure Monitor - Alerts and automated actions
-This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It provides guidance on alerts in Azure Monitor, which proactively notify you of important data or patterns identified in your monitoring data. You can view alerts in the Azure portal, have them send a proactive notification, or have them initiated some automated action to attempt to remediate the issue.
+# Deploy Azure Monitor: Alerts and automated actions
+
+This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It provides guidance on alerts in Azure Monitor. Alerts proactively notify you of important data or patterns identified in your monitoring data. You can view alerts in the Azure portal. You can create alerts that:
+
+- Send a proactive notification.
+- Initiate an automated action to attempt to remediate an issue.
+ ## Alerting strategy
-An alerting strategy defines your organizations standards for the types of alert rules that you'll create for different scenarios, how you'll categorize and manage alerts after they're created, and automated actions and notifications that you'll take in response to alerts. Defining an alert strategy assists you defining the configuration of alert rules including alert severity and action groups.
-See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) for factors that you should consider in developing an alerting strategy.
+An alerting strategy defines your organization's standards for:
+- The types of alert rules that you'll create for different scenarios.
+- How you'll categorize and manage alerts after they're created.
+- Automated actions and notifications that you'll take in response to alerts.
+
+Defining an alert strategy assists you in defining the configuration of alert rules including alert severity and action groups.
+
+For factors to consider as you develop an alerting strategy, see [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy).
## Alert rule types
-Alerts in Azure Monitor are created by alert rules which you must create. See the monitoring documentation for each Azure service for guidance on recommended alert rules. Azure Monitor does not have any alert rules by default.
-There are multiple types of alert rules defined by the type of data that they use. Each has different capabilities and a different cost. The basic strategy you should follow is to use the alert rule type with the lowest cost that provides the logic that you require.
+Alerts in Azure Monitor are created by alert rules that you must create. For guidance on recommended alert rules, see the monitoring documentation for each Azure service. Azure Monitor doesn't have any alert rules by default.
+
+Multiple types of alert rules are defined by the type of data they use. Each has different capabilities and a different cost. The basic strategy is to use the alert rule type with the lowest cost that provides the logic you require.
-- [Activity log rules](alerts/activity-log-alerts.md). Creates an alert in response to a new Activity log event that matches specified conditions. There is no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md) for details on creating an Activity log alert.-- [Metric alert rules](alerts/alerts-metric-overview.md). Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful meaning that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There is a cost to metric alerts, but this is often significantly less than log alerts. See [Create, view, and manage metric alerts using Azure Monitor](alerts/alerts-metric.md) for details on creating a metric alert.-- [Log alert rules](alerts/alerts-unified-log.md). Creates an alert when the results of a schedule query matches specified criteria. They are the most expensive of the alert rules, but they allow the most complex criteria. See [Create, view, and manage log alerts using Azure Monitor](alerts/alerts-log.md) for details on creating a log query alert.-- [Application alerts](app/monitor-web-app-availability.md) allow you to perform proactive performance and availability testing of your web application. You can perform a simple ping test at no cost, but there is a cost to more complex testing. See [Monitor the availability of any website](app/monitor-web-app-availability.md) for a description of the different tests and details on creating them.
+- [Activity log rules](alerts/activity-log-alerts.md). Creates an alert in response to a new activity log event that matches specified conditions. There's no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md) for information on creating an activity log alert.
+- [Metric alert rules](alerts/alerts-metric-overview.md). Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log alerts. See [Create, view, and manage metric alerts by using Azure Monitor](alerts/alerts-metric.md) for information on creating a metric alert.
+- [Log alert rules](alerts/alerts-unified-log.md). Creates an alert when the results of a scheduled query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create, view, and manage log alerts by using Azure Monitor](alerts/alerts-log.md) for information on creating a log query alert.
+- [Application alerts](app/monitor-web-app-availability.md). Performs proactive performance and availability testing of your web application. You can perform a ping test at no cost, but there's a cost to more complex testing. See [Monitor the availability of any website](app/monitor-web-app-availability.md) for a description of the different tests and information on creating them.
## Alert severity
-Each alert rule defines the severity of the alerts that it creates based on the table below. Alerts in the Azure portal are grouped by level so that you can manage similar alerts together and quickly identify those that require the greatest urgency.
+
+Each alert rule defines the severity of the alerts that it creates based on the following table. Alerts in the Azure portal are grouped by level so that you can manage similar alerts together and quickly identify alerts that require the greatest urgency.
| Level | Name | Description |
|:|:|:|
| Sev 0 | Critical | Loss of service or application availability or severe degradation of performance. Requires immediate attention. |
| Sev 1 | Error | Degradation of performance or loss of availability of some aspect of an application or service. Requires attention but not immediate. |
-| Sev 2 | Warning | A problem that does not include any current loss in availability or performance, although has the potential to lead to more sever problems if unaddressed. |
-| Sev 3 | Informational | Does not indicate a problem but rather interesting information to an operator such as successful completion of a regular process. |
-| Sev 4 | Verbose | Detailed information not useful
+| Sev 2 | Warning | A problem that doesn't include any current loss in availability or performance, although it has the potential to lead to more severe problems if unaddressed. |
+| Sev 3 | Informational | Doesn't indicate a problem but provides interesting information to an operator, such as successful completion of a regular process. |
+| Sev 4 | Verbose | Detailed information that isn't useful. |
-You should assess the severity of the condition each rule is identifying to assign an appropriate level. The types of issues you assign to each severity level and your standard response to each should be defined in your alerts strategy.
+Assess the severity of the condition each rule is identifying to assign an appropriate level. Define the types of issues you assign to each severity level and your standard response to each in your alerts strategy.
## Action groups
-Automated responses to alerts in Azure Monitor are defined in [action groups](alerts/action-groups.md). An action group is a collection of one or more notifications and actions that are fired when an alert is triggered. A single action group can be used with multiple alert rules and contain one or more of the following:
-- Notifications. Messages that notify operators and administrators that an alert was created.-- Actions. Automated processes that attempt to correct the detected issue,
+Automated responses to alerts in Azure Monitor are defined in [action groups](alerts/action-groups.md). An action group is a collection of one or more notifications and actions that are fired when an alert is triggered. A single action group can be used with multiple alert rules and contain one or more of the following items:
+
+- **Notifications**: Messages that notify operators and administrators that an alert was created.
+- **Actions**: Automated processes that attempt to correct the detected issue.
+ ## Notifications
-Notifications are messages sent to one or more users to notify them that an alert has been created. Since a single action group can be used with multiple alert rules, you should design a set of action groups for different sets of administrators and users who will receive the same sets of alerts. Use any of the following types of notifications depending on the preferences of your operators and your organizational standards.
+
+Notifications are messages sent to one or more users to notify them that an alert has been created. Because a single action group can be used with multiple alert rules, you should design a set of action groups for different sets of administrators and users who will receive the same sets of alerts. Use any of the following types of notifications depending on the preferences of your operators and your organizational standards:
- Email - SMS - Push to Azure app - Voice-- Email Azure Resource Manager Role
+- Email Azure Resource Manager role
## Actions
-Actions are automated responses to an alert. You can use the available actions for any scenario that they support, but the following sections describe how each are typically used.
+
+Actions are automated responses to an alert. You can use the available actions for any scenario that they support, but the following sections describe how each action is typically used.
### Automated remediation
-Use the following actions to attempt automated remediation of the issue identified by the alert.
-- Automation runbook - Start either a built-in or custom a runbook in Azure Automation. For example, built-in runbooks are available to perform such functions as restarting or scaling up a virtual machine.-- Azure Function - Start an Azure Function.
+Use the following actions to attempt automated remediation of the issue identified by the alert:
+- **Automation runbook**: Start a built-in runbook or a custom runbook in Azure Automation. For example, built-in runbooks are available to perform such functions as restarting or scaling up a virtual machine.
+- **Azure Functions**: Start an Azure function.
### ITSM and on-call management -- ITSM - Use the [ITSM connector]() to create work items in your ITSM tool based on alerts from Azure Monitor. You first configure the connector and then use the **ITSM** action in alert rules.-- Webhooks - Send the alert to an incident management system that supports webhooks such as PagerDuty and Splunk On-Call.-- Secure webhook - ITSM integration with Azure AD Authentication
+- **IT service management (ITSM)**: Use the [ITSM Connector]() to create work items in your ITSM tool based on alerts from Azure Monitor. You first configure the connector and then use the **ITSM** action in alert rules.
+- **Webhooks**: Send the alert to an incident management system that supports webhooks such as PagerDuty and Splunk On-Call.
+- **Secure webhook**: Integrate ITSM with Azure Active Directory Authentication.
+## Minimize alert activity
-## Minimizing alert activity
-While you want to create alerts for any important information in your environment, you should ensure that you aren't creating excessive alerts and notifications for issues that don't warrant them. Use the following guidelines to minimize your alert activity to ensure that critical issues are surfaced while you don't generate excess information and notifications for administrators.
+You want to create alerts for any important information in your environment. But you don't want to create excessive alerts and notifications for issues that don't warrant them. To minimize your alert activity to ensure that critical issues are surfaced while you don't generate excess information and notifications for administrators, follow these guidelines:
-- See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) for principles on determining whether a symptom is an appropriate candidate for alerting.
+- See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) to determine whether a symptom is an appropriate candidate for alerting.
- Use the **Automatically resolve alerts** option in metric alert rules to resolve alerts when the condition has been corrected.-- Use **Suppress alerts** option in log query alert rules which prevents creating multiple alerts for the same issue.-- Ensure that you use appropriate severity levels for alert rules so that high priority issues can be analyzed together.-- Limit notifications for alerts with a severity of Warning or less since they don't require immediate attention.
+- Use the **Suppress alerts** option in log query alert rules to avoid creating multiple alerts for the same issue.
+- Ensure that you use appropriate severity levels for alert rules so that high-priority issues can be analyzed together.
+- Limit notifications for alerts with a severity of Warning or less because they don't require immediate attention.
## Create alert rules at scale
-Since you'll typically want to alert on issues for all of your critical Azure applications and resources, you should leverage methods for creating alert rules at scale.
-- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. See [Monitoring at scale using metric alerts in Azure Monitor](alerts/alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor) for a list of Azure services that are currently supported for this feature.-- For metric alert rules for Azure services that don't support multiple resources, leverage automation tools such as CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. See [Resource Manager template samples for metric alert rules in Azure Monitor](alerts/resource-manager-alerts-metric.md) for samples.-- Write queries in log query alert rules to return data for multiple resources. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.
+Typically, you'll want to alert on issues for all your critical Azure applications and resources. Use the following methods for creating alert rules at scale:
+- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Monitoring at scale using metric alerts in Azure Monitor](alerts/alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).
+- For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](alerts/resource-manager-alerts-metric.md).
+- To return data for multiple resources, write queries in log query alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.
> [!NOTE]
-> Resource-centric log query alert rules which are currently in public preview allow you to use all resources in a subscription or resource group as a target for a log query alert.
+> Resource-centric log query alert rules currently in public preview allow you to use all resources in a subscription or resource group as a target for a log query alert.
## Next steps -- [Define alerts and automated actions from Azure Monitor data](best-practices-alerts.md)
+[Define alerts and automated actions from Azure Monitor data](best-practices-alerts.md)
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Title: Azure Monitor best practices - Cost management
+ Title: 'Azure Monitor best practices: Cost management'
description: Guidance and recommendations for reducing your cost for Azure Monitor.
-# Azure Monitor best practices - Cost management
-This article provides guidance on reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost effective manner. This includes leveraging cost saving features and ensuring that you're not paying for data collection that provides little value. It also provides guidance for regularly monitoring your usage so that you can proactively detect and identify sources responsible for excessive usage.
+# Azure Monitor best practices: Cost management
+This article provides guidance on reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. It explains how to take advantage of cost-saving features to help ensure that you're not paying for data collection that provides little value. It also provides guidance for regularly monitoring your usage so that you can proactively detect and identify sources responsible for excessive usage.
## Understand Azure Monitor charges+ You should start by understanding the different ways that Azure Monitor charges and how to view your monthly bill. See [Azure Monitor cost and usage](usage-estimated-costs.md) for a complete description and the different tools available to analyze your charges. ## Configure workspaces
-You can start using Azure Monitor with a single Log Analytics workspace using default options. As your monitoring environment grows though, you will need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces, and you want to evaluate configuration options that allow you to reduce your monitoring costs.
+
+You can start using Azure Monitor with a single Log Analytics workspace by using default options. As your monitoring environment grows, you'll need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces. You want to evaluate configuration options that allow you to reduce your monitoring costs.
### Configure pricing tier or dedicated cluster
-By default, workspaces will use Pay-As-You-Go pricing with no minimum data volume. If you collect a sufficient amount of data though, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers). You commit to a daily minimum of data collected in exchange for a lower rate.
-[Dedicated clusters](logs/logs-dedicated-clusters.md) provide additional functionality and cost savings if you ingest at least 500 GB per day collectively among multiple workspaces in the same region. Unlike commitment tiers, workspaces in a dedicated cluster don't need to individually reach the 500 GB.
+By default, workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers). You commit to a daily minimum of data collected in exchange for a lower rate.
-See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers.
+[Dedicated clusters](logs/logs-dedicated-clusters.md) provide more functionality and cost savings if you ingest at least 500 GB per day collectively among multiple workspaces in the same region. Unlike commitment tiers, workspaces in a dedicated cluster don't need to individually reach 500 GB.
+
+See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers.
### Optimize workspace configuration
-As your monitoring environment becomes more complex, you will need to consider whether to create additional Log Analytics workspaces. This may be as you place resources in additional regions or as you implement additional services that use workspaces such as Azure Sentinel and Microsoft Defender for Cloud.
-There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. See [Workspaces with Microsoft Sentinel](logs/cost-logs.md#workspaces-with-microsoft-sentinel) and [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) for a description of these implications and guidance on determining the most cost-effective solution for your environment.
+As your monitoring environment becomes more complex, you'll need to consider whether to create more Log Analytics workspaces. This need might surface as you place resources in more regions or as you implement more services that use workspaces such as Microsoft Sentinel and Microsoft Defender for Cloud.
+
+There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. For a description of these implications and guidance on determining the most cost-effective solution for your environment, see:
+
+ - [Workspaces with Microsoft Sentinel](logs/cost-logs.md#workspaces-with-microsoft-sentinel)
+- [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud)
## Configure tables in each workspace
-Except for [tables that don't incur charges](logs/cost-logs.md#data-size-calculation), all data in a Log Analytics workspace is billed at the same rate by default. You may be collecting data though that you query infrequently or that you need to archive for compliance but rarely access. You can significantly reduce your costs by configuring Basic Logs and by optimizing your data retention and archiving.
+
+Except for [tables that don't incur charges](logs/cost-logs.md#data-size-calculation), all data in a Log Analytics workspace is billed at the same rate by default. You might be collecting data that you query infrequently or that you need to archive for compliance but rarely access. You can significantly reduce your costs by optimizing your data retention and archiving and configuring Basic Logs.
### Configure data retention and archiving
-Data collected in a Log Analytics workspace is retained for 31 days at no charge (90 days if Azure Sentinel is enabled on the workspace). You can retain data beyond the default for trending analysis or other reporting, but there is a charge for this retention.
-Your retention requirement may just be for compliance reasons or for occasional investigation or analysis of historical data. In this case, you should configure [Archived Logs](logs/data-retention-archive.md) which allows you to retain data long term (up to 7 years) at a significantly reduced cost. There is a cost to search archived data or temporarily restore it for analysis. If you require infrequent access to this data though, this cost will be more than offset by the reduced retention cost.
+Data collected in a Log Analytics workspace is retained for 31 days at no charge. The time period is 90 days if Microsoft Sentinel is enabled on the workspace. You can retain data beyond the default for trending analysis or other reporting, but there's a charge for this retention.
+
+Your retention requirement might be for compliance reasons or for occasional investigation or analysis of historical data. In this case, you should configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost. There's a cost to search archived data or temporarily restore it for analysis. If you require infrequent access to this data, this cost is more than offset by the reduced retention cost.
-You can configure retention and archiving for all tables in a workspace or configure each table separately. This allows you to optimize your costs by setting only the retention you require for each data type.
+You can configure retention and archiving for all tables in a workspace or configure each table separately. The options allow you to optimize your costs by setting only the retention you require for each data type.
### Configure Basic Logs (preview)
-You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting and auditing as [Basic Logs](logs/basic-logs-configure.md). Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features. They can't be used for alerting, their retention is set to eight days, they support a limited version of the query language, and there is a cost for querying them. If you query these tables infrequently though, this query cost can be more than offset by the reduced ingestion cost.
+
+You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as [Basic Logs](logs/basic-logs-configure.md).
+
+Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features. They can't be used for alerting, their retention is set to eight days, they support a limited version of the query language, and there's a cost for querying them. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost.
The decision whether to configure a table for Basic Logs is based on the following criteria:
The decision whether to configure a table for Basic Logs is based on the followi
- You only require basic queries of the data using a limited version of the query language. - The cost savings for data ingestion over a month exceed the expected cost for any anticipated queries.
-See [Query Basic Logs in Azure Monitor (Preview)](.//logs/basic-logs-query.md) for details on query limitations and [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more details about them.
+See [Query Basic Logs in Azure Monitor (preview)](.//logs/basic-logs-query.md) for information on query limitations. See [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more information about Basic Logs.
## Reduce the amount of data collected
-The most straightforward strategy to reduce your costs for data ingestion and retention is to reduce the amount of data that you collect. Your goal should be to collect the minimal amount of data to meet your monitoring requirements. If you find that you're collecting data that's not being used for alerting or analysis, then you have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you don't need.
-The configuration change will vary depending on the data source. The following sections provide guidance for configuring common data sources to reduce the data they send to the workspace.
+The most straightforward strategy to reduce your costs for data ingestion and retention is to reduce the amount of data that you collect. Your goal should be to collect the minimal amount of data to meet your monitoring requirements. You might find that you're collecting data that's not being used for alerting or analysis. If so, you have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you don't need.
+
+The configuration change varies depending on the data source. The following sections provide guidance for configuring common data sources to reduce the data they send to the workspace.
## Virtual machines
-Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. The following table lists the most common data collected from virtual machines and strategies for limiting them for each of the Azure Monitor agents.
+Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. The following table lists the most common data collected from virtual machines and strategies for limiting them for each of the Azure Monitor agents.
| Source | Strategy | Log Analytics agent | Azure Monitor agent | |:|:|:|:|
-| Event logs | Collect only required event logs and levels. For example, *Information* level events are rarely used and should typically not be collected. For Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md) | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific event IDs. |
-| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific events. |
-| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific counters. |
-
+| Event logs | Collect only required event logs and levels. For example, *Information*-level events are rarely used and should typically not be collected. For the Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific event IDs (see the sketch after this table). |
+| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info*-level and *Debug*-level events are rarely used and should typically not be collected. | Change the [Syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific events. |
+| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For the Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific counters. |
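As a rough illustration of the XPath filtering called out in the table, the following sketch shows the `windowsEventLogs` data source fragment of a data collection rule, expressed here as a Python dictionary so it can be dropped into a deployment script. The stream name and XPath syntax follow the documented DCR schema; the specific logs, levels, and event IDs are examples only.

```python
# Sketch: the "dataSources" fragment of a data collection rule (DCR) that
# collects only selected Windows events instead of entire logs.
# The logs, levels, and event IDs below are examples only.
windows_event_data_source = {
    "windowsEventLogs": [
        {
            "name": "filteredWindowsEvents",
            "streams": ["Microsoft-Event"],
            "xPathQueries": [
                # System log: Critical, Error, and Warning only (levels 1-3)
                "System!*[System[(Level=1 or Level=2 or Level=3)]]",
                # Security log: failed sign-ins only (example event ID)
                "Security!*[System[(EventID=4625)]]",
            ],
        }
    ]
}

print(windows_event_data_source["windowsEventLogs"][0]["xPathQueries"])
```

The narrower the queries, the less data the agent forwards to the workspace, so it's worth revisiting them as your alerting needs change.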
### Use transformations to filter events
-The bulk of data collection from virtual machines will be from Windows or Syslog events. While you can provide more filtering with the Azure Monitor agent, you still may be collecting records that provide little value. Use [transformations](essentials//data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
-See the section below on filtering data with transformations for a summary on where to implement filtering and transformations for different data sources.
+The bulk of data collection from virtual machines will be from Windows or Syslog events. While you can provide more filtering with the Azure Monitor agent, you still might be collecting records that provide little value. Use [transformations](essentials//data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
+
+See the following section on filtering data with transformations for a summary on where to implement filtering and transformations for different data sources.
### Multi-homing agents
-You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces since you may be incurring charges for the same data multiple times. If you do multi-home agents, ensure that you're sending unique data to each workspace.
-You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. You should continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data.
+You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces because you might be incurring charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace.
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to ensure that you aren't collecting duplicate data for the same machine.
+You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. Continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data.
-## Application Insights
-There are multiple methods that you can use to limit the amount of data collected by Application Insights.
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data for the same machine.
-* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics.
+## Application Insights
-* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](app/javascript.md#enable-distributed-tracing).
+There are multiple methods that you can use to limit the amount of data collected by Application Insights:
+* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. A Python sketch follows this list.
+* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too.
* **Disable unneeded modules**: [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required.
* **Pre-aggregate metrics**: If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs).
+* **Limit the use of custom metrics**: The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs. Using this option can result in the creation of more pre-aggregation metrics.
+* **Ensure use of updated SDKs**: Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect many counters by default](app/eventcounters.md#default-counters-collected), which were collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected).
-* **Limit the use of custom metrics**: The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs because this can result in the creation of more pre-aggregation metrics.
-
-* **Ensure use of updated SDKs**: Earlier version of the ASP.NET Core SDK and Worker Service SDK [collect a large number of counters by default](app/eventcounters.md#default-counters-collected) which collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected).
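As an example of the sampling option above, the OpenCensus-based Python integration for Application Insights lets you set a fixed sampling rate when the exporter is created. This is a minimal sketch that assumes the `opencensus-ext-azure` package and a connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable; the 10 percent rate is illustrative only.

```python
# Sketch: send roughly 10% of traces from a Python app to Application
# Insights, dropping the rest before they're ever transmitted.
import os

from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

tracer = Tracer(
    exporter=AzureExporter(
        connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
    ),
    sampler=ProbabilitySampler(rate=0.1),  # keep ~10% of traces
)

with tracer.span(name="expensive-operation"):
    pass  # application work happens here
```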
## Resource logs
-The data volume for [resource logs](essentials/resource-logs.md) varies significantly between services, so you should only collect the categories that are required. You may also not want to collect platform metrics from Azure resources since this data is already being collected in Metrics. Only configured your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
-Diagnostic settings do not allow granular filtering of resource logs. You may require certain logs in a particular category but not others. In this case, use [transformations](essentials/data-collection-transformations.md) on the workspace to filter logs that you don't require. You can also filter out the value of certain columns that you don't require to save additional cost.
+The data volume for [resource logs](essentials/resource-logs.md) varies significantly between services, so you should only collect the categories that are required. You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
-## Other insights and services
-See the documentation for other services that store their data in a Log Analytics workspace for recommendations on optimizing their data usage. Following
+Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. In this case, use [transformations](essentials/data-collection-transformations.md) on the workspace to filter logs that you don't require. You can also filter out the value of certain columns that you don't require to save additional cost.
-- **Container insights** - [Understand monitoring costs for Container insights](containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost)
-- **Microsoft Sentinel** - [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md)
-- **Defender for Cloud** - [Setting the security event option at the workspace level](../defender-for-cloud/enable-data-collection.md#setting-the-security-event-option-at-the-workspace-level)
+## Other insights and services
+See the documentation for other services that store their data in a Log Analytics workspace for recommendations on optimizing their data usage:
+- **Container insights**: [Understand monitoring costs for Container insights](containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost)
+- **Microsoft Sentinel**: [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md)
+- **Defender for Cloud**: [Setting the security event option at the workspace level](../defender-for-cloud/enable-data-collection.md#setting-the-security-event-option-at-the-workspace-level)
## Filter data with transformations (preview)
-[Data collection rule transformations in Azure Monitor](essentials//data-collection-transformations.md) allow you to filter incoming data to reduce costs for data ingestion and retention. In addition to filtering records from the incoming data, you can filter out columns in the data, reducing its billable size as described in [Data size calculation](logs/cost-logs.md#data-size-calculation).
-Use ingestion-time transformations on the workspace to further filter data for workflows where you don't have granular control. For example, you can select categories in a [diagnostic setting](essentials/diagnostic-settings.md) to collect resource logs for a particular service, but that category might send a variety of records that you don't need. Create a transformation for the table that service uses to filter out records you don't want.
+You can use [data collection rule transformations in Azure Monitor](essentials//data-collection-transformations.md) to filter incoming data to reduce costs for data ingestion and retention. In addition to filtering records from the incoming data, you can filter out columns in the data, reducing its billable size as described in [Data size calculation](logs/cost-logs.md#data-size-calculation).
+
+Use ingestion-time transformations on the workspace to further filter data for workflows where you don't have granular control. For example, you can select categories in a [diagnostic setting](essentials/diagnostic-settings.md) to collect resource logs for a particular service, but that category might also send records that you don't need. Create a transformation for the table that service uses to filter out records you don't want.
-You can also ingestion-time transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting, but you don't require certain columns in those records that contain a large amount of data. Create a transformation for that table that removes those columns.
+You can also use ingestion-time transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns.
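To make this concrete, a workspace transformation is just a KQL statement that runs against the incoming rows (exposed as the virtual `source` table) and is stored in the `transformKql` property of the workspace data collection rule. The sketch below shows the general shape; the column names come from the standard `Event` table, and the exact filter is only an example.

```python
# Sketch: a transformKql expression that keeps only Error and Warning events
# and drops a verbose column so each stored record is smaller.
transform_kql = """
source
| where EventLevelName in ('Error', 'Warning')
| project-away ParameterXml
"""

# This string would be set as the 'transformKql' property of the data flow
# for the Event table in the workspace data collection rule (DCR).
print(transform_kql.strip())
```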
-The following table for methods to apply transformations to different workflows.
+The following table shows methods to apply transformations to different workflows.
> [!NOTE]
-> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor Reference](/azure/azure-monitor/reference/). Custom tables are created by custom applications and have a suffix of *_CL* ion their name.
+> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor reference](/azure/azure-monitor/reference/). Custom tables are created by custom applications and have a suffix of *_CL* in their name.
| Source | Target | Description | Filtering method |
|:---|:---|:---|:---|
-| Azure Monitor agent | Azure tables | Collect data from standard sources such as Windows events, syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use XPath in DCR to collect specific data from client machine. Ingestion-time transformations in agent DCR are not yet supported. |
+| Azure Monitor agent | Azure tables | Collect data from standard sources such as Windows events, Syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use XPath in the data collection rule (DCR) to collect specific data from client machines. Ingestion-time transformations in the agent DCR aren't yet supported. |
| Azure Monitor agent | Custom tables | Collecting data outside of standard data sources is not yet supported. | |
-| Log Analytics agent | Azure tables | Collect data from standard sources such as Windows events, syslog, and performance data and send to Azure tables in Log Analytics workspace. | Configure data collection on the workspace. Optionally, create ingestion-time transformation in the workspace DCR to filter records and columns. |
-| Log Analytics agent | Custom tables | Configure [custom logs](agents/data-sources-custom-logs.md) on the workspace to collect file based text logs. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new logs ingestion API. |
-| Data Collector API | Custom tables | Use [Data Collector API](logs/data-collector-api.md) to send data to custom tables in the workspace using REST API. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new logs ingestion API. |
-| Logs ingestion API | Custom tables<br>Azure tables | Use [Logs ingestion API](logs/logs-ingestion-api-overview.md) to send data to the workspace using REST API. | Configure ingestion-time transformation in the DCR for the custom log. |
-| Other data sources | Azure tables | Includes resource logs from diagnostic settings and other Azure Monitor features such as Application insights, Container insights and VM insights. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. |
-
+| Log Analytics agent | Azure tables | Collect data from standard sources such as Windows events, Syslog, and performance data and send it to Azure tables in the Log Analytics workspace. | Configure data collection on the workspace. Optionally, create ingestion-time transformation in the workspace DCR to filter records and columns. |
+| Log Analytics agent | Custom tables | Configure [custom logs](agents/data-sources-custom-logs.md) on the workspace to collect file-based text logs. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new logs ingestion API. |
+| Data Collector API | Custom tables | Use the [Data Collector API](logs/data-collector-api.md) to send data to custom tables in the workspace by using the REST API. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new Logs ingestion API. |
+| Logs ingestion API | Custom tables<br>Azure tables | Use the [Logs ingestion API](logs/logs-ingestion-api-overview.md) to send data to the workspace by using the REST API. | Configure ingestion-time transformation in the DCR for the custom log (see the sketch after this table). |
+| Other data sources | Azure tables | Includes resource logs from diagnostic settings and other Azure Monitor features such as Application insights, Container insights, and VM insights. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. |
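For the custom-table rows in this table, the following sketch shows what sending data through the Logs ingestion API looks like with the `azure-monitor-ingestion` Python package. The data collection endpoint, DCR immutable ID, and stream name are placeholders for values from your own environment; any transformation defined in that DCR is applied to these records before they're stored.

```python
# Sketch: upload custom records through the Logs ingestion API. The DCR
# referenced by rule_id can filter or reshape these records with a
# transformation before they're stored in the workspace.
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

endpoint = "https://my-dce.eastus-1.ingest.monitor.azure.com"  # placeholder DCE endpoint

client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())

records = [
    {"TimeGenerated": "2022-08-24T00:00:00Z", "Computer": "web-01", "RawData": "example"},
]

client.upload(
    rule_id="dcr-00000000000000000000000000000000",  # placeholder DCR immutable ID
    stream_name="Custom-MyTable_CL",                 # placeholder stream name
    logs=records,
)
```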
## Monitor workspace and analyze usage
-Once you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have additional opportunities to reduce your usage, such as further filtering out collected data that has not proven to be useful.
+After you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have other opportunities to reduce your usage. For example, you might want to further filter out collected data that hasn't proven to be useful.
### Set a daily cap
-A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day once your configured limit is reached. This should not be used as a method to reduce costs, but rather as a preventative measure to ensure that you don't exceed a particular budget. Daily caps are typically used by organizations that are particularly cost conscious.
-When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Rather than just relying on the daily cap alone, you can configure an alert rule to notify you when data collection reaches some level before the daily cap. This allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources.
+A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. A daily cap shouldn't be used as a method to reduce costs but as a preventative measure to ensure that you don't exceed a particular budget. Daily caps are typically used by organizations that are particularly cost conscious.
+
+When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Instead of relying on the daily cap alone, you can configure an alert rule to notify you when data collection reaches some level before the daily cap. Notification allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources.
+
+See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for information on how the daily cap works and how to configure one.
-See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for details on how the daily cap works and how to configure one.
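For reference, the daily cap is a property of the workspace itself. The following sketch sets it with the `azure-mgmt-loganalytics` Python package; it assumes the current SDK surface, where the cap is exposed as `workspace_capping.daily_quota_gb`, and every name and value shown is a placeholder, so check the SDK reference for your version before using it.

```python
# Sketch: set (or change) the daily cap on an existing workspace. The call
# below writes the workspace definition, so location and SKU must match the
# existing workspace; all values here are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

client = LogAnalyticsManagementClient(
    DefaultAzureCredential(), "<subscription-id>"  # placeholder subscription
)

poller = client.workspaces.begin_create_or_update(
    "<resource-group>",        # placeholder resource group
    "<workspace-name>",        # placeholder workspace name
    {
        "location": "eastus",  # must match the existing workspace location
        "sku": {"name": "PerGB2018"},
        # Stop ingestion for the rest of the day once 10 GB has been collected.
        "workspace_capping": {"daily_quota_gb": 10},
    },
)
workspace = poller.result()  # wait for the update to complete
```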
### Send alert when data collection is high
-In order to avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. This allows you to address any potential anomalies before the end of your billing period.
-The following example is a [log alert rule](alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this will result in a higher charge for the alert rule.
+To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period.
+
+The following example is a [log alert rule](alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule. The underlying query is sketched after the table.
| Setting | Value |
|:---|:---|
| Actions | Select or add an [action group](alerts/action-groups.md) to notify you when the threshold is exceeded. |
| **Details** | |
| Severity | Warning |
-| Alert rule name | Billable data volume greater than 50 GB in 24 hours |
+| Alert rule name | Billable data volume greater than 50 GB in 24 hours. |
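The kind of query behind such an alert is a short aggregation over the `Usage` table. The following sketch runs it ad hoc with the `azure-monitor-query` Python package so you can spot-check the last day's billable volume; the workspace ID is a placeholder, and the 50-GB figure mirrors the alert threshold above.

```python
# Sketch: check billable ingestion over the last 24 hours. The same KQL can
# back the log alert rule described above.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

query = """
Usage
| where IsBillable == true
| summarize BillableGB = sum(Quantity) / 1000.0
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="00000000-0000-0000-0000-000000000000",  # placeholder workspace ID
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(f"Billable data in the last 24 hours: {row[0]:.1f} GB")
```

If the value returned is consistently far below your alert threshold, that's a sign either the threshold or the amount of data you collect can be tightened further.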
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on using log queries like the one used here to analyze billable usage in your workspace.
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for information on using log queries like the one used here to analyze billable usage in your workspace.
## Analyze your collected data
-When you detect an increase in data collection, then you need methods to analyze your collected data to identify the source of the increase. You should also periodically analyze data collection to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service.
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for different methods to analyze your collected data and billable usage. This article includes a variety of log queries that will help you identify the source of any data increases and to understand your basic usage patterns.
+When you detect an increase in data collection, you need methods to analyze your collected data to identify the source of the increase. You should also periodically analyze data collection to determine if there's additional configuration that can decrease your usage further. This practice is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service.
+
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for different methods to analyze your collected data and billable usage. This article includes various log queries that will help you identify the source of any data increases and to understand your basic usage patterns.
## Next steps

- See [Azure Monitor cost and usage](usage-estimated-costs.md) for a description of Azure Monitor and how to view and analyze your monthly bill.
-- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
-- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher than expected usage and opportunities to reduce your amount of data collected.
-- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.
+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for information on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected.
+- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that can be ingested in a workspace.
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
Title: Azure Monitor best practices - Configure data collection
+ Title: 'Azure Monitor best practices: Configure data collection'
description: Guidance and recommendations for configuring data collection in Azure Monitor.
-# Azure Monitor best practices - Configure data collection
+# Azure Monitor best practices: Configure data collection
+ This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It describes recommended steps to configure data collection required to enable Azure Monitor features for your Azure and hybrid applications and resources.

> [!IMPORTANT]
-> The features of Azure Monitor and their configuration will vary depending on your business requirements balanced with the cost of the enabled features. Each step below will identify whether there is potential cost, and you should assess these costs before proceeding. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for complete pricing details.
+> The features of Azure Monitor and their configuration will vary depending on your business requirements balanced with the cost of the enabled features. Each of the following steps identifies whether there's potential cost, and you should assess these costs before proceeding. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for complete pricing details.
## Design Log Analytics workspace architecture
-You require at least one Log Analytics workspace to enable [Azure Monitor Logs](logs/data-platform-logs.md), which is required for collecting such data as logs from Azure resources, collecting data from the guest operating system of Azure virtual machines, and for most Azure Monitor insights. Other services such as Microsoft Sentinel and Microsoft Defender for Cloud also use a Log Analytics workspace and can share the same one that you use for Azure Monitor.
-There is no cost for creating a Log Analytics workspace, but there is a potential charge once you configure data to be collected into it. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how log data is charged.
+You require at least one Log Analytics workspace to enable [Azure Monitor Logs](logs/data-platform-logs.md), which is required for:
+
+- Collecting data such as logs from Azure resources.
+- Collecting data from the guest operating system of Azure Virtual Machines.
+- Enabling most Azure Monitor insights.
+
+Other services such as Microsoft Sentinel and Microsoft Defender for Cloud also use a Log Analytics workspace and can share the same one that you use for Azure Monitor.
-See [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md) to create an initial Log Analytics workspace and [Manage access to Log Analytics workspaces](logs/manage-access.md) to configure access. You can use scalable methods such as Resource Manager templates to configure workspaces, though this is often not required since most environments will require a minimal number.
+There's no cost for creating a Log Analytics workspace, but there's a potential charge after you configure data to be collected into it. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on how log data is charged.
-Start with a single workspace to support initial monitoring, but see [Design a Log Analytics workspace configuration](logs/workspace-design.md) for guidance on when to use multiple workspaces and how to locate and configure them.
+See [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md) to create an initial Log Analytics workspace, and see [Manage access to Log Analytics workspaces](logs/manage-access.md) to configure access. You can use scalable methods such as Resource Manager templates to configure workspaces, although this step is often not required because most environments will require a minimal number.
+Start with a single workspace to support initial monitoring. See [Design a Log Analytics workspace configuration](logs/workspace-design.md) for guidance on when to use multiple workspaces and how to locate and configure them.
## Collect data from Azure resources
-Some monitoring of Azure resources is available automatically with no configuration required, while you must perform configuration steps to collect additional monitoring data. The following table illustrates the configuration steps required to collect all available data from your Azure resources, including at which step data is sent to Azure Monitor Metrics and Azure Monitor Logs. The sections below describe each step in further detail.
-[![Deploy Azure resource monitoring](media/best-practices-data-collection/best-practices-azure-resources.png)](media/best-practices-data-collection/best-practices-azure-resources.png#lightbox)
+Some monitoring of Azure resources is available automatically with no configuration required. To collect more monitoring data, you must perform configuration steps.
-### Collect tenant and subscription logs
-While the [Azure Active Directory logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [Activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically, sending them to a Log Analytics workspace enables you to analyze these events with other log data using log queries in Log Analytics. This also allows you to create log query alerts which are the only way to alert on Azure Active Directory logs and provide more complex logic than Activity log alerts.
+The following table shows the configuration steps required to collect all available data from your Azure resources. It also shows at which step data is sent to Azure Monitor Metrics and Azure Monitor Logs. The following sections describe each step in further detail.
-There's no cost for sending the Activity log to a workspace, but there is a data ingestion and retention charge for Azure Active Directory logs.
+[![Diagram that shows deploying Azure resource monitoring.](media/best-practices-data-collection/best-practices-azure-resources.png)](media/best-practices-data-collection/best-practices-azure-resources.png#lightbox)
-See [Integrate Azure AD logs with Azure Monitor logs](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) and [Create diagnostic settings to send platform logs and metrics to different destinations](essentials/diagnostic-settings.md) to create a diagnostic setting for your tenant and subscription to send log entries to your Log Analytics workspace.
+### Collect tenant and subscription logs
+The [Azure Active Directory (Azure AD) logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically. When you send them to a Log Analytics workspace, you can analyze these events with other log data by using log queries in Log Analytics. You can also create log query alerts, which are the only way to alert on Azure AD logs and provide more complex logic than activity log alerts.
+There's no cost for sending the activity log to a workspace, but there's a data ingestion and retention charge for Azure AD logs.
+See [Integrate Azure AD logs with Azure Monitor logs](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) and [Create diagnostic settings to send platform logs and metrics to different destinations](essentials/diagnostic-settings.md) to create a diagnostic setting for your tenant and subscription to send log entries to your Log Analytics workspace.
### Collect resource logs and platform metrics
-Resources in Azure automatically generate [resource logs](essentials/platform-logs-overview.md) that provide details of operations performed within the resource. Unlike platform metrics though, you need to configure resource logs to be collected. Create a diagnostic setting to send them to a Log Analytics workspace and combine them with the other data used with Azure Monitor Logs. The same diagnostic setting can be used to also send the platform metrics for most resources to the same workspace, which allows you to analyze metric data using log queries with other collected data.
-There is a cost for collecting resource logs in your Log Analytics workspace, so only select those log categories with valuable data. Collecting all categories will incur cost for collecting data with little value. See the monitoring documentation for each Azure service for a description of categories and recommendations for which to collect. Also see [Azure Monitor best practices - cost management](logs/cost-logs.md) for recommendations on optimizing the cost of your log collection.
+Resources in Azure automatically generate [resource logs](essentials/platform-logs-overview.md) that provide details of operations performed within the resource. Unlike platform metrics, you need to configure resource logs to be collected. Create a diagnostic setting to send them to a Log Analytics workspace and combine them with the other data used with Azure Monitor Logs. The same diagnostic setting also can be used to send the platform metrics for most resources to the same workspace. This way, you can analyze metric data by using log queries with other collected data.
-See [Create diagnostic setting to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-diagnostic-settings) to create a diagnostic setting for an Azure resource.
+There's a cost for collecting resource logs in your Log Analytics workspace, so only select those log categories with valuable data. Collecting all categories will incur cost for collecting data with little value. See the monitoring documentation for each Azure service for a description of categories and recommendations for which to collect. Also see [Azure Monitor best practices - cost management](logs/cost-logs.md) for recommendations on optimizing the cost of your log collection.
-Since a diagnostic setting needs to be created for each Azure resource, use Azure Policy to automatically create a diagnostic setting as each resource is created. Each Azure resource type has a unique set of categories that need to be listed in the diagnostic setting. Because of this, each resource type requires a separate policy definition. Some resource types have built-in policy definitions that you can assign without modification. For other resource types, you need to create a custom definition.
+See [Create diagnostic settings to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-diagnostic-settings) to create a diagnostic setting for an Azure resource.
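As a sketch of what that diagnostic setting looks like when created programmatically with the `azure-mgmt-monitor` Python package (the resource IDs, setting name, and log category below are placeholders; each service exposes its own category names):

```python
# Sketch: create a diagnostic setting that sends one resource log category
# plus platform metrics from an Azure resource to a Log Analytics workspace.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"                          # placeholder
resource_id = "<full-resource-id-of-the-monitored-resource>"   # placeholder
workspace_id = "<log-analytics-workspace-resource-id>"         # placeholder

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

client.diagnostic_settings.create_or_update(
    resource_id,
    "send-to-workspace",
    {
        "workspace_id": workspace_id,
        # Select only the categories you need; every category adds ingestion cost.
        "logs": [{"category": "AuditEvent", "enabled": True}],   # example category
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
)
```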
+
+Because a diagnostic setting needs to be created for each Azure resource, use Azure Policy to automatically create a diagnostic setting as each resource is created. Each Azure resource type has a unique set of categories that need to be listed in the diagnostic setting. Because of this, each resource type requires a separate policy definition. Some resource types have built-in policy definitions that you can assign without modification. For other resource types, you need to create a custom definition.
See [Create diagnostic settings at scale using Azure Policy](essentials/diagnostic-settings-policy.md) for a process for creating policy definitions for a particular Azure service and details for creating diagnostic settings at scale.

### Enable insights
-Insights provide a specialized monitoring experience for a particular service. They use the same data already being collected such as platform metrics and resource logs, but they provide custom workbooks the assist you in identifying and analyzing the most critical data. Most insights will be available in the Azure portal with no configuration required, other than collecting resource logs for that service. See the monitoring documentation for each Azure service to determine whether it has an insight and if it requires configuration.
-There is no cost for insights, but you may be charged for any data they collect.
+Insights provide a specialized monitoring experience for a particular service. They use the same data already being collected such as platform metrics and resource logs, but they provide custom workbooks that assist you in identifying and analyzing the most critical data. Most insights will be available in the Azure portal with no configuration required, other than collecting resource logs for that service. See the monitoring documentation for each Azure service to determine whether it has an insight and if it requires configuration.
+
+There's no cost for insights, but you might be charged for any data they collect.
See [What is monitored by Azure Monitor?](monitor-reference.md) for a list of available insights and solutions in Azure Monitor. See the documentation for each for any unique configuration or pricing information.

> [!IMPORTANT]
-> The following insights are significantly more complex than others and have additional guidance for their configuration.
->
+> The following insights are much more complex than others and have more guidance for their configuration:
+>
> - [VM insights](#monitor-virtual-machines)
> - [Container insights](#monitor-containers)
> - [Monitor applications](#monitor-applications)

## Monitor virtual machines

Virtual machines generate similar data as other Azure resources, but they require an agent to collect data from the guest operating system. Virtual machines also have unique monitoring requirements because of the different workloads running on them. See [Monitoring Azure virtual machines with Azure Monitor](vm/monitor-vm-azure.md) for a dedicated scenario on monitoring virtual machines with Azure Monitor.

## Monitor containers
-Virtual machines generate similar data as other Azure resources, but they require a containerized version of the Log Analytics agent to collect required data. Container insights helps you prepare your containerized environment for monitoring and works in conjunction with third party tools for providing comprehensive monitoring of AKS and the workflows it supports. See [Monitoring Azure Kubernetes Service (AKS) with Azure Monitor](../aks/monitor-aks.md?toc=/azure/azure-monitor/toc.json) for a dedicated scenario on monitoring AKS with Azure Monitor.
+
+Virtual machines generate data similar to other Azure resources, but they require a containerized version of the Log Analytics agent to collect required data. Container insights helps you prepare your containerized environment for monitoring. It works in conjunction with third-party tools to provide comprehensive monitoring of Azure Kubernetes Service (AKS) and the workflows it supports. See [Monitoring Azure Kubernetes Service with Azure Monitor](../aks/monitor-aks.md?toc=/azure/azure-monitor/toc.json) for a dedicated scenario on monitoring AKS with Azure Monitor.
## Monitor applications
-Azure Monitor monitors your custom applications using [Application Insights](app/app-insights-overview.md), which you must configure for each application you want to monitor. The configuration process will vary depending on the type of application being monitored and the type of monitoring that you want to perform. Data collected by Application Insights is stored in Azure Monitor Metrics, Azure Monitor Logs, and Azure blob storage, depending on the feature. Performance data is stored in both Azure Monitor Metrics and Azure Monitor Logs with no additional configuration required.
+
+Azure Monitor monitors your custom applications by using [Application Insights](app/app-insights-overview.md), which you must configure for each application you want to monitor. The configuration process varies depending on the type of application being monitored and the type of monitoring that you want to perform. Data collected by Application Insights is stored in Azure Monitor Metrics, Azure Monitor Logs, and Azure Blob Storage, depending on the feature. Performance data is stored in both Azure Monitor Metrics and Azure Monitor Logs with no more configuration required.
### Create an application resource

Application Insights is the feature of Azure Monitor for monitoring your cloud-native and hybrid applications.
-You must create a resource in Application Insights for each application that you're going to monitor. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separate from your Log Analytics workspace as described in [Data structure](logs/log-analytics-workspace-overview.md#data-structure).
+You must create a resource in Application Insights for each application that you're going to monitor. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separately from your Log Analytics workspace as described in [Data structure](logs/log-analytics-workspace-overview.md#data-structure).
- When you create the application, you must select whether to use classic or workspace-based. See [Create an Application Insights resource](app/create-new-resource.md) to create a classic application.
+ When you create the application, you must select whether to use classic or workspace-based. See [Create an Application Insights resource](app/create-new-resource.md) to create a classic application.
See [Workspace-based Application Insights resources (preview)](app/create-workspace-resource.md) to create a workspace-based application. -
- A fundamental design decision is whether to use separate or single application resource for multiple applications. Separate resources can save costs and prevent mixing data from different applications, but a single resource can simplify your monitoring by keeping all relevant telemetry together. See [How many Application Insights resources should I deploy](app/separate-resources.md) for detailed criteria on making this design decision.
--
+ A fundamental design decision is whether to use separate or a single application resource for multiple applications. Separate resources can save costs and prevent mixing data from different applications, but a single resource can simplify your monitoring by keeping all relevant telemetry together. See [How many Application Insights resources should I deploy](app/separate-resources.md) for criteria to help you make this design decision.
### Configure codeless or code-based monitoring
-To enable monitoring for an application, you must decide whether you will use codeless or code-based monitoring. The configuration process will vary depending on this decision and the type of application you're going to monitor.
-**Codeless monitoring** is easiest to implement and can be configured after your code development. It doesn't require any updates to your code. See the following resources for details on enabling monitoring depending on your application.
+To enable monitoring for an application, you must decide whether you'll use codeless or code-based monitoring. The configuration process varies depending on this decision and the type of application you're going to monitor.
+
+**Codeless monitoring** is easiest to implement and can be configured after your code development. It doesn't require any updates to your code. For information on how to enable monitoring based on your application, see:
- [Applications hosted on Azure Web Apps](app/azure-web-apps.md)
- [Java applications](app/java-in-process-agent.md)
-- [ASP.NET applications hosted in IIS on Azure VM or Azure virtual machine scale set](app/azure-vm-vmss-apps.md)
+- [ASP.NET applications hosted in IIS on Azure Virtual Machines or Azure Virtual Machine Scale Sets](app/azure-vm-vmss-apps.md)
- [ASP.NET applications hosted in IIS on-premises](app/status-monitor-v2-overview.md)
+**Code-based monitoring** is more customizable and collects more telemetry, but it requires adding a dependency to your code on the Application Insights SDK NuGet packages. For information on how to enable monitoring based on your application, see:
-**Code-based monitoring** is more customizable and collects additional telemetry, but it requires adding a dependency to your code on the Application Insights SDK NuGet packages. See the following resources for details on enabling monitoring depending on your application.
-- [ASP.NET Applications](app/asp-net.md)
-- [ASP.NET Core Applications](app/asp-net-core.md)
-- [.NET Console Applications](app/console.md)
+- [ASP.NET applications](app/asp-net.md)
+- [ASP.NET Core applications](app/asp-net-core.md)
+- [.NET console applications](app/console.md)
- [Java](app/java-in-process-agent.md)
- [Node.js](app/nodejs.md)
- [Python](app/opencensus-python.md)
- [Other platforms](app/platforms.md)

### Configure availability testing
-Availability tests in Application Insights are recurring tests that monitor the availability and responsiveness of your application at regular intervals from points around the world. You can create a simple ping test for free or create a sequence of web requests to simulate user transactions which have associated cost.
-See [Monitor the availability of any website](app/monitor-web-app-availability.md) for summary of the different kinds of test and details on creating them.
+Availability tests in Application Insights are recurring tests that monitor the availability and responsiveness of your application at regular intervals from points around the world. You can create a simple ping test for free. You can also create a sequence of web requests to simulate user transactions, which have associated costs.
+
+See [Monitor the availability of any website](app/monitor-web-app-availability.md) for a summary of the different kinds of tests and information on creating them.
### Configure Profiler
-Profiler in Application Insights provides performance traces for .NET applications. It helps you identify the "hot" code path that takes the longest time when it's handling a particular web request. The process for configuring the profiler varies depending on the type of application.
-See [Profile production applications in Azure with Application Insights](app/profiler-overview.md) for details on configuring Profiler.
+Profiler in Application Insights provides performance traces for .NET applications. It helps you identify the "hot" code path that takes the longest time when it's handling a particular web request. The process for configuring the profiler varies depending on the type of application.
+
+See [Profile production applications in Azure with Application Insights](app/profiler-overview.md) for information on configuring Profiler.
### Configure Snapshot Debugger
-Snapshot Debugger in Application Insights monitors exception telemetry from your .NET application and collects snapshots on your top-throwing exceptions so that you have the information you need to diagnose issues in production. The process for configuring Snapshot Debugger varies depending on the type of application.
-See [Debug snapshots on exceptions in .NET apps](app/snapshot-debugger.md) for details on configuring Snapshot Debugger.
+Snapshot Debugger in Application Insights monitors exception telemetry from your .NET application. It collects snapshots on your top-throwing exceptions so that you have the information you need to diagnose issues in production. The process for configuring Snapshot Debugger varies depending on the type of application.
+
+See [Debug snapshots on exceptions in .NET apps](app/snapshot-debugger.md) for information on configuring Snapshot Debugger.
## Next steps

-- With data collection configured for all of your Azure resources, see [Analyze and visualize data](best-practices-analysis.md) to see options for analyzing this data.
+With data collection configured for all your Azure resources, see [Analyze and visualize data](best-practices-analysis.md) to see options for analyzing this data.
azure-monitor Change Analysis Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md
ms.contributor: cawa Previously updated : 08/10/2022 Last updated : 08/23/2022
Register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource
- Enter any UI entry point, like the Web App **Diagnose and Solve Problems** tool, or
- Bring up the Change Analysis standalone tab.
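If you prefer to register the provider directly rather than triggering registration through one of these entry points, a minimal sketch with the `azure-mgmt-resource` Python package looks like this (the subscription ID is a placeholder):

```python
# Sketch: register the Microsoft.ChangeAnalysis resource provider directly,
# instead of relying on a portal entry point to trigger registration.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")  # placeholder
provider = client.providers.register("Microsoft.ChangeAnalysis")
print(provider.registration_state)  # registration can take a few minutes to complete
```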
-In this guide, you'll learn the two ways to enable Change Analysis for web app in-guest changes:
-- For one or a few web apps, enable Change Analysis via the UI.
+In this guide, you'll learn the two ways to enable Change Analysis for Azure Functions and web app in-guest changes:
+- For one or a few Azure Functions or web apps, enable Change Analysis via the UI.
- For a large number of web apps (for example, 50+ web apps), enable Change Analysis using the provided PowerShell script.

> [!NOTE]
-> Slot-level enablement for web app is not supported at the moment.
+> Slot-level enablement for Azure Functions or web app is not supported at the moment.
-## Enable web app in-guest change collection via Azure Portal
+## Enable Azure Functions and web app in-guest change collection via the Change Analysis portal
For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see the [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool) section.
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/continuous-monitoring.md
Title: Continuous monitoring with Azure Monitor | Microsoft Docs
-description: Describes specific steps for using Azure Monitor to enable Continuous monitoring throughout your workflows.
+description: Describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows.
# Continuous monitoring with Azure Monitor
-Continuous monitoring refers to the process and technology required to incorporate monitoring across each phase of your DevOps and IT operations lifecycles. It helps to continuously ensure the health, performance, and reliability of your application and infrastructure as it moves from development to production. Continuous monitoring builds on the concepts of Continuous Integration and Continuous Deployment (CI/CD) which help you develop and deliver software faster and more reliably to provide continuous value to your users.
+Continuous monitoring refers to the process and technology required to incorporate monitoring across each phase of your DevOps and IT operations lifecycles. It helps to continuously ensure the health, performance, and reliability of your application and infrastructure as it moves from development to production. Continuous monitoring builds on the concepts of continuous integration and continuous deployment (CI/CD). CI/CD helps you develop and deliver software faster and more reliably to provide continuous value to your users.
-[Azure Monitor](overview.md) is the unified monitoring solution in Azure that provides full-stack observability across applications and infrastructure in the cloud and on-premises. It works seamlessly with [Visual Studio and Visual Studio Code](https://visualstudio.microsoft.com/) during development and test and integrates with [Azure DevOps](/azure/devops/user-guide/index) for release management and work item management during deployment and operations. It even integrates across the ITSM and SIEM tools of your choice to help track issues and incidents within your existing IT processes.
-
-This article describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows. It includes links to other documentation that provides details on implementing different features.
+[Azure Monitor](overview.md) is the unified monitoring solution in Azure that provides full-stack observability across applications and infrastructure in the cloud and on-premises. It works seamlessly with [Visual Studio and Visual Studio Code](https://visualstudio.microsoft.com/) during development and test. It integrates with [Azure DevOps](/azure/devops/user-guide/index) for release management and work item management during deployment and operations. It even integrates across the IT service management (ITSM) and SIEM tools of your choice to help track issues and incidents within your existing IT processes.
+This article describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows. Links to other documentation provide information on implementing different features.
## Enable monitoring for all your applications
-In order to gain observability across your entire environment, you need to enable monitoring on all your web applications and services. This will allow you to easily visualize end-to-end transactions and connections across all the components.
-- [Azure DevOps Projects](../devops-project/overview.md) give you a simplified experience with your existing code and Git repository, or choose from one of the sample applications to create a Continuous Integration (CI) and Continuous Delivery (CD) pipeline to Azure.
-- [Continuous monitoring in your DevOps release pipeline](./app/continuous-monitoring.md) allows you to gate or rollback your deployment based on monitoring data.
-- [Status Monitor](./app/status-monitor-v2-overview.md) allows you to instrument a live .NET app on Windows with Azure Application Insights, without having to modify or redeploy your code.
-- If you have access to the code for your application, then enable full monitoring with [Application Insights](./app/app-insights-overview.md) by installing the Azure Monitor Application Insights SDK for [.NET](./app/asp-net.md), [.NET Core](./app/asp-net-core.md), [Java](./app/java-in-process-agent.md), [Node.js](./app/nodejs-quick-start.md), or [any other programming languages](./app/platforms.md). This allows you to specify custom events, metrics, or page views that are relevant to your application and your business.
+To gain observability across your entire environment, you need to enable monitoring on all your web applications and services. This way, you can easily visualize end-to-end transactions and connections across all the components. For example:
+- [Azure DevOps projects](../devops-project/overview.md) give you a simplified experience with your existing code and Git repository. You can also choose from one of the sample applications to create a CI/CD pipeline to Azure.
+- [Continuous monitoring in your DevOps release pipeline](./app/continuous-monitoring.md) allows you to gate or roll back your deployment based on monitoring data.
+- [Status Monitor](./app/status-monitor-v2-overview.md) allows you to instrument a live .NET app on Windows with Application Insights, without having to modify or redeploy your code.
+- If you have access to the code for your application, enable full monitoring with [Application Insights](./app/app-insights-overview.md) by installing the Azure Monitor Application Insights SDK for [.NET](./app/asp-net.md), [.NET Core](./app/asp-net-core.md), [Java](./app/java-in-process-agent.md), [Node.js](./app/nodejs-quick-start.md), or [any other programming languages](./app/platforms.md). Full monitoring allows you to specify custom events, metrics, or page views that are relevant to your application and your business.
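As a minimal sketch of the custom events and metrics mentioned in the last bullet, the following Python snippet assumes the `applicationinsights` package; the instrumentation key, event name, and metric name are placeholders rather than values from this article.

```python
# A minimal sketch of sending custom telemetry from application code.
# Assumes the `applicationinsights` Python package; the key and names below
# are placeholders for your own resource and scenarios.
from applicationinsights import TelemetryClient

tc = TelemetryClient("<your-instrumentation-key>")

# Custom event: a business action you want to count and slice in Azure Monitor.
tc.track_event("OrderPlaced", {"plan": "premium"}, {"orderValue": 129.99})

# Custom metric: a numeric measurement that matters to your application.
tc.track_metric("CheckoutDurationMs", 420)

# Telemetry is batched locally, so flush it before the process exits.
tc.flush()
```

The same pattern applies to page views and other telemetry types exposed by the SDK for your language.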
## Enable monitoring for your entire infrastructure
-Applications are only as reliable as their underlying infrastructure. Having monitoring enabled across your entire infrastructure will help you achieve full observability and make it easier to discover a potential root cause when something fails. Azure Monitor helps you track the health and performance of your entire hybrid infrastructure including resources such as VMs, containers, storage, and network.
-- You automatically get [platform metrics, activity logs and diagnostics logs](data-sources.md) from most of your Azure resources with no configuration.
+Applications are only as reliable as their underlying infrastructure. Having monitoring enabled across your entire infrastructure will help you achieve full observability and make it easier to discover a potential root cause when something fails. Azure Monitor helps you track the health and performance of your entire hybrid infrastructure including resources such as VMs, containers, storage, and network. For example, you can:
+
+- Get [platform metrics, activity logs, and diagnostics logs](data-sources.md) automatically from most of your Azure resources with no configuration.
- Enable deeper monitoring for VMs with [VM insights](vm/vminsights-overview.md).
-- Enable deeper monitoring for AKS clusters with [Container insights](containers/container-insights-overview.md).
+- Enable deeper monitoring for Azure Kubernetes Service (AKS) clusters with [Container insights](containers/container-insights-overview.md).
- Add [monitoring solutions](./monitor-reference.md) for different applications and services in your environment.
+[Infrastructure as code](/azure/devops/learn/what-is-infrastructure-as-code) is the management of infrastructure in a descriptive model, using the same versioning that DevOps teams use for source code. It adds reliability and scalability to your environment and allows you to use similar processes that are used to manage your applications. For example, you can:
-[Infrastructure as code](/azure/devops/learn/what-is-infrastructure-as-code) is the management of infrastructure in a descriptive model, using the same versioning as DevOps teams use for source code. It adds reliability and scalability to your environment and allows you to leverage similar processes that used to manage your applications.
-
-- Use [Resource Manager templates](./logs/resource-manager-workspace.md) to enable monitoring and configure alerts over a large set of resources.
-- Use [Azure Policy](../governance/policy/overview.md) to enforce different rules over your resources. This ensures that those resources stay compliant with your corporate standards and service level agreements.
+- Use [Azure Resource Manager templates](./logs/resource-manager-workspace.md) to enable monitoring and configure alerts over a large set of resources, as shown in the sketch after this list.
+- Use [Azure Policy](../governance/policy/overview.md) to enforce different rules over your resources. Azure Policy ensures that those resources stay compliant with your corporate standards and service level agreements.
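As a rough sketch of applying one monitoring template across many resource groups, the following Python snippet assumes a recent `azure-mgmt-resource` SDK, a hypothetical template file named `monitoring-alerts.json` that takes a `workspaceName` parameter, and illustrative resource group names.

```python
# A sketch of pushing one monitoring and alerting template to several
# resource groups. The template file, parameter name, and resource group
# names are assumptions for illustration only.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

with open("monitoring-alerts.json") as f:  # your Resource Manager template
    template = json.load(f)

for rg in ["rg-app-dev", "rg-app-test", "rg-app-prod"]:
    poller = client.deployments.begin_create_or_update(
        rg,
        "deploy-monitoring",
        {
            "properties": {
                "mode": "Incremental",
                "template": template,
                "parameters": {"workspaceName": {"value": f"law-{rg}"}},
            }
        },
    )
    poller.result()  # wait for this deployment to finish before moving on
```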
+## Combine resources in Azure resource groups
-## Combine resources in Azure Resource Groups
-A typical application on Azure today includes multiple resources such as VMs and App Services or microservices hosted on Cloud Services, AKS clusters, or Service Fabric. These applications frequently utilize dependencies like Event Hubs, Storage, SQL, and Service Bus.
+A typical application on Azure today includes multiple resources such as VMs and app services or microservices hosted on Azure Cloud Services, AKS clusters, or Azure Service Fabric. These applications frequently use dependencies like Azure Event Hubs, Azure Storage, Azure SQL, and Azure Service Bus. For example, you can:
-- Combine resources in Azure Resource Groups to get full visibility across all your resources that make up your different applications. [Resource Group insights](./insights/resource-group-insights.md) provides a simple way to keep track of the health and performance of your entire full-stack application and enables drilling down into respective components for any investigations or debugging.
+- Combine resources in Azure resource groups to get full visibility across all your resources that make up your different applications. [Resource group insights](./insights/resource-group-insights.md) provides a simple way to keep track of the health and performance of your entire full-stack application and enables drilling down into respective components for any investigations or debugging.
-## Ensure quality through Continuous Deployment
-Continuous Integration / Continuous Deployment allows you to automatically integrate and deploy code changes to your application based on the results of automated testing. It streamlines the deployment process and ensures the quality of any changes before they move into production.
+## Ensure quality through continuous deployment
+CI/CD allows you to automatically integrate and deploy code changes to your application based on the results of automated testing. It streamlines the deployment process and ensures the quality of any changes before they move into production. For example, you can:
-- Use [Azure Pipelines](/azure/devops/pipelines) to implement Continuous Deployment and automate your entire process from code commit to production based on your CI/CD tests.
-- Use [Quality Gates](/azure/devops/pipelines/release/approvals/gates) to integrate monitoring into your pre-deployment or post-deployment. This ensures that you are meeting the key health/performance metrics (KPIs) as your applications move from dev to production and any differences in the infrastructure environment or scale is not negatively impacting your KPIs.
-- [Maintain separate monitoring instances](./app/separate-resources.md) between your different deployment environments such as Dev, Test, Canary, and Prod. This ensures that collected data is relevant across the associated applications and infrastructure. If you need to correlate data across environments, you can use [multi-resource charts in Metrics Explorer](./essentials/metrics-charts.md) or create [cross-resource queries in Azure Monitor](logs/cross-workspace-query.md).
-
+- Use [Azure Pipelines](/azure/devops/pipelines) to implement continuous deployment and automate your entire process from code commit to production based on your CI/CD tests.
+- Use [quality gates](/azure/devops/pipelines/release/approvals/gates) to integrate monitoring into your pre-deployment or post-deployment. Quality gates ensure that you're meeting the key health and performance metrics, also known as KPIs, as your applications move from development to production. They also ensure that any differences in the infrastructure environment or scale aren't negatively affecting your KPIs.
+- [Maintain separate monitoring instances](./app/separate-resources.md) between your different deployment environments, such as Dev, Test, Canary, and Prod. Separate monitoring instances ensure that collected data is relevant across the associated applications and infrastructure. If you need to correlate data across environments, you can use [multi-resource charts in metrics explorer](./essentials/metrics-charts.md) or create [cross-resource queries in Azure Monitor](logs/cross-workspace-query.md).
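To illustrate the cross-resource query option in the last bullet, the following Python snippet uses the `azure-monitor-query` library to run one Kusto query against a primary workspace plus additional workspaces; the workspace GUIDs and the query itself are placeholders.

```python
# A sketch of correlating request failures across Dev, Test, and Prod
# workspaces in a single query. Workspace IDs and the KQL are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="<dev-workspace-guid>",
    query="AppRequests | summarize failures = countif(Success == false) by bin(TimeGenerated, 1h)",
    timespan=timedelta(days=1),
    additional_workspaces=["<test-workspace-guid>", "<prod-workspace-guid>"],
)

for table in response.tables:
    for row in table.rows:
        print(row)
```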
## Create actionable alerts with actions
-A critical aspect of monitoring is proactively notifying administrators of any current and predicted issues.
-- Create [alerts in Azure Monitor](./alerts/alerts-overview.md) based on logs and metrics to identify predictable failure states. You should have a goal of making all alerts actionable meaning that they represent actual critical conditions and seek to reduce false positives. Use [Dynamic Thresholds](alerts/alerts-dynamic-thresholds.md) to automatically calculate baselines on metric data rather than defining your own static thresholds.
-- Define actions for alerts to use the most effective means of notifying your administrators. Available [actions for notification](alerts/action-groups.md#create-an-action-group-by-using-the-azure-portal) are SMS, e-mails, push notifications, or voice calls.
+A critical aspect of monitoring is proactively notifying administrators of any current and predicted issues. For example, you can:
+
+- Create [alerts in Azure Monitor](./alerts/alerts-overview.md) based on logs and metrics to identify predictable failure states. Make all alerts actionable: they should represent actual critical conditions, and you should work to reduce false positives. Use [dynamic thresholds](alerts/alerts-dynamic-thresholds.md) to automatically calculate baselines on metric data rather than defining your own static thresholds.
+- Define actions for alerts to use the most effective means of notifying your administrators. Available [actions for notification](alerts/action-groups.md#create-an-action-group-by-using-the-azure-portal) are SMS, emails, push notifications, or voice calls.
- Use more advanced actions to [connect to your ITSM tool](alerts/itsmc-overview.md) or other alert management systems through [webhooks](alerts/activity-log-alerts-webhook.md).
-- Remediate situations identified in alerts as well with [Azure Automation runbooks](../automation/automation-webhooks.md) or [Logic Apps](/connectors/custom-connectors/create-webhook-trigger) that can be launched from an alert using webhooks.
+- Remediate situations identified in alerts by using [Azure Automation runbooks](../automation/automation-webhooks.md) or [Azure Logic Apps](/connectors/custom-connectors/create-webhook-trigger) that can be launched from an alert through webhooks, as in the sketch after this list.
- Use [autoscaling](./autoscale/tutorial-autoscale-performance-schedule.md) to dynamically increase and decrease your compute resources based on collected metrics.
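The remediation bullet above launches automation from an alert through a webhook. The following Python sketch shows an HTTP-triggered Azure Function that receives an alert payload, assuming the common alert schema; the fields checked and the remediation step are placeholders for your own logic.

```python
# A sketch of a webhook target for an Azure Monitor action group.
# Assumes the common alert schema; the remediation step is a placeholder.
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    payload = req.get_json()
    essentials = payload.get("data", {}).get("essentials", {})

    rule = essentials.get("alertRule")
    severity = essentials.get("severity")
    targets = essentials.get("alertTargetIDs", [])
    logging.info("Alert %s fired with severity %s for %s", rule, severity, targets)

    # Only act on high-severity alerts that are newly fired, not resolved.
    if essentials.get("monitorCondition") == "Fired" and severity in ("Sev0", "Sev1"):
        for resource_id in targets:
            # Placeholder: start your runbook, Logic App, or SDK-based fix here.
            logging.info("Starting remediation for %s", resource_id)

    return func.HttpResponse("Alert processed", status_code=200)
```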
## Prepare dashboards and workbooks

-Ensuring that your development and operations have access to the same telemetry and tools allows them to view patterns across your entire environment and minimize your Mean Time To Detect (MTTD) and Mean Time To Restore (MTTR).
+
+Ensuring that your development and operations have access to the same telemetry and tools allows them to view patterns across your entire environment and minimize your mean time to detect and mean time to restore. For example, you can:
- Prepare [custom dashboards](./app/tutorial-app-dashboards.md) based on common metrics and logs for the different roles in your organization. Dashboards can combine data from all Azure resources.
-- Prepare [Workbooks](./visualize/workbooks-overview.md) to ensure knowledge sharing between development and operations. These could be prepared as dynamic reports with metric charts and log queries, or even as troubleshooting guides prepared by developers helping customer support or operations to handle basic problems.
+- Prepare [workbooks](./visualize/workbooks-overview.md) to ensure knowledge sharing between development and operations. Workbooks could be prepared as dynamic reports with metric charts and log queries. They can also be troubleshooting guides prepared by developers to help customer support or operations handle basic problems.
## Continuously optimize
- Monitoring is one of the fundamental aspects of the popular Build-Measure-Learn philosophy, which recommends continuously tracking your KPIs and user behavior metrics and then striving to optimize them through planning iterations. Azure Monitor helps you collect metrics and logs relevant to your business and to add new data points in the next deployment as required.
-- Use tools in Application Insights to [track end-user behavior and engagement](./app/tutorial-users.md).
-- Use [Impact Analysis](./app/usage-impact.md) to help you prioritize which areas to focus on to drive to important KPIs.
+ Monitoring is one of the fundamental aspects of the popular Build-Measure-Learn philosophy, which recommends continuously tracking your KPIs and user behavior metrics and then striving to optimize them through planning iterations. Azure Monitor helps you collect metrics and logs relevant to your business and add new data points in the next deployment as required. For example, you can:
+- Use tools in Application Insights to [track user behavior and engagement](./app/tutorial-users.md).
+- Use [Impact analysis](./app/usage-impact.md) to help you prioritize which areas to focus on to drive to important KPIs.
## Next steps
azure-monitor Observability Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/observability-data.md
documentationcenter: ''
na Previously updated : 04/05/2022 Last updated : 08/18/2022 # Observability data in Azure Monitor Enabling observability across today's complex computing environments running distributed applications that rely on both cloud and on-premises services, requires collection of operational data from every layer and every component of the distributed system. You need to be able to perform deep insights on this data and consolidate it into a single pane of glass with different perspectives to support the multitude of stakeholders in your organization.
-[Azure Monitor](overview.md) collects and aggregates data from a variety of sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources, which gives you deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor.
+[Azure Monitor](overview.md) collects and aggregates data from various sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources, which gives you deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor.
:::image type="content" source="media/overview/azure-monitor-overview-optm.svg" alt-text="Diagram that shows an overview of Azure Monitor." border="false" lightbox="media/overview/azure-monitor-overview-optm.svg"::: ## Pillars of observability
-Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. These are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services.
+Metrics, logs, distributed traces, and changes are commonly referred to as the pillars of observability. These are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services.
Azure resources generate a significant amount of monitoring data. Azure Monitor consolidates this data along with monitoring data from other sources into either a Metrics or Logs platform. Each is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost effective manner. Insights in Azure Monitor such as [Application Insights](app/app-insights-overview.md) or [VM insights](vm/vminsights-overview.md) have analysis tools that allow you to focus on the particular monitoring scenario without having to understand the differences between the two types of data.
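As a small illustration of working with the Metrics platform described above, the following Python snippet reads platform metrics for one resource with the `azure-monitor-query` library; the resource ID, metric name, and time window are placeholders.

```python
# A sketch of reading a platform metric for one Azure resource.
# The resource ID and metric name are placeholders for your environment.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

response = client.query_resource(
    "<full-resource-id-of-a-vm>",
    metric_names=["Percentage CPU"],
    timespan=timedelta(hours=4),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```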
Logs in Azure Monitor are stored in a Log Analytics workspace that's based on [A
> [!NOTE] > It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription level events in Azure are written to an [activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations. Azure Monitor Logs is a log data platform that collects activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources. -
- You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal or add the results to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data. You can also create [log alerts](alerts/alerts-log.md) which will trigger an alert based on the results of a schedule query.
+You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal or add the results to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data. You can also create [log alerts](alerts/alerts-log.md), which trigger an alert based on the results of a scheduled query.
Read more about Azure Monitor Logs including their sources of data in [Logs in Azure Monitor](logs/data-platform-logs.md).
Distributed tracing in Azure Monitor is enabled with the [Application Insights S
Read more about distributed tracing at [What is Distributed Tracing?](app/distributed-tracing.md).
+## Changes
+
+Change Analysis alerts you to live site issues, outages, component failures, or other change data. It also provides insights into those application changes, increases observability, and reduces the mean time to repair. You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by going to Change Analysis via the Azure portal. For web app in-guest changes, you can enable the [Change Analysis tool via the Change Analysis portal](./change/change-analysis-enable.md#enable-azure-functions-and-web-app-in-guest-change-collection-via-the-change-analysis-portal).
+
+Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time. It detects managed identities, platform operating system upgrades, and hostname changes. Change Analysis securely queries IP configuration rules, TLS settings, and extension versions to provide more detailed change data.
+
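As a sketch of the Resource Graph foundation described above, the following Python snippet queries the `resourcechanges` table with the `azure-mgmt-resourcegraph` SDK; the subscription ID, time window, and projected properties are assumptions to adjust for your environment.

```python
# A sketch of listing recent resource changes through Azure Resource Graph.
# The subscription ID, time window, and projected columns are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

request = QueryRequest(
    subscriptions=["<subscription-id>"],
    query=(
        "resourcechanges "
        "| extend changeTime = todatetime(properties.changeAttributes.timestamp) "
        "| where changeTime > ago(1d) "
        "| project changeTime, properties.changeType, properties.targetResourceId "
        "| order by changeTime desc"
    ),
)

result = client.resources(request)
for row in result.data:  # default objectArray format returns a list of dicts
    print(row)
```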
+Read more about Change Analysis at [Use Change Analysis in Azure Monitor](./change/change-analysis.md). [Try Change Analysis for observability into your Azure subscriptions](https://aka.ms/cahome).
## Next steps
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Azure Monitor uses a version of the [Kusto Query Language](/azure/kusto/query/)
![Diagram that shows logs data flowing into Log Analytics for analysis.](media/overview/logs.png)
-Change Analysis alerts you to live site issues, outages, component failures, or other change data. It also provides insights into those application changes, increases observability, and reduces the mean time to repair. You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by going to Change Analysis via the Azure portal. For web app in-guest changes, you can enable Change Analysis by using the [Diagnose and solve problems tool](./change/change-analysis-enable.md#enable-web-app-in-guest-change-collection-via-azure-portal).
+Change Analysis alerts you to live site issues, outages, component failures, or other change data. It also provides insights into those application changes, increases observability, and reduces the mean time to repair. You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by going to Change Analysis via the Azure portal. For web app in-guest changes, you can enable Change Analysis by using the [Diagnose and solve problems tool](./change/change-analysis-enable.md#enable-azure-functions-and-web-app-in-guest-change-collection-via-the-change-analysis-portal).
Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time. It detects managed identities, platform operating system upgrades, and hostname changes. Change Analysis securely queries IP configuration rules, TLS settings, and extension versions to provide more detailed change data.
azure-monitor Resource Manager Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md
# Resource Manager template samples for Azure Monitor
-You can deploy and configure Azure Monitor at scale by using [Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). This article lists sample templates for Azure Monitor features. You can modify these samples for your particular requirements and deploy them by using any standard method for deploying Resource Manager templates.
+You can deploy and configure Azure Monitor at scale by using [Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). This article lists sample templates for Azure Monitor features. You can modify these samples for your particular requirements and deploy them by using any standard method for deploying Resource Manager templates.
-## Deploying the sample templates
-The basic steps to use the one of the template samples are:
+## Deploy the sample templates
+
+The basic steps to use one of the template samples are:
1. Copy the template and save it as a JSON file.
-2. Modify the parameters for your environment and save the JSON file.
-3. Deploy the template by using [any deployment method for Resource Manager templates](../azure-resource-manager/templates/deploy-powershell.md).
+1. Modify the parameters for your environment and save the JSON file.
+1. Deploy the template by using [any deployment method for Resource Manager templates](../azure-resource-manager/templates/deploy-powershell.md).
For example, use the following commands to deploy the template and parameter file to a resource group by using PowerShell or the Azure CLI:
az deployment group create \
## Next steps

-- Learn more about [Resource Manager templates](../azure-resource-manager/templates/overview.md).
+Learn more about [Resource Manager templates](../azure-resource-manager/templates/overview.md).
azure-netapp-files Backup Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md
na Previously updated : 05/23/2022 Last updated : 08/23/2022 # Restore a backup to a new volume
Restoring a backup creates a new volume with the same protocol type. This articl
* You should trigger the restore operation when there are no baseline backups. Otherwise, the restore might increase the load on the Azure Blob account where your data is backed up.
+* For large volumes (greater than 10 TB), it can take multiple hours to transfer all the data from the backup media.
+ See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for additional considerations about using Azure NetApp Files backup. ## Steps
See [Requirements and considerations for Azure NetApp Files backup](backup-requi
> If a volume is deleted but the backup policy wasn't disabled before the volume deletion, all the backups related to the volume are retained in the Azure storage, and you can find them under the associated NetApp account. See [Search backups at NetApp account level](backup-search.md#search-backups-at-netapp-account-level).
-2. From the backup list, select the backup to restore. Click the three dots (`…`) to the right of the backup, then click **Restore to new volume** from the Action menu.
+2. From the backup list, select the backup to restore. Select the three dots (`…`) to the right of the backup, then select **Restore to new volume** from the Action menu.
![Screenshot that shows the option to restore backup to a new volume.](../media/azure-netapp-files/backup-restore-new-volume.png)
-3. In the Create a Volume page that appears, provide information for the fields in the page as applicable, and click **Review + Create** to begin restoring the backup to a new volume.
+3. In the Create a Volume page that appears, provide information for the fields in the page as applicable, and select **Review + Create** to begin restoring the backup to a new volume.
* The **Protocol** field is pre-populated from the original volume and cannot be changed. However, if you restore a volume from the backup list at the NetApp account level, you need to specify the Protocol field. The Protocol field must match the protocol of the original volume. Otherwise, the restore operation will fail with the following error:
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
Standard network features now includes Global VNet peering. You must still [register the feature](configure-network-features.md#register-the-feature) before using it. [!INCLUDE [Standard network features pricing](includes/standard-networking-pricing.md)]-
-* [Cloud Backup for Virtual Machines on Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/install-cloud-backup-virtual-machines.md)
- You can now create VM consistent snapshot backups of VMs on Azure NetApp Files datastores using [Cloud Backup for Virtual Machines](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). The associated virtual appliance installs in the Azure VMware Solution cluster and provides policy based automated and consistent backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups) or complete datastores.
## July 2022
Azure NetApp Files is updated regularly. This article provides a summary about t
* Azure Key Vault to store Service Principal content * Azure Managed Disk as an alternate storage back end
-* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now in public preview. You can [Back up Azure NetApp Files datastores and VMs using Cloud Backup](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). This virtual appliance installs in the Azure VMware Solution cluster and provides policy based automated backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups) or complete datastores.
+* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now in public preview. You can back up Azure NetApp Files datastores and VMs using Cloud Backup. This virtual appliance installs in the Azure VMware Solution cluster and provides policy based automated backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups) or complete datastores.
* [Active Directory connection enhancement: Reset Active Directory computer account password](create-active-directory-connections.md#reset-active-directory) (Preview)
azure-percept Voice Control Your Inventory Then Visualize With Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/voice-control-your-inventory-then-visualize-with-power-bi-dashboard.md
- Title: Voice control your inventory with Azure Percept Audio
-description: This article will give detailed instructions for building the main components of the solution and deploying the edge speech AI.
---- Previously updated : 12/14/2021 -----
-# Voice control your inventory with Azure Percept Audio
-This article will give detailed instructions for building the main components of the solution and deploying the edge speech AI. The solution uses the Azure Percept DK device and the Audio SoM, Azure Speech Services -Custom Commands, Azure Function App, SQL Database, and Power BI. Users can learn how to manage their inventory with voice using Azure Percept Audio and visualize results with Power BI. The goal of this article is to empower users to create a basic inventory management solution.
-
-Users who want to take their solution further can add an additional edge module for visual inventory inspection or expand on the inventory visualizations within Power BI.
-
-In this tutorial, you learn how to:
--- Create an Azure SQL Server and SQL Database-- Create an Azure function project and publish to Azure-- Import an available template to Custom Commands-- Create a Custom Commands using an available template-- Deploy modules to your Devkit-- Import dataset from Azure SQL to Power BI--
-## Prerequisites
-- Percept DK ([Purchase](https://www.microsoft.com/store/build/azure-percept/8v2qxmzbz9vc))-- Azure Subscription : [Free trial account](https://azure.microsoft.com/free/)-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)-- [Azure Percept Audio setup](./quickstart-percept-audio-setup.md)-- Speaker or headphones that can connect to 3.5mm audio jack (optional) -- Install [Power BI Desktop](https://powerbi.microsoft.com/downloads/)-- Install [VS code](https://code.visualstudio.com/download) -- Install the [IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) and [IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) Extension in VS Code -- The [Azure Functions Core Tools](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-functions/functions-run-local.md) version 3.x.-- The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.-- The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.-- Create an [Azure SQL server](/azure/azure-sql/database/single-database-create-quickstart)--
-## Software architecture
-![Solution Architecture](./media/voice-control-your-inventory-images/voice-control-solution-architect.png)
--
-## Step 1: Create an Azure SQL Server and SQL Database
-In this section, you will learn how to create the table for this lab. This table will be the main source of truth for your current inventory and the basis of data visualized in Power BI.
-
-1. Set SQL server firewall
- 1. Click Set server firewall
- ![Set server firewall](./media/voice-control-your-inventory-images/set-server-firewall.png)
- 2. Add Rule name workshop - Start IP 0.0.0.0 and End IP 255.255.255.255 to the IP allowlist for lab purpose
- ![Rule name workshop](./media/voice-control-your-inventory-images/save-workshop.png)
- 3. Click Query editor to login your sql database <br />
- ![Query editor to login your sql database](./media/voice-control-your-inventory-images/query-editor.png) <br />
- 4. Login to your SQL database through SQL Server Authentication <br />
- ![SQL Server Authentication](./media/voice-control-your-inventory-images/sql-authentication.png) <br />
-2. Run the T-SQL query below in the query editor to create the table <br />
-
-
- ```sql
- -- Create table stock
- CREATE TABLE Stock
- (
-     color varchar(255),
-     num_box int
- )
-
- ```
-
- :::image type="content" source="./media/voice-control-your-inventory-images/create-sql-table.png" alt-text="Create SQL table.":::
-
-## Step 2: Create an Azure Functions project and publish to Azure
-In this section, you will use Visual Studio Code to create a local Azure Functions project in Python. Later in this article, you'll publish your function code to Azure.
-
-1. Go to the [GitHub link](https://github.com/microsoft/Azure-Percept-Reference-Solutions/tree/main/voice-control-inventory-management) and clone the repository
- 1. Click Code and HTTPS tab
- :::image type="content" source="./media/voice-control-your-inventory-images/clone-git.png" alt-text="Code and HTTPS tab.":::
- 2. Copy the command below in your terminal to clone the repository
- ![clone the repository](./media/voice-control-your-inventory-images/clone-git-command.png)
-
- ```
- git clone https://github.com/microsoft/Azure-Percept-Reference-Solutions/tree/main/voice-control-inventory-management
- ```
-
-2. Enable Azure Functions.
-
- 1. Click Azure Logo in the task bar
-
- ![Azure Logo in the task bar](./media/voice-control-your-inventory-images/select-azure-icon.png)
- 2. Click "..." and check that "Functions" is selected
- ![check that "Functions" is selected](./media/voice-control-your-inventory-images/select-function.png)
-
-3. Create your local project
- 1. Create a folder (ex: airlift_az_func) for your project workspace
- ![Create a folder](./media/voice-control-your-inventory-images/create-new-folder.png)
- 2. Choose the Azure icon in the Activity bar, then in Functions, select the <strong>Create new project...</strong>icon.
- ![select Azure icon](./media/voice-control-your-inventory-images/select-function-visio-studio.png)
- 3. Choose the directory location you just created for your project workspace and choose **Select**.
- ![the directory location](./media/voice-control-your-inventory-images/select-airlift-folder.png)
- 4. <strong>Provide the following information at the prompts</strong>: Select a language for your function project: Choose <strong>Python</strong>.
- ![following information at the prompts](./media/voice-control-your-inventory-images/language-python.png)
- 5. <strong>Select a Python alias to create a virtual environment</strong>: Choose the location of your Python interpreter. If the location isn't shown, type in the full path to your Python binary. Select skip virtual environment if you don't have Python installed.
- ![create a virtual environment](./media/voice-control-your-inventory-images/skip-virtual-env.png)
- 6. <strong>Select a template for your project's first function</strong>: Choose <strong>HTTP trigger</strong>.
- ![Select a template](./media/voice-control-your-inventory-images/http-trigger.png)
- 7. <strong>Provide a function name</strong>: Type <strong>HttpExample</strong>.
- ![Provide a function name](./media/voice-control-your-inventory-images/http-example.png)
- 8. <strong>Authorization level</strong>: Choose <strong>Anonymous</strong>, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md).
- ![power pi dashboard](./media/voice-control-your-inventory-images/create-http-trigger.png)
- 9. <strong>Select how you would like to open your project</strong>: Choose Add to workspace. Trust folder and enable all features.
-
- ![Authorization keys](./media/voice-control-your-inventory-images/trust-authorize.png)
- 1. You will see the HTTPExample function has been initiated
- ![ HTTPExample function](./media/voice-control-your-inventory-images/modify-init-py.png)
-
-4. Develop CRUD.py to update Azure SQL on Azure Function
- 1. Replace the content of the <strong>__init__.py</strong> in [here](https://github.com/microsoft/Azure-Percept-Reference-Solutions/blob/main/voice-control-inventory-management/azure-functions/__init__.py) by copying the raw content of <strong>__init__.py</strong>
- :::image type="content" source="./media/voice-control-your-inventory-images/copy-raw-content-mini.png" alt-text="Copy raw contents." lightbox="./media/voice-control-your-inventory-images/copy-raw-content.png":::
- 2. Drag and drop the <strong>CRUD.py</strong> to the same layer of <strong>init.py</strong>
- ![Drag and drop-1](./media/voice-control-your-inventory-images/crud-file.png)
- ![Drag and drop-2](./media/voice-control-your-inventory-images/show-crud-file.png)
- 3. Update the value of the <strong>sql server full address</strong>, <strong>database</strong>, <strong>username</strong>, <strong>password</strong> you created in section 1 in <strong>CRUD.py</strong>
- :::image type="content" source="./media/voice-control-your-inventory-images/server-name-mini.png" alt-text="Update the values."lightbox="./media/voice-control-your-inventory-images/server-name.png":::
- ![Update the value-2](./media/voice-control-your-inventory-images/server-parameter.png)
- 4. Replace the content of the <strong>requirements.txt</strong> in here by copying the raw content of requirements.txt
- ![Replace the content-1](./media/voice-control-your-inventory-images/select-requirements-u.png)
- :::image type="content" source="./media/voice-control-your-inventory-images/view-requirement-file-mini.png" alt-text="Replace the content." lightbox= "./media/voice-control-your-inventory-images/view-requirement-file.png":::
- 5. Press "Ctrl + S" to save the content
-
-5. Sign in to Azure
- 1. Before you can publish your app, you must sign into Azure. If you aren't already signed in, choose the Azure icon in the Activity bar, then in the Azure: Functions area, choose <strong>Sign in to Azure...</strong>.If you're already signed in, go to the next section.
- ![sign into Azure](./media/voice-control-your-inventory-images/sign-in-to-azure.png)
-
- 2. When prompted in the browser, choose your Azure account and sign in using your Azure account credentials.
- 3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong to your Azure account are displayed in the Side bar.
-
-6. Publish the project to Azure
- 1. Choose the Azure icon in the Activity bar, then in the <strong>Azure: Functions area</strong>, choose the <strong>Deploy to function app...</strong> button.
- ![icon in the Act bar](./media/voice-control-your-inventory-images/upload-to-cloud.png)
- 2. Provide the following information at the prompts:
- 1. <strong>Select folder</strong>: Choose a folder from your workspace or browse to one that contains your function app. You won't see this if you already have a valid function app opened.
- 2. <strong>Select subscription</strong>: Choose the subscription to use. You won't see this if you only have one subscription.
- 3. <strong>Select Function App in Azure</strong>: Choose + Create new Function App. (Don't choose the Advanced option, which isn't covered in this article.)
- 4. <strong>Enter a globally unique name for the function app</strong>: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
- 5. <strong>Select a runtime</strong>: Choose the version of <strong>3.9</strong>
- ![Choose the version](./media/voice-control-your-inventory-images/latest-python-version.png)
- 1. <strong>Select a location for new resources</strong>: Choose the region.
- 2. Select <strong>View Output</strong> in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
-
- ![including the Azure resources](./media/voice-control-your-inventory-images/select-view-output.png)
- 3. <strong>Note down the HTTP Trigger Url</strong> for further use in the section 4
- ![Note down the HTTP Trigger](./media/voice-control-your-inventory-images/example-http.png)
-
-7. Test your Azure Function App
- 1. Choose the Azure icon in the Activity bar, expand your subscription, your new function app, and Functions.
- 2. Right-click the HttpExample function and choose <strong>Execute Function Now</strong>....
- ![Right-click the HttpExample ](./media/voice-control-your-inventory-images/function.png)
- 3. In Enter request body you see the request message body value of
- ```
- { "color": "yellow", "num_box" :"2", "action":"remove" }
- ```
- ![request message body](./media/voice-control-your-inventory-images/type-new-command.png)
- Press Enter to send this request message to your function.
-
- 1. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
- ![a notification](./media/voice-control-your-inventory-images/example-output.png)
-
-## Step 3: Import an inventory speech template to Custom Commands
-In this section, you will import an existing application config json file to Custom Commands.
-
-1. Create an Azure Speech resource in a region that supports Custom Commands.
- 1. Click [Create Speech Services portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) to create an Azure Speech resource
- 1. Select your Subscription
- 2. Use the Resource group you just created in exercise 1
- 3. Select the Region(Please check here to see the support region in custom commands)
- 4. Create Name for your speech service
- 5. Select Pricing tier to Free F0
- 2. Go to the Speech Studio for Custom Commands
- 1. In a web browser, go to [Speech Studio](https://speech.microsoft.com/portal).
- 2. Select <strong>Custom Commands</strong>.
- The default view is a list of the Custom Commands applications you have under your selected subscription.
- :::image type="content" source="./media/voice-control-your-inventory-images/cognitive-service.png" alt-text="Custom Commands applications.":::
- 3. Select your Speech <strong>subscription</strong> and <strong>resource group</strong>, and then select <strong>Use resource</strong>.
- ![Select your Speech](./media/voice-control-your-inventory-images/speech-studio.png)
- 3. Import an existing application config as a new Custom Commands project
- 1. Select <strong>New project</strong> to create a project.
- ![ a new Custom Commands](./media/voice-control-your-inventory-images/create-new-project.png)
- 2. In the <strong>Name</strong> box, enter project name as Stock (or something else of your choice).
- 3. In the <strong>Language</strong> list, select <strong>English (United States)</strong>.
- 4. Select <strong>Browse files</strong> and in the browse window, select the <strong>smart-stock.json</strong> file in the <strong>custom-commands folder</strong>
- ![the browse window-1](./media/voice-control-your-inventory-images/smart-stock.png)
- ![the browse window-2](./media/voice-control-your-inventory-images/chose-smart-stock.png)
-
- 5. In the <strong>LUIS authoring resource</strong> list, select an authoring resource. If there are no valid authoring resources, create one by selecting <strong>Create new LUIS authoring resource</strong>.
- ![Create new LUIS](./media/voice-control-your-inventory-images/luis-resource.png)
-
- 6. In the <strong>Resource Name</strong> box, enter the name of the resource.
- 7. In the <strong>Resource Group</strong> list, select a resource group.
- 8. In the <strong>Location list</strong>, select a region.
- 9. In the <strong>Pricing Tier</strong> list, select a tier.
- 10. Next, select <strong>Create</strong> to create your project. After the project is created, select your project. You should now see overview of your new Custom Commands application.
--
-## Step 4: Train, test, and publish the Custom Commands
-In this section, you will train, test, and publish your Custom Commands
-
-1. Replace the web endpoints URL
- 1. Click Web endpoints and replace the URL
- 2. Replace the value in the URL to the <strong>HTTP Trigger Url</strong> you noted down in section 2 (ex: `https://xxx.azurewebsites.net/api/httpexample`)
- :::image type="content" source="./media/voice-control-your-inventory-images/web-point-url.png" alt-text="Replace the value in the URL.":::
-2. Create LUIS prediction resource
- 1. Click <strong>settings</strong> and create a <strong>S0</strong> prediction resource under LUIS <strong>prediction resource</strong>.
- :::image type="content" source="./media/voice-control-your-inventory-images/predict-source.png" alt-text="Prediction resource-1.":::
- ![prediction resource-2](./media/voice-control-your-inventory-images/tier-s0.png)
-3. Train and Test with your custom command
- 1. Click <strong>Save</strong> to save the Custom Commands Project
- 2. Click <strong>Train</strong> to Train your custom commands service
- :::image type="content" source="./media/voice-control-your-inventory-images/train-model.png" alt-text="Custom commands train model.":::
- 3. Click <strong>Test</strong> to test your custom commands service
- :::image type="content" source="./media/voice-control-your-inventory-images/test-model.png" alt-text="Custom commands test model.":::
- 4. Type "Add 2 green boxes" in the pop-up window to see if it can respond correctly
- ![pop-up window](./media/voice-control-your-inventory-images/outcome.png)
-4. Publish your custom command
- 1. Click Publish to publish the custom commands
- :::image type="content" source="./media/voice-control-your-inventory-images/publish.png" alt-text="Publish the custom commands.":::
-5. Note down your application ID, speech key in the settings for further use
- :::image type="content" source="./media/voice-control-your-inventory-images/application-id.png" alt-text="Application ID.":::
-
-## Step 5: Deploy modules to your Devkit
-In this section, you will learn how to use deployment manifest to deploy modules to your device.
-1. Set IoT Hub Connection String
- 1. Go to your IoT Hub service in Azure portal. Click <strong>Shared access policies</strong> -> <strong>Iothubowner</strong>
- 2. Click <strong>Copy</strong> to get the <strong>primary connection string</strong>
- :::image type="content" source="./media/voice-control-your-inventory-images/iot-hub-owner.png" alt-text="Primary connection string.":::
- 3. In Explorer of VS Code, click "Azure IoT Hub".
- ![click on hub](./media/voice-control-your-inventory-images/azure-iot-hub-studio.png)
- 4. Click "Set IoT Hub Connection String" in context menu
- ![choose hub string](./media/voice-control-your-inventory-images/connection-string.png)
- 5. An input box will pop up, then enter your IoT Hub Connection String<br />
-2. Open VSCode to open the folder you cloned in the section 1 <br />
- ![Open VSCode](./media/voice-control-your-inventory-images/open-folder.png)
-3. Modify the envtemplate<br />
- 1. Right click the <strong>envtemplate</strong> and rename to <strong>.env</strong>. Provide values for all variables such as below.<br />
- ![click on env template](./media/voice-control-your-inventory-images/env-template.png)
- ![select the end env template](./media/voice-control-your-inventory-images/env-file.png)
- 2. Replace your Application ID and Speech resource key by checking your Speech Studio<br />
- ![check the speech studio-1](./media/voice-control-your-inventory-images/general-app-id.png)
- ![check the speech studio-2](./media/voice-control-your-inventory-images/region-westus.png)
- 3. Check the region by checking your Azure speech service, and mapping the <strong>display name</strong> (e.g. West US) to <strong>name</strong> (e.g., westus) [here](https://azuretracks.com/2021/04/current-azure-region-names-reference/).
- ![confirm region](./media/voice-control-your-inventory-images/portal-westus.png)
- 4. Replace the Speech Region with the name (for example, westus) you just got from the mapping table. (Check that all characters are lowercase.)
- ![change region](./media/voice-control-your-inventory-images/region-westus-2.png)
-
-4. Deploy modules to device
- 1. Right click on deployment.template.json and <strong>select Generate IoT Edge Deployment Manifest</strong>
- ![generate Manifest](./media/voice-control-your-inventory-images/deployment-manifest.png)
- 2. After you generated the manifest, you can see <strong>deployment.amd64.json</strong> is under config folder. Right click on deployment.amd64.json and choose Create Deployment for <strong>Single Device</strong>
- ![create deployment](./media/voice-control-your-inventory-images/config-deployment-manifest.png)
- 3. Choose the IoT Hub device you are going to deploy
- ![choose device](./media/voice-control-your-inventory-images/iot-hub-device.png)
- 4. Check your log of the azurespeechclient module
- 1. Go to Azure portal to click your Azure IoT Hub
- !:::image type="content" source="./media/voice-control-your-inventory-images/voice-iothub.png" alt-text="Select IoT hub.":::
- 2. Click IoT Edge
- :::image type="content" source="./media/voice-control-your-inventory-images/portal-iotedge.png" alt-text="Go to IoT edge.":::
- 3. Click your Edge device to see if the modules run well
- :::image type="content" source="./media/voice-control-your-inventory-images/device-id.png" alt-text="Confirm modules.":::
- 4. Click <strong>azureearspeechclientmodule</strong> module
- :::image type="content" source="./media/voice-control-your-inventory-images/azure-ear-module.png" alt-text="Select ear module.":::
- 5. Click <strong>Troubleshooting</strong> tab of the azurespeechclientmodule
- ![select client mod](./media/voice-control-your-inventory-images/troubleshoot.png)
-
- 5. Check your log of the azurespeechclient module
- 1. Change the Time range to 3 minutes to check the latest log
- ![confirm log](./media/voice-control-your-inventory-images/time-range.png)
- 2. Speak <strong>"Computer, remove 2 red boxes"</strong> to your Azure Percept Audio
- (Computer is the wake word to wake Azure Percept DK, and remove 2 red boxes is the command)
- Check the speech log to see if it shows <strong>"sure, remove 2 red boxes. 2 red boxes have been removed."</strong>
- :::image type="content" source="./media/voice-control-your-inventory-images/speech-regconizing.png" alt-text="Verify log.":::
- >[!NOTE]
- >If you have set up the wake word before, please use the wake word you set up to wake your DK.
-
-
-## Step 6: Import dataset from Azure SQL to Power BI
-In this section, you will create a Power BI report and check if the report has been updated after you speak commands to your Azure Percept Audio.
-1. Open the Power BI Desktop Application and import data from Azure SQL Server
- 1. Click close of the pop-up window
- ![close import data from SQL Server](./media/voice-control-your-inventory-images/power-bi-get-started.png)
- 2. Import data from SQL Server
- ![Import data from SQL Server](./media/voice-control-your-inventory-images/import-sql-server.png)
- 3. Enter your sql server name \<sql server name\>.database.windows.NET, and choose DirectQuery
- ![enter name for importing data from SQL Server](./media/voice-control-your-inventory-images/direct-query.png)
- 4. Select Database, and enter the username and the password
- ![select databae for importing data from SQL Server](./media/voice-control-your-inventory-images/database-pw.png)
- 5. <strong>Select</strong> the table Stock, and Click <strong>Load</strong> to load dataset to Power BI Desktop<br />
-
- ![choose strong option for import data from SQL Server](./media/voice-control-your-inventory-images/stock-table.png)
-2. Create your Power BI report
- 1. Click color, num_box columns in the Fields. And choose visualization Clustered column chart to present your chart.<br />
- ![Power BI report column box](./media/voice-control-your-inventory-images/color.png)
- ![Power BI report cluster column](./media/voice-control-your-inventory-images/graph.png)
- 2. Drag and drop the <strong>color</strong>column to the <strong>Legend</strong> and you will get the chart that looks like below.
- ![Power BI report-1](./media/voice-control-your-inventory-images/pull-out-color.png)
- ![Power BI report-2](./media/voice-control-your-inventory-images/number-box-by-color.png)
- 3. Click <strong>format</strong> and click Data colors to change the colors accordingly. You will have the charts that look like below.
- ![Power BI report-3](./media/voice-control-your-inventory-images/finish-color-graph.png)
- 4. Select card visualization
- ![Power BI report-4](./media/voice-control-your-inventory-images/choose-card.png)
- 5. Check the num_box
- ![Power BI report-5](./media/voice-control-your-inventory-images/check-number-box.png)
- 6. Drag and drop the <strong>color</strong> column to <strong>Filters on this visual</strong>
- ![Power BI report-6](./media/voice-control-your-inventory-images/pull-color-to-data-fields.png)
- 7. Select green in the Filters on this visual
-
- ![Power BI report-7](./media/voice-control-your-inventory-images/visual-filter.png)
- 8. Double click the column name of the column in the Fields and change the name of the column from "Count of the green box"
- ![Power BI report-8](./media/voice-control-your-inventory-images/show-number-box.png)
-3. Speak command to your Devkit and refresh Power BI
- 1. Speak "Add three green boxes" to Azure Percept Audio
- 2. Click "Refresh". You will see the number of green boxes has been updated.
- ![Power BI report-9](./media/voice-control-your-inventory-images/refresh-power-bi.png)
-
-Congratulations! You now know how to develop your own voice assistant! You went through a lot of configuration and set up the custom commands for the first time. Great job! Now you can start trying more complex scenarios after this tutorial. Looking forward to seeing you design more interesting scenarios and let voice assistant help in the future.
-
-<!-- 6. Clean up resources
-Required. If resources were created during the tutorial. If no resources were created,
-state that there are no resources to clean up in this section.
>-
-## Clean up resources
-
-If you're not going to continue to use this application, delete
-resources with the following steps:
-
-1. Login to the [Azure portal](https://portal.azure.com), go to `Resource Group` you have been using for this tutorial. Delete the SQL DB, Azure Function, and Speech Service resources.
-
-2. Go into [Azure Percept Studio](https://portal.azure.com/#blade/AzureEdgeDevices/Main/overview), select your device from the `Device` blade, click the `Speech` tab within your device, and under `Configuration` remove reference to your custom command.
-
-3. Go in to [Speech Studio](https://speech.microsoft.com/portal) and delete project created for this tutorial.
-
-4. Login to [Power BI](https://msit.powerbi.com/home) and select your Workspace (this is the same Group Workspace you used while creating the Stream Analytics job output), and delete workspace.
---
-<!-- 7. Next steps
-Required: A single link in the blue box format. Point to the next logical tutorial
-in a series, or, if there are no other tutorials, to some other cool thing the
-customer can do.
>-
-## Next steps
-
-Check out the tutorial [Create a people counting solution with Azure Percept Vision](./create-people-counting-solution-with-azure-percept-devkit-vision.md).
-
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-github-actions.md
description: In this quickstart, you learn how to deploy Bicep files by using Gi
Previously updated : 07/18/2022 Last updated : 08/22/2022
To create a workflow, take the following steps:
```yml
on: [push]
name: Azure ARM
+ permissions:
+   id-token: write
+   contents: read
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
```
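For context, the added `permissions` block grants the workflow run an OIDC token so the job can sign in to Azure with federated credentials rather than a stored publishing secret. A minimal sketch of the matching login step is shown below; the secret names (`AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, `AZURE_SUBSCRIPTION_ID`) are assumptions for illustration, not values from this article:

```yml
    steps:
      # Exchange the workflow's OIDC token for an Azure access token.
      - name: Azure login
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
```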
azure-resource-manager Microsoft Solutions Armapicontrol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-solutions-armapicontrol.md
Title: ArmApiControl UI element
-description: Describes the Microsoft.Solutions.ArmApiControl UI element for Azure portal. Used for calling API operations.
+description: Describes the Microsoft.Solutions.ArmApiControl UI element for Azure portal that's used to call API operations.
-- Previously updated : 07/14/2020 -+ Last updated : 08/23/2022 # Microsoft.Solutions.ArmApiControl UI element
-ArmApiControl lets you get results from an Azure Resource Manager API operation. Use the results to populate dynamic content in other controls.
+The `ArmApiControl` gets results from an Azure Resource Manager API operation using GET or POST. You can use the results to populate dynamic content in other controls.
## UI sample
-There's no UI for this control.
+There's no UI for `ArmApiControl`.
## Schema
-The following example shows the schema for this control:
+The following example shows the control's schema.
```json
{
- "name": "testApi",
- "type": "Microsoft.Solutions.ArmApiControl",
- "request": {
- "method": "{HTTP-method}",
- "path": "{path-for-the-URL}",
-    "body": {
-      "key1": "val1",
-      "key2": "val2"
- }
+ "name": "testApi",
+ "type": "Microsoft.Solutions.ArmApiControl",
+ "request": {
+ "method": "{HTTP-method}",
+ "path": "{path-for-the-URL}",
+ "body": {
+ "key1": "value1",
+ "key2": "value2"
}
+ }
}
```

## Sample output
-The control's output is not displayed to the user. Instead, the result of the operation is used in other controls.
+The control's output isn't displayed to the user. Instead, the operation's results are used in other controls.
## Remarks

-- The `request.method` property specifies the HTTP method. Only `GET` or `POST` are allowed.
-- The `request.path` property specifies a URL that must be a relative path to an ARM endpoint. It can be a static path or can be constructed dynamically by referring output values of the other controls.
+- The `request.method` property specifies the HTTP method. Only GET or POST are allowed.
+- The `request.path` property specifies a URL that must be a relative path to an Azure Resource Manager endpoint. It can be a static path or can be constructed dynamically by referring to output values of other controls.
- For example, an ARM call into `Microsoft.Network/expressRouteCircuits` resource provider:
+ For example, an Azure Resource Manager call into the `Microsoft.Network/expressRouteCircuits` resource provider.
```json
- "path": "subscriptions/<subid>/resourceGroup/<resourceGroupName>/providers/Microsoft.Network/expressRouteCircuits/<routecircuitName>/?api-version=2020-05-01"
+ "path": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}?api-version=2022-01-01"
```

- The `request.body` property is optional. Use it to specify a JSON body that is sent with the request. The body can be static content or constructed dynamically by referring to output values from other controls.

## Example
-In the following example, the `providersApi` element calls an API to get an array of provider objects.
+In the following example, the `providersApi` element uses the `ArmApiControl` and calls an API to get an array of provider objects.
+
+The `providersDropDown` element's `allowedValues` property is configured to use the array and get the provider names. The provider names are displayed in the dropdown list.
-The `allowedValues` property of the `providersDropDown` element is configured to get the names of the providers. It displays them in the dropdown list.
+The `output` property `providerName` shows the provider name that was selected from the dropdown list. The output can be used to pass the value to a parameter in an Azure Resource Manager template.
```json
{
- "name": "providersApi",
- "type": "Microsoft.Solutions.ArmApiControl",
- "request": {
- "method": "GET",
- "path": "[concat(subscription().id, '/providers/Microsoft.Network/expressRouteServiceProviders?api-version=2019-02-01')]"
+ "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#",
+ "handler": "Microsoft.Azure.CreateUIDef",
+ "version": "0.1.2-preview",
+ "parameters": {
+ "basics": [
+ {
+ "name": "providersApi",
+ "type": "Microsoft.Solutions.ArmApiControl",
+ "request": {
+ "method": "GET",
+ "path": "[concat(subscription().id, '/providers/Microsoft.Network/expressRouteServiceProviders?api-version=2022-01-01')]"
+ }
+ },
+ {
+ "name": "providerDropDown",
+ "type": "Microsoft.Common.DropDown",
+ "label": "Provider",
+ "toolTip": "The provider that offers the express route connection.",
+ "constraints": {
+ "allowedValues": "[map(basics('providersApi').value, (item) => parse(concat('{\"label\":\"', item.name, '\",\"value\":\"', item.name, '\"}')))]",
+ "required": true
+ },
+ "visible": true
+ }
+ ],
+ "steps": [],
+ "outputs": {
+ "providerName": "[basics('providerDropDown')]"
}
-},
-{
- "name": "providerDropDown",
- "type": "Microsoft.Common.DropDown",
- "label": "Provider",
- "toolTip": "The provider that offers the express route connection.",
- "constraints": {
- "allowedValues": "[map(steps('settings').providersApi.value, (item) => parse(concat('{\"label\":\"', item.name, '\",\"value\":\"', item.name, '\"}')))]",
- "required": true
- },
- "visible": true
+ }
}
```
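To see what the `allowedValues` expression yields, suppose `providersApi` returns a `value` array such as `[{ "name": "Contoso" }, { "name": "Fabrikam" }]`. The `map`/`parse` expression would then resolve to the following label/value pairs; this is a sketch of the resolved data, not literal output from the service:

```json
[
  { "label": "Contoso", "value": "Contoso" },
  { "label": "Fabrikam", "value": "Fabrikam" }
]
```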
-For an example of using the ArmApiControl to check the availability of a resource name, see [Microsoft.Common.TextBox](microsoft-common-textbox.md).
+For an example of the `ArmApiControl` that uses the `request.body` property, see the [Microsoft.Common.TextBox](microsoft-common-textbox.md#single-line) single-line example. That example checks the availability of a storage account name and returns a message if the name is unavailable.
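As a rough sketch of a `request.body` usage (the element name, API path, API version, and the referenced `nameTextBox` control below are illustrative assumptions, not part of this article), a POST call that checks storage account name availability could be shaped like this:

```json
{
  "name": "nameApi",
  "type": "Microsoft.Solutions.ArmApiControl",
  "request": {
    "method": "POST",
    "path": "[concat(subscription().id, '/providers/Microsoft.Storage/checkNameAvailability?api-version=2021-09-01')]",
    "body": "[parse(concat('{\"name\":\"', basics('nameTextBox'), '\",\"type\":\"Microsoft.Storage/storageAccounts\"}'))]"
  }
}
```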
## Next steps

-- For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).
+- For an introduction to creating UI definitions, see [CreateUiDefinition.json for Azure managed application's create experience](create-uidefinition-overview.md).
- For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).
+- To learn more about functions like `map`, `basics`, and `parse`, see [CreateUiDefinition functions](create-uidefinition-functions.md).
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | networkWatchers | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. |
> | privateDnsZones | resource group | 1-63 characters<br><br>2 to 34 labels<br><br>Each label is a set of characters separated by a period. For example, **contoso.com** has 2 labels. | Each label can contain alphanumerics, underscores, and hyphens.<br><br>Each label is separated by a period. |
> | privateDnsZones / virtualNetworkLinks | private DNS zone | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. |
+> | privateEndpoints | resource group | 2-64 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. |
> | publicIPAddresses | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. |
> | publicIPPrefixes | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. |
> | routeFilters | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. |
azure-resource-manager Quickstart Create Templates Use The Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md
Title: Deploy template - Azure portal description: Learn how to create your first Azure Resource Manager template (ARM template) using the Azure portal. You also learn how to deploy it. Previously updated : 03/24/2022 Last updated : 08/22/2022 #Customer intent: As a developer new to Azure deployment, I want to learn how to use the Azure portal to create and edit Resource Manager templates, so I can use the templates to deploy Azure resources.
-# Quickstart: Create and deploy ARM templates by using the Azure portal
+# Quickstart: Create and deploy ARM templates by using the Azure portal
-In this quickstart, you learn how to generate an Azure Resource Manager template (ARM template) in the Azure portal. You edit and deploy the template from the portal.
+In this quickstart, you learn how to create an Azure Resource Manager template (ARM template) in the Azure portal. You edit and deploy the template from the portal.
ARM templates are JSON files that define the resources you need to deploy for your solution. To understand the concepts associated with deploying and managing your Azure solutions, see [template deployment overview](overview.md). After completing the tutorial, you deploy an Azure Storage account. The same process can be used to deploy other Azure resources.
-![Resource Manager template quickstart portal diagram](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-export-deploy-template-portal.png)
- If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-## Generate a template using the portal
-
-If you're new to Azure deployment, you may find it challenging to create an ARM template. To get around this challenge, you can configure your deployment in the Azure portal and download the corresponding ARM template. You save the template and reuse it in the future.
+## Retrieve a custom template
-Many experienced template developers use this method to generate templates when they try to deploy Azure resources that they aren't familiar with. For more information about exporting templates by using the portal, see [Export resource groups to templates](../management/manage-resource-groups-portal.md#export-resource-groups-to-templates). The other way to find a working template is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/).
+Rather than manually building an entire ARM template, let's start by retrieving a pre-built template that accomplishes our goal. The [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) repo contains a large collection of templates that deploy common scenarios. The portal makes it easy for you to find and use templates from this repo. You can save the template and reuse it later.
1. In a web browser, go to the [Azure portal](https://portal.azure.com) and sign in.
-1. From the Azure portal menu, select **Create a resource**.
+1. From the Azure portal search bar, search for **deploy a custom template** and then select it from the available options.
- ![Select Create a resource from Azure portal menu](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-create-a-resource.png)
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/search-custom-template.png" alt-text="Screenshot of Search for Custom Template.":::
-1. In the search box, type **storage account**, and then press **[ENTER]**.
-1. Select the down arrow next to **Create**, and then select **Storage account**.
+1. For **Template source**, notice that **Quickstart template** is selected by default. You can keep this selection. In the drop-down, search for *quickstarts/microsoft.storage/storage-account-create* and select it. After finding the quickstart template, select **Select template**.
- ![Create an Azure storage account](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-create-storage-account-portal.png)
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/select-custom-template.png" alt-text="Screenshot of Select Quickstart Template.":::
-1. Enter the following information:
+1. In the next blade, you provide custom values to use for the deployment.
- |Name|Value|
- |-|-|
- |**Resource group**|Select **Create new**, and specify a resource group name of your choice. On the screenshot, the resource group name is *mystorage1016rg*. Resource group is a container for Azure resources. Resource group makes it easier to manage Azure resources. |
- |**Name**|Give your storage account a unique name. The storage account name must be unique across all of Azure, and it contain only lowercase letters and numbers. Name must be between 3 and 24 characters. If you get an error message saying "The storage account name 'mystorage1016' is already taken", try using **&lt;your name>storage&lt;Today's date in MMDD>**, for example **johndolestorage1016**. For more information, see [Naming rules and restrictions](/azure/architecture/best-practices/resource-naming).|
+ For **Resource group**, select **Create new** and provide *myResourceGroup* for the name. You can use the default values for the other fields. When you've finished providing values, select **Review + create**.
- You can use the default values for the rest of the properties.
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/input-fields-template.png" alt-text="Screenshot for Input Fields for Template.":::
+
+1. The portal validates your template and the values you provided. After validation succeeds, select **Create** to start the deployment.
+
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/template-validation.png" alt-text="Screenshot for Validation and create.":::
- ![Create an Azure storage account configuration using the Azure portal](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-create-storage-account.png)
+1. After the deployment starts, you'll see its status. When it completes successfully, select **Go to resource** to see the storage account.
- > [!NOTE]
- > Some of the exported templates require some edits before you can deploy them.
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/deploy-success.png" alt-text="Screenshot for Deployment Succeeded Notification.":::
-1. Select **Review + create** on the bottom of the screen. Don't select **Create** in the next step.
-1. Select **Download a template for automation** on the bottom of the screen. The portal shows the generated template:
+1. From this screen, you can view the new storage account and its properties.
- ![Generate a template from the portal](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-create-storage-account-template.png)
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-storage-account.png" alt-text="Screenshot for View Deployment Page.":::
- The main pane shows the template. It's a JSON file with six top-level elements - `schema`, `contentVersion`, `parameters`, `variables`, `resources`, and `output`. For more information, see [Understand the structure and syntax of ARM templates](./syntax.md)
+## Edit and deploy the template
- There are nine parameters defined. One of them is called **storageAccountName**. The second highlighted part on the previous screenshot shows how to reference this parameter in the template. In the next section, you edit the template to use a generated name for the storage account.
+You can use the portal to quickly develop and deploy ARM templates. In general, we recommend Visual Studio Code for developing your ARM templates and Azure CLI or Azure PowerShell for deploying them, but the portal works well for quick deployments without installing those tools.
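If you later switch to the command line, a deployment of a saved template from Azure CLI might look like the following sketch; the resource group and file names are placeholders, not values from this quickstart:

```azurecli
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json
```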
+
+In this section, let's suppose you have an ARM template that you want to deploy one time without setting up the other tools.
+
+1. Again, select **Deploy a custom template** in the portal.
+
+1. This time, select **Build your own template in the editor**.
+
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/build-own-template.png" alt-text="Screenshot for Build your own template.":::
+
+1. You see a blank template.
+
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/blank-template.png" alt-text="Screenshot for Blank Template.":::
+
+1. Replace the blank template with the following template. It deploys a virtual network with a subnet.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vnetName": {
+ "type": "string",
+ "defaultValue": "VNet1",
+ "metadata": {
+ "description": "VNet name"
+ }
+ },
+ "vnetAddressPrefix": {
+ "type": "string",
+ "defaultValue": "10.0.0.0/16",
+ "metadata": {
+ "description": "Address prefix"
+ }
+ },
+ "subnetPrefix": {
+ "type": "string",
+ "defaultValue": "10.0.0.0/24",
+ "metadata": {
+ "description": "Subnet Prefix"
+ }
+ },
+ "subnetName": {
+ "type": "string",
+ "defaultValue": "Subnet1",
+ "metadata": {
+ "description": "Subnet Name"
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2021-08-01",
+ "name": "[parameters('vnetName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "[parameters('vnetAddressPrefix')]"
+ ]
+ },
+ "subnets": [
+ {
+ "name": "[parameters('subnetName')]",
+ "properties": {
+ "addressPrefix": "[parameters('subnetPrefix')]"
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
- In the template, one Azure resource is defined. The type is `Microsoft.Storage/storageAccounts`. Take a look of how the resource is defined, and the definition structure.
-1. Select **Download** from the top of the screen.
-1. Open the downloaded zip file, and then save **template.json** to your computer. In the next section, you use a template deployment tool to edit the template.
-1. Select the **Parameter** tab to see the values you provided for the parameters. Write down these values, you need them in the next section when you deploy the template.
+1. Select **Save**.
- ![Screenshot that highlights the Parameter tab that shows the values you provided.](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-create-storage-account-template-parameters.png)
+1. You see the blade for providing deployment values. Again, select **myResourceGroup** for the resource group. You can use the other default values. When you're done providing values, select **Review + create**.
- Using both the template file and the parameters file, you can create a resource, in this tutorial, an Azure storage account.
+1. After the portal validates the template, select **Create**.
-## Edit and deploy the template
+1. When the deployment completes, you see its status. This time, select the name of the resource group.
-The Azure portal can be used to perform some basic template editing. In this quickstart, you use a portal tool called *Template Deployment*. *Template Deployment* is used in this tutorial so you can complete the whole tutorial using one interface - the Azure portal. To edit a more complex template, consider using [Visual Studio Code](quickstart-create-templates-use-visual-studio-code.md), which provides richer edit functionalities.
-
-> [!IMPORTANT]
-> Template Deployment provides an interface for testing simple templates. It is not recommended to use this feature in production. Instead, store your templates in an Azure storage account, or a source code repository like GitHub.
-
-Azure requires that each Azure service has a unique name. The deployment could fail if you entered a storage account name that already exists. To avoid this issue, you modify the template to use a template function call `uniquestring()` to generate a unique storage account name.
-
-1. From the Azure portal menu, in the search box, type **deploy**, and then select **Deploy a custom template**.
-
- ![Azure Resource Manager templates library](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-library.png)
-
-1. Select **Build your own template in the editor**.
-1. Select **Load file**, and then follow the instructions to load template.json you downloaded in the last section.
-
- After the file is loaded, you may notice a warning that the template schema wasn't loaded. You can ignore this warning. The schema is valid.
-
-1. Make the following three changes to the template:
-
- ![Azure Resource Manager templates](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-edit-storage-account-template-revised.png)
-
- - Remove the **storageAccountName** parameter as shown in the previous screenshot.
- - Add one variable called **storageAccountName** as shown in the previous screenshot:
-
- ```json
- "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]"
- ```
-
- Two template functions are used here: `concat()` and `uniqueString()`.
- - Update the name element of the **Microsoft.Storage/storageAccounts** resource to use the newly defined variable instead of the parameter:
-
- ```json
- "name": "[variables('storageAccountName')]",
- ```
-
- The final template shall look like:
-
- ```json
- {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string"
- },
- "accountType": {
- "type": "string"
- },
- "kind": {
- "type": "string"
- },
- "accessTier": {
- "type": "string"
- },
- "minimumTlsVersion": {
- "type": "string"
- },
- "supportsHttpsTrafficOnly": {
- "type": "bool"
- },
- "allowBlobPublicAccess": {
- "type": "bool"
- },
- "allowSharedKeyAccess": {
- "type": "bool"
- }
- },
- "variables": {
- "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]"
- },
- "resources": [
- {
- "name": "[variables('storageAccountName')]",
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-06-01",
- "location": "[parameters('location')]",
- "properties": {
- "accessTier": "[parameters('accessTier')]",
- "minimumTlsVersion": "[parameters('minimumTlsVersion')]",
- "supportsHttpsTrafficOnly": "[parameters('supportsHttpsTrafficOnly')]",
- "allowBlobPublicAccess": "[parameters('allowBlobPublicAccess')]",
- "allowSharedKeyAccess": "[parameters('allowSharedKeyAccess')]"
- },
- "dependsOn": [],
- "sku": {
- "name": "[parameters('accountType')]"
- },
- "kind": "[parameters('kind')]",
- "tags": {}
- }
- ],
- "outputs": {}
- }
- ```
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-second-deployment.png" alt-text="Screenshot for View second deployment.":::
-1. Select **Save**.
-1. Enter the following values:
+1. Notice that your resource group now contains a storage account and a virtual network.
+
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-resource-group.png" alt-text="Screenshot for View Storage Account and Virtual Network.":::
+
+## Export a custom template
- |Name|Value|
- |-|-|
- |**Resource group**|Select the resource group name you created in the last section. |
- |**Region**|Select a location for the resource group. For example, **Central US**. |
- |**Location**|Select a location for the storage account. For example, **Central US**. |
- |**Account Type**|Enter **Standard_LRS** for this quickstart. |
- |**Kind**|Enter **StorageV2** for this quickstart. |
- |**Access Tier**|Enter **Hot** for this quickstart. |
- |**Minimum TLS Version**|Enter **TLS1_0**. |
- |**Supports Https Traffic Only**| Select **true** for this quickstart. |
- |**Allow Blob Public Access**| Select **false** for this quickstart. |
- |**Allow Shared Key Access**| Select **true** for this quickstart. |
+Sometimes the easiest way to work with an ARM template is to have the portal generate it for you. The portal can create an ARM template based on the current state of your resource group.
-1. Select **Review + create**.
-1. Select **Create**.
-1. Select the bell icon (notifications) from the top of the screen to see the deployment status. You shall see **Deployment in progress**. Wait until the deployment is completed.
+1. In your resource group, select **Export template**.
+
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/export-template.png" alt-text="Screenshot for Export Template.":::
- ![Azure Resource Manager templates deployment notification](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-portal-notification.png)
+1. The portal generates a template for you based on the current state of the resource group. Notice that this template isn't the same as either template you deployed earlier. It contains definitions for both the storage account and virtual network, along with other resources like a blob service that was automatically created for your storage account.
-1. Select **Go to resource group** from the notification pane. You shall see a screen similar to:
+1. To save this template for later use, select **Download**.
- ![Azure Resource Manager templates deployment resource group](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-portal-deployment-resource-group.png)
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/download-template.png" alt-text="Screenshot for Download exported template.":::
- You can see the deployment status was successful, and there's only one storage account in the resource group. The storage account name is a unique string generated by the template. To learn more about using Azure storage accounts, see [Quickstart: Upload, download, and list blobs using the Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md).
+You now have an ARM template that represents the current state of the resource group. This template is auto-generated. Before using the template for production deployments, you may want to revise it, such as adding parameters for template reuse.
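As a minimal sketch of that kind of revision (the parameter name and property values below are illustrative, not taken from your exported template), you might replace a hard-coded storage account name with a parameter so the template can be reused:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": {
        "description": "Name of the storage account, supplied at deployment time."
      }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-09-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "Standard_LRS"
      },
      "kind": "StorageV2"
    }
  ]
}
```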
## Clean up resources

When the Azure resources are no longer needed, clean up the resources you deployed by deleting the resource group.
-1. In the Azure portal, select **Resource group** on the left menu.
-1. Enter the resource group name in the **Filter by name** field.
+1. In the Azure portal, select **Resource groups** on the left menu.
+1. Enter the resource group name in the **Filter for any field** search box.
1. Select the resource group name. You'll see the storage account in the resource group.
1. Select **Delete resource group** in the top menu.

## Next steps
-In this tutorial, you learned how to generate a template from the Azure portal, and how to deploy the template using the portal. The template used in this Quickstart is a simple template with one Azure resource. When the template is complex, it's easier to use Visual Studio Code or Visual Studio to develop the template. To learn more about template development, see our new beginner tutorial series:
+In this tutorial, you learned how to generate a template from the Azure portal, and how to deploy the template using the portal. The template used in this quickstart is a simple template with one Azure resource. When the template is complex, it's easier to use Visual Studio Code or Visual Studio to develop the template. To learn more about template development, see our new beginner tutorial series:
> [!div class="nextstepaction"] > [Beginner tutorials](./template-tutorial-create-first-template.md)
azure-resource-manager Quickstart Create Templates Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md
Title: Create template - Visual Studio Code description: Use Visual Studio Code and the Azure Resource Manager tools extension to work on Azure Resource Manager templates (ARM templates). Previously updated : 08/09/2020 Last updated : 06/27/2022 #Customer intent: As a developer new to Azure deployment, I want to learn how to use Visual Studio Code to create and edit Resource Manager templates, so I can use the templates to deploy Azure resources.
-# Quickstart: Create ARM templates with Visual Studio Code
+# Quickstart: Create ARM templates with Visual Studio Code
-The Azure Resource Manager Tools for Visual Studio Code provide language support, resource snippets, and resource autocompletion. These tools help create and validate Azure Resource Manager templates (ARM templates). In this quickstart, you use the extension to create an ARM template from scratch. While doing so you experience the extensions capabilities such as ARM template snippets, validation, completions, and parameter file support.
+The Azure Resource Manager Tools for Visual Studio Code provide language support, resource snippets, and resource autocompletion. These tools help create and validate Azure Resource Manager templates (ARM templates), and are therefore the recommended method of ARM template creation and configuration. In this quickstart, you use the extension to create an ARM template from scratch. While doing so, you experience the extension's capabilities, such as ARM template snippets, validation, completions, and parameter file support.
To complete this quickstart, you need [Visual Studio Code](https://code.visualstudio.com/), with the [Azure Resource Manager tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools) installed. You also need either the [Azure CLI](/cli/azure/) or the [Azure PowerShell module](/powershell/azure/new-azureps-module-az) installed and authenticated.
azure-signalr Concept Upstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-upstream.md
The upstream URL is not encrypted at rest. If you have any sensitive informa
2. Grant secret read permission for the managed identity in the Access policies in the Key Vault. See [Assign a Key Vault access policy using the Azure portal](../key-vault/general/assign-access-policy-portal.md)
-3. Replace your sensitive text with the syntax `{@Microsoft.KeyVault(SecretUri=<secret-identity>)}` in the Upstream URL Pattern.
+3. Replace your sensitive text with the below syntax in the Upstream URL Pattern:
+ ```
+ {@Microsoft.KeyVault(SecretUri=<secret-identity>)}
+ ```
+ `<secret-identity>` is the full data-plane URI of a secret in Key Vault, optionally including a version, e.g., https://myvault.vault.azure.net/secrets/mysecret/ or https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931
+
+ For example, a complete reference would look like the following:
+ ```
+ @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
+ ```
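To make the placement concrete, a full Upstream URL Pattern that embeds the reference might look like the following sketch; the host name, path, and the `code` query parameter are assumptions for illustration, not values from this article:

```
https://contoso.example.com/api/{hub}/{category}/{event}?code={@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)}
```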
> [!NOTE]
-> The secret content only rereads when you change the Upstream settings or change the managed identity. Make sure you have granted secret read permission to the managed identity before using the Key Vault secret reference.
+> The service rereads the secret content every 30 minutes or whenever the upstream settings or managed identity changes. Try updating the Upstream settings if you'd like an immediate update when the Key Vault content is changed.
### Rule settings
azure-signalr Signalr Quickstart Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-dotnet-core.md
Ready to start?
## Prerequisites * Install the [.NET Core SDK](https://dotnet.microsoft.com/download).
-* Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository.
+* Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnetcore).
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Create an ASP.NET Core web app
-In this section, you use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to create an ASP.NET Core MVC web app project. The advantage of using the .NET Core CLI over Visual Studio is that it's available across the Windows, macOS, and Linux platforms.
+In this section, you use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to create an ASP.NET Core MVC web app project. The advantage of using the .NET Core CLI over Visual Studio is that it's available across the Windows, macOS, and Linux platforms.
1. Create a folder for your project. This quickstart uses the *E:\Testing\chattest* folder.
In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app-
dotnet add package Microsoft.Azure.SignalR ```
-2. Run the following command to restore packages for your project:
+1. Run the following command to restore packages for your project:
```dotnetcli dotnet restore ```
-3. Add a secret named *Azure:SignalR:ConnectionString* to Secret Manager.
+1. Prepare the Secret Manager for use with this project.
+
+ ````dotnetcli
+ dotnet user-secrets init
+ ````
+
+1. Add a secret named *Azure:SignalR:ConnectionString* to Secret Manager.
This secret will contain the connection string to access your SignalR Service resource. *Azure:SignalR:ConnectionString* is the default configuration key that SignalR looks for to establish a connection. Replace the value in the following command with the connection string for your SignalR Service resource.
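A typical invocation looks like the following sketch; the placeholder is illustrative, so replace it with your SignalR Service connection string:

```dotnetcli
dotnet user-secrets set Azure:SignalR:ConnectionString "<your-connection-string>"
```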
In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app-
This secret is accessed with the Configuration API. A colon (:) works in the configuration name with the Configuration API on all supported platforms. See [Configuration by environment](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider). -
-4. Open *Startup.cs* and update the `ConfigureServices` method to use Azure SignalR Service by calling the `AddSignalR()` and `AddAzureSignalR()` methods:
+1. Open *Startup.cs* and update the `ConfigureServices` method to use Azure SignalR Service by calling the `AddSignalR()` and `AddAzureSignalR()` methods:
```csharp public void ConfigureServices(IServiceCollection services)
In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app-
Not passing a parameter to `AddAzureSignalR()` causes this code to use the default configuration key for the SignalR Service resource connection string. The default configuration key is *Azure:SignalR:ConnectionString*.
-5. In *Startup.cs*, update the `Configure` method by replacing it with the following code.
+1. In *Startup.cs*, update the `Configure` method by replacing it with the following code.
```csharp public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
In this section, you'll add a development runtime environment for ASP.NET Core.
}
```

## Build and run the app locally

1. To build the app by using the .NET Core CLI, run the following command in the command shell:
In this section, you'll add a development runtime environment for ASP.NET Core.
![Example of an Azure SignalR group chat](media/signalr-quickstart-dotnet-core/signalr-quickstart-complete-local.png)

## Clean up resources

If you'll continue to the next tutorial, you can keep the resources created in this quickstart and reuse them.
azure-sql-edge Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/backup-restore.md
Title: Back up and restore databases - Azure SQL Edge description: Learn about backup and restore capabilities in Azure SQL Edge.
-keywords:
-+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020 + # Back up and restore databases in Azure SQL Edge
azure-sql-edge Configure Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/configure-replication.md
Title: Configure replication to Azure SQL Edge
+ Title: Configure replication to Azure SQL Edge
description: Learn about configuring replication to Azure SQL Edge.
-keywords:
-+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020+ # Configure replication to Azure SQL Edge
azure-sql-edge Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/configure.md
Title: Configure Azure SQL Edge description: Learn about configuring Azure SQL Edge.
-keywords:
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+ # Configure Azure SQL Edge
azure-sql-edge Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/connect.md
Title: Connect and query Azure SQL Edge description: Learn how to connect to and query Azure SQL Edge.
-keywords:
-+++ Last updated : 07/25/2020 --- Previously updated : 07/25/2020+ # Connect and query Azure SQL Edge
azure-sql-edge Create External Stream Transact Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/create-external-stream-transact-sql.md
Title: CREATE EXTERNAL STREAM (Transact-SQL) - Azure SQL Edge description: Learn about the CREATE EXTERNAL STREAM statement in Azure SQL Edge
-keywords:
-+++ Last updated : 07/27/2020 --- Previously updated : 07/27/2020+ # CREATE EXTERNAL STREAM (Transact-SQL)
azure-sql-edge Create Stream Analytics Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/create-stream-analytics-job.md
Title: Create a T-SQL streaming job in Azure SQL Edge
-description: Learn about creating Stream Analytics jobs in Azure SQL Edge.
-keywords:
-
+ Title: Create a T-SQL streaming job in Azure SQL Edge
+description: Learn about creating Stream Analytics jobs in Azure SQL Edge.
+++ Last updated : 07/27/2020 --- Previously updated : 07/27/2020+ # Create a data streaming job in Azure SQL Edge
azure-sql-edge Data Retention Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/data-retention-cleanup.md
Title: Manage historical data with retention policy - Azure SQL Edge description: Learn how to manage historical data with retention policy in Azure SQL Edge
-keywords: SQL Edge, data retention
-+++ Last updated : 09/04/2020 --- Previously updated : 09/04/2020
+keywords:
+ - SQL Edge
+ - data retention
+ # Manage historical data with retention policy
azure-sql-edge Data Retention Enable Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/data-retention-enable-disable.md
Title: Enable and disable data retention policies - Azure SQL Edge description: Learn how to enable and disable data retention policies in Azure SQL Edge
-keywords: SQL Edge, data retention
-+++ Last updated : 09/04/2020 --- Previously updated : 09/04/2020
+keywords:
+ - SQL Edge
+ - data retention
+ # Enable and disable data retention policies
azure-sql-edge Data Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/data-retention-overview.md
Title: Data retention policy overview - Azure SQL Edge description: Learn about the data retention policy in Azure SQL Edge
-keywords: SQL Edge, data retention
-+++ Last updated : 09/04/2020 --- Previously updated : 09/04/2020
+keywords:
+ - SQL Edge
+ - data retention
+ # Data retention overview
azure-sql-edge Date Bucket Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/date-bucket-tsql.md
Title: Date_Bucket (Transact-SQL) - Azure SQL Edge description: Learn about using Date_Bucket in Azure SQL Edge
-keywords: Date_Bucket, SQL Edge
-+++ Last updated : 09/03/2020 --- Previously updated : 09/03/2020
+keywords:
+ - Date_Bucket
+ - SQL Edge
+ # Date_Bucket (Transact-SQL)
azure-sql-edge Deploy Dacpac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-dacpac.md
Title: Using SQL Database DACPAC and BACPAC packages - Azure SQL Edge description: Learn about using dacpacs and bacpacs in Azure SQL Edge
-keywords: SQL Edge, sqlpackage
-+++ Last updated : 09/03/2020 --- Previously updated : 09/03/2020
+keywords:
+ - SQL Edge
+ - sqlpackage
+ # SQL Database DACPAC and BACPAC packages in SQL Edge
azure-sql-edge Deploy Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-kubernetes.md
Title: Deploy an Azure SQL Edge container in Kubernetes - Azure SQL Edge description: Learn about deploying an Azure SQL Edge container in Kubernetes
-keywords: SQL Edge, container, kubernetes
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020
+keywords:
+ - SQL Edge
+ - container
+ - kubernetes
+ # Deploy an Azure SQL Edge container in Kubernetes
azure-sql-edge Deploy Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-onnx.md
Title: Deploy and make predictions with ONNX description: Learn how to train a model, convert it to ONNX, deploy it to Azure SQL Edge, and then run native PREDICT on data using the uploaded ONNX model.
-keywords: deploy SQL Edge
- -+ Last updated 06/21/2022
+ms.technology: machine-learning
+
+keywords: deploy SQL Edge
# Deploy and make predictions with an ONNX model and SQL machine learning
azure-sql-edge Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-portal.md
Title: Deploy Azure SQL Edge using the Azure portal description: Learn how to deploy Azure SQL Edge using the Azure portal
-keywords: deploy SQL Edge
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020
+keywords: deploy SQL Edge
+ # Deploy Azure SQL Edge
azure-sql-edge Disconnected Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/disconnected-deployment.md
Title: Deploy Azure SQL Edge with Docker - Azure SQL Edge description: Learn about deploying Azure SQL Edge with Docker
-keywords: SQL Edge, container, docker
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020
+keywords:
+ - SQL Edge
+ - container
+ - docker
+ # Deploy Azure SQL Edge with Docker
azure-sql-edge Drop External Stream Transact Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/drop-external-stream-transact-sql.md
Title: DROP EXTERNAL STREAM (Transact-SQL) - Azure SQL Edge
-description: Learn about the DROP EXTERNAL STREAM statement in Azure SQL Edge
-keywords:
-
+description: Learn about the DROP EXTERNAL STREAM statement in Azure SQL Edge
+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020+ # DROP EXTERNAL STREAM (Transact-SQL)
azure-sql-edge Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/features.md
Title: Supported features of Azure SQL Edge
+ Title: Supported features of Azure SQL Edge
description: Learn about details of features supported by Azure SQL Edge.
-keywords: introduction to SQL Edge, what is SQL Edge, SQL Edge overview
-+++ Last updated : 09/03/2020 --- Previously updated : 09/03/2020
+keywords:
+ - introduction to SQL Edge
+ - what is SQL Edge
+ - SQL Edge overview
+ # Supported features of Azure SQL Edge
azure-sql-edge High Availability Sql Edge Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/high-availability-sql-edge-containers.md
Title: High availability for Azure SQL Edge containers - Azure SQL Edge description: Learn about high availability for Azure SQL Edge containers
-keywords: SQL Edge, containers, high availability
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020
+keywords:
+ - SQL Edge
+ - containers
+ - high availability
+ # High availability for Azure SQL Edge containers
azure-sql-edge Imputing Missing Values https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/imputing-missing-values.md
Title: Filling time gaps and imputing missing values - Azure SQL Edge description: Learn about filling time gaps and imputing missing values in Azure SQL Edge
-keywords: SQL Edge, timeseries
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020
+keywords:
+ - SQL Edge
+ - timeseries
+ # Filling time gaps and imputing missing values
azure-sql-edge Onnx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/onnx-overview.md
Title: Machine learning and AI with ONNX in Azure SQL Edge description: Machine learning in Azure SQL Edge supports models in the Open Neural Network Exchange (ONNX) format. ONNX is an open format you can use to interchange models between various machine learning frameworks and tools.
-keywords: deploy SQL Edge
---- -+ Last updated 06/21/2022+++
+keywords: deploy SQL Edge
+ # Machine learning and AI with ONNX in SQL Edge
azure-sql-edge Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/overview.md
Title: What is Azure SQL Edge?
+ Title: What is Azure SQL Edge?
description: Learn about Azure SQL Edge
-keywords: introduction to SQL Edge,what is SQL Edge, SQL Edge overview
-+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020
+keywords:
+ - introduction to SQL Edge
+ - what is SQL Edge
+ - SQL Edge overview
+ # What is Azure SQL Edge?
azure-sql-edge Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/performance-best-practices.md
Title: Performance best practices and configuration guidelines - Azure SQL Edge description: Learn about performance best practices and configuration guidelines in Azure SQL Edge
-keywords: SQL Edge, data retention
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020
+keywords:
+ - SQL Edge
+ - data retention
+ # Performance best practices and configuration guidelines
azure-sql-edge Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/release-notes.md
Title: Release notes for Azure SQL Edge
-description: Release notes detailing what's new or what has changed in the Azure SQL Edge images.
-keywords: release notes SQL Edge
----
+ Title: Release notes for Azure SQL Edge
+description: Release notes detailing what's new or what has changed in the Azure SQL Edge images.
--++ Last updated 6/21/2022++
+keywords: release notes SQL Edge
+ # Azure SQL Edge release notes
azure-sql-edge Resources Partners Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/resources-partners-security.md
Title: External partners for security solutions for Azure SQL Edge
-description: Providing details about external partners who are working with Azure SQL Edge
-keywords: security partners Azure SQL Edge
----
+description: Providing details about external partners who are working with Azure SQL Edge
--++ Last updated 10/09/2020++
+keywords: security partners Azure SQL Edge
+ # Azure SQL Edge security partners
azure-sql-edge Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/security-overview.md
Title: Secure Azure SQL Edge
+ Title: Secure Azure SQL Edge
description: Learn about security in Azure SQL Edge
-keywords: SQL Edge, security
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020
+keywords:
+ - SQL Edge
+ - security
+ # Securing Azure SQL Edge
azure-sql-edge Stream Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/stream-data.md
Title: Data streaming in Azure SQL Edge description: Learn about data streaming in Azure SQL Edge.
-keywords:
-+++ Last updated : 07/08/2022 --- Previously updated : 07/08/2022+ # Data streaming in Azure SQL Edge
azure-sql-edge Streaming Catalog Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/streaming-catalog-views.md
Title: Streaming catalog views (Transact-SQL) - Azure SQL Edge description: Learn about the available streaming catalog views and dynamic management views in Azure SQL Edge
-keywords: sys.external_streams, SQL Edge
-+++ Last updated : 05/19/2019 --- Previously updated : 05/19/2019
+keywords:
+ - sys.external_streams
+ - SQL Edge
+ # Streaming Catalog Views (Transact-SQL)
azure-sql-edge Sys External Job Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/sys-external-job-streams.md
Title: sys.external_job_streams (Transact-SQL) - Azure SQL Edge description: Learn about using sys.external_job_streams in Azure SQL Edge
-keywords: sys.external_job_streams, SQL Edge
-+++ Last updated : 05/19/2019 --- Previously updated : 05/19/2019
+keywords:
+ - sys.external_job_streams
+ - SQL Edge
+ # sys.external_job_streams (Transact-SQL)
azure-sql-edge Sys External Streaming Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/sys-external-streaming-jobs.md
Title: sys.external_streaming_jobs (Transact-SQL) - Azure SQL Edge description: Learn about using sys.external_streaming_jobs in Azure SQL Edge
-keywords: sys.external_streaming_jobs, SQL Edge
-+++ Last updated : 05/19/2019 --- Previously updated : 05/19/2019
+keywords:
+ - sys.external_streaming_jobs
+ - SQL Edge
+ # sys.external_streaming_jobs (Transact-SQL)
azure-sql-edge Sys External Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/sys-external-streams.md
Title: sys.external_streams (Transact-SQL) - Azure SQL Edge description: Learn about using sys.external_streams in Azure SQL Edge
-keywords: sys.external_streams, SQL Edge
-+++ Last updated : 05/19/2019 --- Previously updated : 05/19/2019
+keywords:
+ - sys.external_streams
+ - SQL Edge
+ # sys.external_streams (Transact-SQL)
azure-sql-edge Sys Sp Cleanup Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/sys-sp-cleanup-data-retention.md
Title: sys.sp_cleanup_data_retention (Transact-SQL) - Azure SQL Edge description: Learn about using sys.sp_cleanup_data_retention (Transact-SQL) in Azure SQL Edge
-keywords: sys.sp_cleanup_data_retention (Transact-SQL), SQL Edge
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020
+keywords:
+ - sys.sp_cleanup_data_retention (Transact-SQL)
+ - SQL Edge
+ # sys.sp_cleanup_data_retention (Transact-SQL)
azure-sql-edge Track Data Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/track-data-changes.md
Title: Track data changes in Azure SQL Edge description: Learn about change tracking and change data capture in Azure SQL Edge.
-keywords:
-+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020+ # Track data changes in Azure SQL Edge
azure-sql-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/troubleshoot.md
Title: Troubleshooting Azure SQL Edge deployments description: Learn about possible errors when deploying Azure SQL Edge
-keywords: SQL Edge, troubleshooting, deployment errors
-+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020
+keywords:
+ - SQL Edge
+ - troubleshooting
+ - deployment errors
+ # Troubleshooting Azure SQL Edge deployments
azure-sql-edge Tutorial Deploy Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-deploy-azure-resources.md
Title: Set up resources for deploying an ML model in Azure SQL Edge description: In part one of this three-part Azure SQL Edge tutorial for predicting iron ore impurities, you'll install the prerequisite software and set up required Azure resources for deploying a machine learning model in Azure SQL Edge.
-keywords:
--- --++ Last updated 05/19/2020+++ # Install software and set up resources for the tutorial
azure-sql-edge Tutorial Renewable Energy Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-renewable-energy-demo.md
Title: Deploying Azure SQL Edge on turbines in a Contoso wind farm description: In this tutorial, you'll use Azure SQL Edge for wake-detection on the turbines in a Contoso wind farm.
-keywords:
--- --++ Last updated 12/18/2020+++ # Using Azure SQL Edge to build smarter renewable resources
azure-sql-edge Tutorial Run Ml Model On Sql Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-run-ml-model-on-sql-edge.md
Title: Deploy ML model on Azure SQL Edge using ONNX
+ Title: Deploy ML model on Azure SQL Edge using ONNX
description: In part three of this three-part Azure SQL Edge tutorial for predicting iron ore impurities, you'll run the ONNX machine learning models on SQL Edge.
-keywords:
--- --++ Last updated 05/19/2020+++ # Deploy ML model on Azure SQL Edge using ONNX
azure-sql-edge Tutorial Set Up Iot Edge Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-set-up-iot-edge-modules.md
Title: Set up IoT Edge modules in Azure SQL Edge description: In part two of this three-part Azure SQL Edge tutorial for predicting iron ore impurities, you'll set up IoT Edge modules and connections.
-keywords:
--- --++ Last updated 09/22/2020+++ # Set up IoT Edge modules and connections
azure-sql-edge Tutorial Sync Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-sync-data-factory.md
Title: Sync data from Azure SQL Edge by using Azure Data Factory description: Learn about syncing data between Azure SQL Edge and Azure Blob storage
-keywords: SQL Edge,sync data from SQL Edge, SQL Edge data factory
-+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020
+keywords:
+ - SQL Edge
+ - sync data from SQL Edge
+ - SQL Edge data factory
+ # Tutorial: Sync data from SQL Edge to Azure Blob storage by using Azure Data Factory
azure-sql-edge Tutorial Sync Data Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-sync-data-sync.md
Title: Sync data from Azure SQL Edge by using SQL Data Sync description: Learn about syncing data from Azure SQL Edge by using Azure SQL Data Sync
-keywords: SQL Edge,sync data from SQL Edge, SQL Edge data sync
-+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020
+keywords:
+ - SQL Edge
+ - sync data from SQL Edge
+ - SQL Edge data sync
+ # Tutorial: Sync data from SQL Edge to Azure SQL Database by using SQL Data Sync
azure-sql-edge Usage And Diagnostics Data Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/usage-and-diagnostics-data-configuration.md
Title: Azure SQL Edge usage and diagnostics data configuration description: Learn how to configure usage and diagnostics data in Azure SQL Edge.-+++ Last updated : 08/04/2020 --- Previously updated : 08/04/2020+ # Azure SQL Edge usage and diagnostics data configuration
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
If your storage account is behind a firewall, see [storage account that is behin
:::image type="content" alt-text="Screenshot that shows how to use the classic API." source="./media/create-account/enable-classic-api.png":::
- When creating a storage account for your Media Services account, select **StorageV2** for account kind and **Geo-redundant** (GRS) for replication fields.
-
- :::image type="content" alt-text="Screenshot that shows how to specify a storage account." source="./media/create-account/create-new-ams-account.png":::
- > [!NOTE] > Make sure to write down the Media Services resource and account names. 1. Before you can play your videos in the Azure Video Indexer web app, you must start the default **Streaming Endpoint** of the new Media Services account.
The following Azure Media Services related considerations apply:
* If you plan to connect to an existing Media Services account, make sure the Media Services account was created with the classic APIs. ![Media Services classic API](./media/create-account/enable-classic-api.png)
-* If you connect to an existing Media Services account, Azure Video Indexer doesn't change the existing media **Reserved Units** configuration.
-
- You might need to adjust the type and number of Media Reserved Units according to your planned load. Keep in mind that if your load is high and you don't have enough units or speed, video processing can result in timeout failures.
* If you connect to a new Media Services account, Azure Video Indexer automatically starts the default **Streaming Endpoint** in it: ![Media Services streaming endpoint](./media/create-account/ams-streaming-endpoint.png)
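If you prefer to script this step instead of using the portal, a minimal Azure CLI sketch for starting the default streaming endpoint might look like the following (the resource group and Media Services account names are placeholders):

```bash
# Start the default streaming endpoint of the Media Services account that is
# connected to Azure Video Indexer. The resource names below are placeholders.
az ams streaming-endpoint start \
  --resource-group myResourceGroup \
  --account-name myMediaServicesAccount \
  --name default
```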
To create a paid account in Azure Government, follow the instructions in [Create
### Limitations of Azure Video Indexer on Azure Government
-* No manual content moderation available in Government cloud.
+* Only paid accounts (ARM or classic) are available on Azure Government.
+* No manual content moderation available in Government cloud.
In the public cloud when content is deemed offensive based on a content moderation, the customer can ask for a human to look at that content and potentially revert that decision.
-* No trial accounts.
* Bing description - in the Gov cloud, we won't present a description of identified celebrities and named entities. This is a UI capability only. ## Clean up resources
azure-vmware Backup Azure Netapp Files Datastores Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/backup-azure-netapp-files-datastores-vms.md
- Title: Back up Azure NetApp Files datastores and VMs using Cloud Backup
-description: Learn how to back up datastores and Virtual Machines to the cloud.
-- Previously updated : 08/12/2022--
-# Back up Azure NetApp Files datastores and VMs using Cloud Backup for Virtual Machines
-
-From the VMware vSphere client, you can back up datastores and Virtual Machines (VMs) to the cloud.
-
-## Configure subscriptions
-
-Before you back up your Azure NetApp Files datastores, you must add your Azure and Azure NetApp Files cloud subscriptions.
-
-### Add Azure cloud subscription
-
-1. Sign in to the VMware vSphere client.
-2. From the left navigation, select **Cloud Backup for Virtual Machines**.
-3. Select the **Settings** page and then select the **Cloud Subscription** tab.
-4. Select **Add** and then provide the required values from your Azure subscription.
-
-### Add Azure NetApp Files cloud subscription account
-
-1. From the left navigation, select **Cloud Backup for Virtual Machines**.
-2. Select **Storage Systems**.
-3. Select **Add** to add the Azure NetApp Files cloud subscription account details.
-4. Provide the required values and then select **Add** to save your settings.
-
-## Create a backup policy
-
-You must create backup policies before you can use Cloud Backup for Virtual Machines to back up Azure NetApp Files datastores and VMs.
-
-1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Policies**.
-2. On the **Policies** page, select **Create** to initiate the wizard.
-3. On the **New Backup Policy** page, select the vCenter Server that will use the policy, then enter the policy name and a description.
-* **Only alphanumeric characters and underscores (_) are supported in VM, datastore, cluster, policy, backup, or resource group names.** Other special characters are not supported.
-4. Specify the retention settings.
- The maximum retention value is 255 backups. If the **"Backups to keep"** option is selected during the backup operation, Cloud Backup for Virtual Machines will retain backups with the specified retention count and delete the backups that exceed the retention count.
-5. Specify the frequency settings.
- The policy specifies the backup frequency only. The specific protection schedule for backing up is defined in the resource group. Therefore, two or more resource groups can share the same policy and backup frequency but have different backup schedules.
-6. **Optional:** In the **Advanced** fields, select the fields that are needed. The Advanced field details are listed in the following table.
-
- | Field | Action |
- | - | - |
- | VM consistency | Check this box to pause the VMs and create a VMware snapshot each time the backup job runs. <br> When you check the VM consistency box, backup operations might take longer and require more storage space. In this scenario, the VMs are first paused, then VMware performs a VM consistent snapshot. Cloud Backup for Virtual Machines then performs its backup operation, and then VM operations are resumed. <br> VM guest memory is not included in VM consistency snapshots. |
- | Include datastores with independent disks | Check this box to include any datastores with independent disks that contain temporary data in your backup. |
- | Scripts | Enter the fully qualified path of the prescript or postscript that you want the Cloud Backup for Virtual Machines to run before or after backup operations. For example, you can run a script to update Simple Network Management Protocol (SNMP) traps, automate alerts, and send logs. The script path is validated at the time the script is executed. <br> **NOTE**: Prescripts and postscripts must be located on the virtual appliance VM. To enter multiple scripts, press **Enter** after each script path to list each script on a separate line. The semicolon (;) character is not allowed. |
-7. Select **Add** to save your policy.
- You can verify that the policy has been created successfully and review the policy configuration by selecting the policy in the **Policies** page.
-
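Referring back to the **Scripts** field in step 6 of the policy wizard, a prescript or postscript is simply an executable stored on the virtual appliance VM. A minimal postscript sketch follows; the script path and log file are hypothetical, not part of the product:

```bash
#!/bin/bash
# Hypothetical postscript stored on the virtual appliance VM, for example at
# /opt/scripts/post-backup-notify.sh. It appends a timestamped line to a log
# file after each backup run; extend it to send SNMP traps or other alerts.
echo "$(date --iso-8601=seconds) backup job finished on $(hostname)" >> /var/log/backup-postscript.log
```

You would then enter the script's fully qualified path (for example, `/opt/scripts/post-backup-notify.sh`) on its own line in the **Scripts** field.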
-## Resource groups
-
-A resource group is the container for VMs and datastores that you want to protect.
-
-Do not add VMs in an inaccessible state to a resource group. Although a resource group can contain a VM in an inaccessible state, the inaccessible state will cause backups for the resource group to fail.
-
-### Considerations for resource groups
-
-You can add or remove resources from a resource group at any time.
-* **Back up a single resource:** To back up a single resource (for example, a single VM), you must create a resource group that contains that single resource.
-* **Back up multiple resources:** To back up multiple resources, you must create a resource group that contains multiple resources.
-* **Optimize snapshot copies:** To optimize snapshot copies, group the VMs and datastores that are associated with the same volume into one resource group.
-* **Backup policies:** Although it's possible to create a resource group without a backup policy, you can only perform scheduled data protection operations when at least one policy is attached to the resource group. You can use an existing policy, or you can create a new policy while creating a resource group.
-* **Compatibility checks:** Cloud Backup for VMs performs compatibility checks when you create a resource group. Reasons for incompatibility might be:
- * Virtual machine disks (VMDKs) are on unsupported storage.
- * A shared PCI device is attached to a VM.
- * You have not added the Azure subscription account.
-
-### Create a resource group using the wizard
-
-1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Resource Groups**. Then select **+ Create** to start the wizard.
-
- :::image type="content" source="./media/cloud-backup/vsphere-create-resource-group.png" alt-text="Screenshot of the vSphere Client Resource Group interface shows a red box highlights a button with a green plus sign that reads Create, instructing you to select this button." lightbox="./media/cloud-backup/vsphere-create-resource-group.png":::
-
-1. On the **General Info & Notification** page in the wizard, enter the required values.
-1. On the **Resource** page, do the following:
-
- | Field | Action |
- | -- | -- |
- | Scope | Select the type of resource you want to protect: <ul><li>Datastores</li><li>Virtual Machines</li></ul> |
- | Datacenter | Navigate to the VMs or datastores |
- | Available entities | Select the resources you want to protect. Then select **>** to move your selections to the Selected entities list. |
-
- When you select **Next**, the system first checks that Cloud Backup for Virtual Machines manages and is compatible with the storage on which the selected resources are located.
-
- >[!IMPORTANT]
- >If you receive the message `selected <resource-name> is not Cloud Backup for Virtual Machines compatible` then a selected resource is not compatible with Cloud Backup for Virtual Machines.
-
-1. On the **Spanning disks** page, select an option for VMs with multiple VMDKs across multiple datastores:
- * Always exclude all spanning datastores
- (This is the default option for datastores)
- * Always include all spanning datastores
- (This is the default for VMs)
- * Manually select the spanning datastores to be included
-1. On the **Policies** page, select or create one or more backup policies.
- * To use **an existing policy**, select one or more policies from the list.
- * To **create a new policy**:
- 1. Select **+ Create**.
- 1. Complete the New Backup Policy wizard to return to the Create Resource Group wizard.
-1. On the **Schedules** page, configure the backup schedule for each selected policy.
- In the **Starting** field, enter a date and time other than zero. The date must be in the format day/month/year. You must fill in each field. The Cloud Backup for Virtual Machines creates schedules in the time zone in which the Cloud Backup for Virtual Machines is deployed. You can modify the time zone by using the Cloud Backup for Virtual Machines GUI.
-
- :::image type="content" source="./media/cloud-backup/backup-schedules.png" alt-text="A screenshot of the Backup schedules interface showing an hourly backup beginning at 10:22 a.m. on April 26, 2022." lightbox="./media/cloud-backup/backup-schedules.png":::
-1. Review the summary. If you need to change any information, you can return to any page in the wizard to do so. Select **Finish** to save your settings.
-
- After you select **Finish**, the new resource group will be added to the resource group list.
-
- If the pause operation fails for any of the VMs in the backup, then the backup is marked as not VM-consistent even if the policy selected has VM consistency selected. In this case, it's possible that some of the VMs were successfully paused.
-
-### Other ways to create a resource group
-
-In addition to using the wizard, you can:
-* **Create a resource group for a single VM:**
- 1. Select **Menu** > **Hosts and Clusters**.
- 1. Right-click the Virtual Machine you want to create a resource group for and select **Cloud Backup for Virtual Machines**. Select **+ Create**.
-* **Create a resource group for a single datastore:**
- 1. Select **Menu** > **Hosts and Clusters**.
- 1. Right-click a datastore, then select **Cloud Backup for Virtual Machines**. Select **+ Create**.
-
-## Back up resource groups
-
-Backup operations are performed on all the resources defined in a resource group. If a resource group has a policy attached and a schedule configured, backups occur automatically according to the schedule.
-
-### Prerequisites
-
-* You must have created a resource group with a policy attached.
- Do not start an on-demand backup job when a job to back up the Cloud Backup for Virtual Machines MySQL database is already running. Use the maintenance console to see the configured backup schedule for the MySQL database.
-
-### Back up resource groups on demand
-
-1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Resource Groups**, then select a resource group. Select **Run Now** to start the backup.
-
- :::image type="content" source="./media/cloud-backup/resource-groups-run-now.png" alt-text="Image of the vSphere Client Resource Group interface. At the top left, a red box highlights a green circular button with a white arrow inside next to text reading Run Now, instructing you to select this button." lightbox="./media/cloud-backup/resource-groups-run-now.png":::
-
- 1. If the resource group has multiple policies configured, then in the **Backup Now** dialog box, select the policy you want to use for this backup operation.
-1. Select **OK** to initiate the backup.
- >[!NOTE]
- >You can't rename a backup once it is created.
-1. **Optional:** Monitor the operation progress by selecting **Recent Tasks** at the bottom of the window or on the dashboard Job Monitor for more details.
- If the pause operation fails for any of the VMs in the backup, then the backup completes with a warning and is marked as not VM-consistent even if the selected policy has VM consistency selected. In this case, it is possible that some of the VMs were successfully paused. In the job monitor, the details for the failed VMs will show the pause operation as failed.
-
-## Next steps
-
-* [Restore VMs using Cloud Backup for Virtual Machines](restore-azure-netapp-files-vms.md)
azure-vmware Install Cloud Backup Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-cloud-backup-virtual-machines.md
- Title: Install Cloud Backup for Virtual Machines
-description: Cloud Backup for Virtual Machines is a plug-in installed in the Azure VMware Solution and enables you to back up and restore Azure NetApp Files datastores and virtual machines.
-- Previously updated : 08/10/2022--
-# Install Cloud Backup for Virtual Machines
-
-Cloud Backup for Virtual Machines is a plug-in installed in the Azure VMware Solution and enables you to back up and restore Azure NetApp Files datastores and virtual machines (VMs).
-
-Use Cloud Backup for VMs to:
-* Build and securely connect both legacy and cloud-native workloads across environments and unify operations
-* Provision and resize datastore volumes right from the Azure portal
-* Take VM consistent snapshots for quick checkpoints
-* Quickly recover VMs
-
-## Prerequisites
-
-Before you can install Cloud Backup for Virtual Machines, you need to create an Azure service principal with the required Azure NetApp Files privileges. If you've already created one, you can skip to the installation steps below.
-
-## Install Cloud Backup for Virtual Machines using the Azure portal
-
-You'll need to install Cloud Backup for Virtual Machines through the Azure portal as an add-on.
-
-1. Sign in to your Azure VMware Solution private cloud.
-1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Install-NetAppCBSA**.
-
- :::image type="content" source="./media/cloud-backup/run-command.png" alt-text="Screenshot of the Azure interface that shows the configure signal logic step with a backdrop of the Create alert rule page." lightbox="./media/cloud-backup/run-command.png":::
-
-1. Provide the required values, then select **Run**.
-
- :::image type="content" source="./media/cloud-backup/run-commands-fields.png" alt-text="Image of the Run Command fields which are described in the table below." lightbox="./media/cloud-backup/run-commands-fields.png":::
-
- | Field | Value |
- | | -- |
- | ApplianceVirtualMachineName | VM name for the appliance. |
- | EsxiCluster | Destination ESXi cluster name to be used for deploying the appliance. |
- | VmDatastore | Datastore to be used for the appliance. |
- | NetworkMapping | Destination network to be used for the appliance. |
- | ApplianceNetworkName | Network name to be used for the appliance. |
- | ApplianceIPAddress | IPv4 address to be used for the appliance. |
- | Netmask | Subnet mask. |
- | Gateway | Gateway IP address. |
- | PrimaryDNS | Primary DNS server IP address. |
- | ApplianceUser | User Account for hosting API services in the appliance. |
- | AppliancePassword | Password of the user hosting API services in the appliance. |
- | MaintenanceUserPassword | Password of the appliance maintenance user. |
-
- >[!IMPORTANT]
- >You can also install Cloud Backup for Virtual Machines using DHCP by running the package `NetAppCBSApplianceUsingDHCP`. If you install Cloud Backup for Virtual Machines using DHCP, you don't need to provide the values for the PrimaryDNS, Gateway, Netmask, and ApplianceIPAddress fields. These values will be automatically generated.
-
-1. Check **Notifications** or the **Run Execution Status** tab to see the progress. For more information about the status of the execution, see [Run command in Azure VMware Solution](concepts-run-command.md).
-
-Upon successful execution, the Cloud Backup for Virtual Machines will automatically be displayed in the VMware vSphere client.
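Run commands can also be invoked outside the portal. The following is a rough sketch using `az vmware script-execution create` from the `vmware` CLI extension; the cmdlet resource ID, parameter names, and all values are placeholders and aren't verified against the NetApp.CBS.AVS package, so treat it only as an illustration of the Run command mechanism:

```bash
# Placeholder sketch: invoke a Run command cmdlet from the Azure CLI instead of the portal.
# Replace <script-cmdlet-resource-id> with the full resource ID of the cmdlet shown
# under the NetApp.CBS.AVS package in your private cloud.
az vmware script-execution create \
  --resource-group myResourceGroup \
  --private-cloud myPrivateCloud \
  --name installCloudBackup01 \
  --script-cmdlet-id "<script-cmdlet-resource-id>" \
  --timeout P0Y0M0DT0H30M0S \
  --parameter name=ApplianceVirtualMachineName type=Value value=cbs-appliance-01 \
  --hidden-parameter name=AppliancePassword type=SecureValue secureValue=<password>
```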
-
-## Upgrade Cloud Backup for Virtual Machines
-
-You can execute this run command to upgrade the Cloud Backup for Virtual Machines to the next available version.
-
->[!IMPORTANT]
-> Before you initiate the upgrade, you must:
-> * Back up the MySQL database of Cloud Backup for Virtual Machines.
-> * Take snapshot copies of Cloud Backup for Virtual Machines.
-
-1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Invoke-UpgradeNetAppCBSAppliance**.
-
-1. Provide the required values, and then select **Run**.
-
-1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.
-
-## Uninstall Cloud Backup for Virtual Machines
-
-You can execute the run command to uninstall Cloud Backup for Virtual Machines.
-
-> [!IMPORTANT]
-> Before you initiate the uninstall, you must:
-> * Back up the MySQL database of Cloud Backup for Virtual Machines.
-> * Ensure that no other VMs carry the VMware vSphere tag `AVS_ANF_CLOUD_ADMIN_VM_TAG`. All VMs with this tag will be deleted when you uninstall.
-
-1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Uninstall-NetAppCBSAppliance**.
-
-1. Provide the required values, and then select **Run**.
-
-1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.
-
-## Change vCenter account password
-
-You can execute this command to reset the vCenter account password:
-
-1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Invoke-ResetNetAppCBSApplianceVCenterPasswordA**.
-
-1. Provide the required values, then select **Run**.
-
-1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.
-
-## Next steps
-
-* [Back up Azure NetApp Files datastores and VMs using Cloud Backup for Virtual Machines](backup-azure-netapp-files-datastores-vms.md)
-* [Restore VMs using Cloud Backup for Virtual Machines](restore-azure-netapp-files-vms.md)
azure-vmware Restore Azure Netapp Files Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/restore-azure-netapp-files-vms.md
- Title: Restore VMs using Cloud Backup for Virtual Machines
-description: Learn how to restore virtual machines from a cloud backup to the vCenter.
-- Previously updated : 08/12/2022--
-# Restore VMs using Cloud Backup for Virtual Machines
-
-Cloud Backup for Virtual Machines enables you to restore virtual machines (VMs) from the cloud backup to the vCenter.
-
-This topic covers how to:
-* Restore VMs from backups
-* Restore deleted VMs from backups
-* Restore VM disks (VMDKs) from backups
-* Recover the Cloud Backup for Virtual Machines internal database
-
-## Restore VMs from backups
-
-When you restore a VM, you can overwrite the existing content with the backup copy that you select or you can restore to a new VM.
-
-You can restore VMs to the original datastore mounted on the original ESXi host (this overwrites the original VM).
-
-## Prerequisites to restore VMs
-
-* A backup must exist: you must have created a backup of the VM using the Cloud Backup for Virtual Machines before you can restore the VM.
->[!NOTE]
->Restore operations cannot finish successfully if there are snapshots of the VM that were performed by software other than the Cloud Backup for Virtual Machines.
-* The VM must not be in transit: the VM that you want to restore must not be in a state of vMotion or Storage vMotion.
-* High Availability (HA) configuration errors: ensure there are no HA configuration errors displayed on the vCenter ESXi Host Summary screen before restoring backups to a different location.
-
-### Considerations for restoring VMs from backups
-
-* VM is unregistered and registered again: The restore operation for VMs unregisters the original VM, restores the VM from a backup snapshot, and registers the restored VM with the same name and configuration on the same ESXi server. You must manually add the VMs to resource groups after the restore.
-* Restoring datastores: You cannot restore a datastore, but you can restore any VM in the datastore.
-* VMware consistency snapshot failures for a VM: Even if a VMware consistency snapshot for a VM fails, the VM is nevertheless backed up. You can view the entities contained in the backup copy in the Restore wizard and use it for restore operations.
-
-### Restore a VM from a backup
-
-1. In the VMware vSphere web client GUI, select **Menu** in the toolbar. Select **Inventory** and then **Virtual Machines and Templates**.
-1. In the left navigation, right-click a Virtual Machine, then select **NetApp Cloud Backup**. In the drop-down list, select **Restore** to initiate the wizard.
-1. In the Restore wizard, on the **Select Backup** page, select the backup snapshot copy that you want to restore.
- > [!NOTE]
- > You can search for a specific backup name or a partial backup name, or you can filter the backup list by selecting the filter icon and then choosing a date and time range, selecting whether you want backups that contain VMware snapshots, whether you want mounted backups, and the location. Select **OK** to return to the wizard.
-1. On the **Select Scope** page, select **Entire Virtual Machine** in the **Restore scope** field, then select **Restore location**, and then enter the destination ESXi information where the backup should be mounted.
-1. When restoring partial backups, the restore operation skips the Select Scope page.
-1. Enable the **Restart VM** checkbox if you want the VM to be powered on after the restore operation.
-1. On the **Select Location** page, select the location for the primary or secondary location.
-1. Review the **Summary** page and then select **Finish**.
-1. **Optional:** Monitor the operation progress by selecting Recent Tasks at the bottom of the screen.
-1. Although the VMs are restored, they are not automatically added to their former resource groups. Therefore, you must manually add the restored VMs to the appropriate resource groups.
-
-## Restore deleted VMs from backups
-
-You can restore a deleted VM from a datastore primary or secondary backup to an ESXi host that you select. You can also restore VMs to the original datastore mounted on the original ESXi host, which creates a clone of the VM.
-
-## Prerequisites to restore deleted VMs
-
-* You must have added the Azure cloud Subscription account.
- The user account in vCenter must have the minimum vCenter privileges required for Cloud Backup for Virtual Machines.
-* A backup must exist.
- You must have created a backup of the VM using the Cloud Backup for Virtual Machines before you can restore the VMDKs on that VM.
-
-### Considerations for restoring deleted VMs
-
-You cannot restore a datastore, but you can restore any VM in the datastore.
-
-### Restore deleted VMs
-
-1. Select **Menu** and then select the **Inventory** option.
-1. Select a datastore, then select the **Configure** tab, then the **Backups in the Cloud Backup for Virtual Machines** section.
-1. Select (double-click) a backup to see a list of all VMs that are included in the backup.
-1. Select the deleted VM from the backup list and then select **Restore**.
-1. On the **Select Scope** page, select **Entire Virtual Machine** in the **Restore scope field**, then select the restore location, and then enter the destination ESXi information where the backup should be mounted.
-1. Enable the **Restart VM** checkbox if you want the VM to be powered on after the restore operation.
-1. On the **Select Location** page, select the location of the backup that you want to restore to.
-1. Review the **Summary** page, then select **Finish**.
-
-## Restore VMDKs from backups
-
-You can restore existing VMDKs or deleted or detached VMDKs from either a primary or secondary backup. You can restore one or more VMDKs on a VM to the same datastore.
-
-## Prerequisites to restore VMDKs
-
-* A backup must exist.
- You must have created a backup of the VM using the Cloud Backup for Virtual Machines.
-* The VM must not be in transit.
- The VM that you want to restore must not be in a state of vMotion or Storage vMotion.
-
-### Considerations for restoring VMDKs
-
-* If the VMDK is deleted or detached from the VM, then the restore operation attaches the VMDK to the VM.
-* Attach and restore operations connect VMDKs using the default SCSI controller. VMDKs that are attached to a VM with an NVMe controller are backed up, but for attach and restore operations they are connected back using a SCSI controller.
-
-### Restore VMDKs
-
-1. In the VMware vSphere web client GUI, select **Menu** in the toolbar. Select **Inventory**, then **Virtual Machines and Templates**.
-1. In the left navigation, right-click a VM and select **NetApp Cloud Backup**. In the drop-down list, select **Restore**.
-1. In the Restore wizard, on the **Select Backup** page, select the backup copy from which you want to restore. To find the backup, do one of the following options:
- * Search for a specific backup name or a partial backup name
- * Filter the backup list by selecting the filter icon and a date and time range. Select if you want backups that contain VMware snapshots, if you want mounted backups, and primary location.
- Select **OK** to return to the wizard.
-1. On the **Select Scope** page, select **Particular virtual disk** in the Restore scope field, then select the virtual disk and destination datastore.
-1. On the **Select Location** page, select the snapshot copy that you want to restore.
-1. Review the **Summary** page and then select **Finish**.
-1. **Optional:** Monitor the operation progress by clicking Recent Tasks at the bottom of the screen.
-
-## Recovery of Cloud Backup for Virtual Machines internal database
-
-You can use the maintenance console to restore a specific backup of the MySQL database (also called an NSM database) for Cloud Backup for Virtual Machines.
-
-1. Open a maintenance console window.
-1. From the main menu, enter option **1) Application Configuration**.
-1. From the Application Configuration menu, enter option **6) MySQL backup and restore**.
-1. From the MySQL Backup and Restore Configuration menu, enter option **2) List MySQL backups**. Make note of the backup you want to restore.
-1. From the MySQL Backup and Restore Configuration menu, enter option **3) Restore MySQL backup**.
-1. At the prompt "Restore using the most recent backup," enter **n**.
-1. At the prompt "Backup to restore from," enter the backup name, and then select **Enter**.
- The selected backup MySQL database will be restored to its original location.
-
-If you need to change the MySQL database backup configuration, you can modify:
-* The backup location (the default is: `/opt/netapp/protectionservice/mysqldumps`)
-* The number of backups kept (the default value is three)
-* The time of day the backup is recorded (the default value is 12:39 a.m.)
-
-1. Open a maintenance console window.
-1. From the main menu, enter option **1) Application Configuration**.
-1. From the Application Configuration menu, enter option **6) MySQL backup and restore**.
-1. From the MySQL Backup & Restore Configuration menu, enter option **1) Configure MySQL backup**.
--
- :::image type="content" source="./media/cloud-backup/mysql-backup-configuration.png" alt-text="Screenshot of the CLI maintenance menu depicting menu options." lightbox="./media/cloud-backup/mysql-backup-configuration.png":::
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
The following components are necessary for the SAP installation:
- `jq` version 1.6 - `ansible` version 2.9.27 - `netaddr` version 0.8.0-- The SAP Bill of Materials (BOM), as generated by ACSS. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0003ms.yaml`, `HANA_2_00_063_v0001ms.yaml` `SUM20SP14_latest.yaml`, `SWPM20SP12_latest.yaml`). They provide the following information:
+- The SAP Bill of Materials (BOM), as generated by ACSS. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0003ms.yaml`, `HANA_2_00_064_v0001ms.yaml`, `SUM20SP14_latest.yaml`, `SWPM20SP12_latest.yaml`). They provide the following information:
- The full name of the SAP package (`name`) - The package name with its file extension as downloaded (`archive`) - The checksum of the package as specified by SAP (`checksum`)
You also can [run scripts to automate this process](#option-1-upload-software-co
- For S/4HANA 2020 SPS 03, make following folders
- 1. **HANA_2_00_063_v0001ms**
+ 1. **HANA_2_00_064_v0001ms**
1. **S42020SPS03_v0003ms** 1. **SWPM20SP12_latest** 1. **SUM20SP14_latest** - For S/4HANA 2021 ISS 00, make following folders
- 1. **HANA_2_00_063_v0001ms**
+ 1. **HANA_2_00_064_v0001ms**
1. **S4HANA_2021_ISS_v0001ms** 1. **SWPM20SP12_latest** 1. **SUM20SP14_latest**
You also can [run scripts to automate this process](#option-1-upload-software-co
- For S/4HANA 2020 SPS 03, 1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
- 1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
+ 1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml) - For S/4HANA 2021 ISS 00, 1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
- 1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
+ 1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
You also can [run scripts to automate this process](#option-1-upload-software-co
1. [S4HANA_2021_ISS_v0001ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-app-inifile-param.j2) 1. [S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2) 1. [S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2)
- 1. [S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2)
1. [S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2) 1. [S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2) 1. [S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2)
You also can [run scripts to automate this process](#option-1-upload-software-co
- For S/4HANA 2020 SPS 03, 1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
- 1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
+ 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml) - For S/4HANA 2021 ISS 00, 1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
- 1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
+ 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
You can install a maximum of 10 Application Servers, excluding the Primary Appli
### SAP package version changes
-When SAP changes the version of packages for a component in the BOM, you might encounter problems with the automated installation shell script. It's recommended to download your SAP installation media as soon as possible to avoid issues.
+1. When SAP changes the version of packages for a component in the BOM, you might encounter problems with the automated installation shell script. It's recommended to download your SAP installation media as soon as possible to avoid issues.
If you encounter this problem, follow these steps:
1. Reupload the BOM file(s) in the subfolder (`S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms`) of the "boms" folder
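As an illustration of the re-upload step, a minimal `azcopy` sketch follows; the folder path inside the `sapbits` container and the SAS token are placeholders that must match the structure you created earlier:

```bash
# Re-upload a corrected BOM file into its subfolder under the "boms" folder.
# <path-to-boms-folder> and <sas-token> are placeholders.
azcopy copy "./S41909SPS03_v0011ms.yaml" \
  "https://<your-storage-account>.blob.core.windows.net/sapbits/<path-to-boms-folder>/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml?<sas-token>"
```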
+### Special characters like $ in the S-user password are not accepted while downloading the BOM
+
+1. Follow the step-by-step instructions up to cloning the 'SAP Automation repository from GitHub' in the **Download SAP media** section.
+
+1. Before running the Ansible playbook, set the `SPASS` environment variable as shown below. The single quotes must be included in the command:
+
+ ```bash
+ export SPASS='password_with_special_chars'
+ ```
+1. Then run the Ansible playbook:
+
+ ```bash
+ ansible-playbook ./sap-automation/deploy/ansible/playbook_bom_downloader.yaml -e "bom_base_name=S41909SPS03_v0011ms" -e "deployer_kv_name=dummy_value" -e "s_user=<username>" -e "s_password=$SPASS" -e "sapbits_access_key=<storageAccountAccessKey>" -e "sapbits_location_base_path=<containerBasePath>"
+ ```
+
+- For `<username>`, use your SAP username.
+- For `<bom_base_name>`, use the SAP version you want to install, that is, **_S41909SPS03_v0011ms_**, **_S42020SPS03_v0003ms_**, or **_S4HANA_2021_ISS_v0001ms_**.
+- For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the [previous section](#download-supporting-software).
+- For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the [previous section](#download-supporting-software).
+ The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`
+
+This should resolve the problem, and you can proceed with the next steps as described in the section.
## Next steps - [Monitor SAP system from Azure portal](monitor-portal.md)
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/role-based-access-control.md
Previously updated : 08/02/2022 Last updated : 08/23/2022
Use the following table to determine access needs for your LUIS application.
These custom roles only apply to authoring (Language Understanding Authoring) and not prediction resources (Language Understanding). > [!NOTE]
-> * "Owner" and "Contributor" roles take priority over the custom LUIS roles.
-> * Azure Active Directory (Azure AD) is only used with custom LUIS roles.
+> * *Owner* and *Contributor* roles take priority over the custom LUIS roles.
+> * Azure Active Directory (Azure AD) is only used with custom LUIS roles.
+> * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in the LUIS portal.
### Cognitive Services LUIS reader
A user that is responsible for building and modifying LUIS application, as a col
### Cognitive Services LUIS owner
+> [!NOTE]
+> * If you are assigned as both an *Owner* and a *LUIS Owner*, you will be shown as *LUIS Owner* in the LUIS portal.
+ These users are the gatekeepers for LUIS applications in a production environment. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments. :::row:::
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md
+
+ Title: Role-based access control for the Language service
+
+description: Learn how to use Azure RBAC for managing individual access to Azure resources.
++++++ Last updated : 08/23/2022++++
+# Language role-based access control
+
+Azure Cognitive Service for Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your projects authoring resources. See the [Azure RBAC documentation](/azure/role-based-access-control/) for more information.
+
+## Enable Azure Active Directory authentication
+
+To use Azure RBAC, you must enable Azure Active Directory authentication. You can [create a new resource with a custom subdomain](../../authentication.md#create-a-resource-with-a-custom-subdomain) or [create a custom subdomain for your existing resource](../../cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources).
+
+## Add role assignment to Language resource
+
+Azure RBAC can be assigned to a Language resource. To grant access to an Azure resource, you add a role assignment.
+1. In the [Azure portal](https://ms.portal.azure.com/), select **All services**.
+1. Select **Cognitive Services**, and navigate to your specific Language resource.
+ > [!NOTE]
+ > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
+
+1. Select **Access control (IAM)** on the left navigation pane.
+1. Select **Add**, then select **Add role assignment**.
+1. On the **Role** tab on the next screen, select a role you want to add.
+1. On the **Members** tab, select a user, group, service principal, or managed identity.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
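The same assignment can be made from the command line. A minimal Azure CLI sketch follows; the resource name, resource group, and assignee are placeholders:

```bash
# Look up the Language resource ID, then assign a custom Language role at that scope.
SCOPE=$(az cognitiveservices account show \
  --name my-language-resource \
  --resource-group my-resource-group \
  --query id --output tsv)

az role assignment create \
  --assignee user@contoso.com \
  --role "Cognitive Services Language Reader" \
  --scope "$SCOPE"
```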
+
+## Language role types
+
+Use the following table to determine access needs for your Language projects.
+
+These custom roles only apply to Language resources.
+> [!NOTE]
+> * All prebuilt capabilities are accessible to all roles
+> * *Owner* and *Contributor* roles take priority over the custom language roles
+> * Azure Active Directory (Azure AD) is only used with custom Language roles
+> * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in the Language Studio portal.
++
+### Cognitive Services Language reader
+
+A user who should only validate and review the Language apps, typically a tester who ensures the application is performing well before the project is deployed. They may want to review the application's assets to notify the app developers of any changes that need to be made, but they don't have direct access to make them. Readers have access to view the evaluation results.
++
+ :::column span="":::
+ **Capabilities**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * Read
+ * Test
+ :::column-end:::
+ :::column span="":::
+ All GET APIs under:
+ * [Language authoring conversational language understanding APIs](/rest/api/language/conversational-analysis-authoring)
+ * [Language authoring text analysis APIs](/rest/api/language/text-analysis-authoring)
+ * [Question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+ Only `TriggerExportProjectJob` POST operation under:
+ * [Language authoring conversational language understanding export API](/rest/api/language/conversational-analysis-authoring/export?tabs=HTTP)
+ * [Language authoring text analysis export API](/rest/api/language/text-analysis-authoring/export?tabs=HTTP)
+ Only Export POST operation under:
+ * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export)
+ All the Batch Testing Web APIs
+ * [Language Runtime CLU APIs](/rest/api/language/conversation-analysis-runtime)
+ * [Language Runtime Text Analysis APIs](/rest/api/language/text-analysis-runtime)
+ :::column-end:::
+
+### Cognitive Services Language writer
+
+A user who is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn't have access to deploy this application to the runtime, as they may accidentally reflect their changes in production. They also shouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They may also create new applications under this resource, but with the restrictions mentioned.
+
+ :::column span="":::
+ **Capabilities**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * All functionalities under Cognitive Services Language Reader.
+ * Ability to:
+ * Train
+ * Write
+ :::column-end:::
+ :::column span="":::
+ * All APIs under Language reader
+ * All POST, PUT and PATCH APIs under:
+ * [Language conversational language understanding APIs](/rest/api/language/conversational-analysis-authoring)
+ * [Language text analysis APIs](/rest/api/language/text-analysis-authoring)
+ * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+ Except for
+ * Delete deployment
+ * Delete trained model
+ * Delete Project
+ * Deploy Model
+ :::column-end:::
+
+### Cognitive Services Language owner
+
+> [!NOTE]
+> If you are assigned as both an *Owner* and a *Language Owner*, you will be shown as *Cognitive Services Language owner* in the Language Studio portal.
++
+These users are the gatekeepers for the Language applications in production environments. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments.
+
+ :::column span="":::
+ **Functionality**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * All functionalities under Cognitive Services Language Writer
+ * Deploy
+ * Delete
+ :::column-end:::
+ :::column span="":::
+ All APIs available under:
+ * [Language authoring conversational language understanding APIs](/rest/api/language/conversational-analysis-authoring)
+ * [Language authoring text analysis APIs](/rest/api/language/text-analysis-authoring)
+ * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+
+ :::column-end:::
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Summarization is one of the features offered by [Azure Cognitive Service for Lan
This documentation contains the following article types:
-* [**Quickstarts**](quickstart.md?pivots=rest-api&tabs=document-summarization) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to/document-summarization.md) contain instructions for using the service in more specific or customized ways.
+* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service.
+* **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
Text summarization is a broad topic, consisting of several approaches to represent relevant information in text. The document summarization feature described in this documentation enables you to use extractive text summarization to produce a summary of a document. It extracts sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. For example, it can condense articles, papers, or documents to key sentences.
Document summarization supports the following features:
This documentation contains the following article types:
-* [**Quickstarts**](quickstart.md?pivots=rest-api&tabs=conversation-summarization) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to/conversation-summarization.md) contain instructions for using the service in more specific or customized ways.
+* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=conversation-summarization)** are getting-started instructions to guide you through making requests to the service.
+* **[How-to guides](how-to/conversation-summarization.md)** contain instructions for using the service in more specific or customized ways.
Conversation summarization is a broad topic, consisting of several approaches to represent relevant information in text. The conversation summarization feature described in this documentation enables you to use abstractive text summarization to produce a summary of issues and resolutions in transcripts of web chats and service call transcripts between customer-service agents, and your customers.
Conversation summarization is a broad topic, consisting of several approaches to
## When to use conversation summarization
-* When there are predefined aspects of an "issue" and "resolution", such as:
- * The reason for a service chat/call (the issue).
- * That resolution for the issue.
+* When there are aspects of an "issue" and "resolution", such as:
+  * The reason for a service chat/call (the issue).
+  * The resolution for the issue.
* You only want a summary that focuses on related information about issues and resolutions. * When there are two participants in the conversation, and you want to summarize what each had said.
The conversation summarization feature would simplify the text into the following:
To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use summarization: -
-|Development option |Description | Links |
+|Development option |Description | Links |
|||| | Language Studio | A web-based platform that enables you to try document summarization without needing to write code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/summarization) <br> • [Quickstart: Use Language Studio](../language-studio.md) | | REST API or Client library (Azure SDK) | Integrate document summarization into your applications using the REST API, or the client library available in a variety of languages. | • [Quickstart: Use document summarization](quickstart.md) | - # [Conversation summarization](#tab/conversation-summarization) To use this feature, you submit raw text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use conversation summarization: -
-|Development option |Description | Links |
+|Development option |Description | Links |
|||| | REST API | Integrate conversation summarization into your applications using the REST API. | [Quickstart: Use conversation summarization](quickstart.md) |
To use this feature, you submit raw text for analysis and handle the API output
* Summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information. * Summarization works with a variety of written languages. See [language support](language-support.md?tabs=document-summarization) for more information. - # [Conversation summarization](#tab/conversation-summarization) * Conversation summarization takes structured text for analysis. See the [data and service limits](../concepts/data-limits.md) for more information.
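As a concrete illustration of the REST API option for document summarization described above, a minimal sketch follows; the endpoint, key, and API version are placeholders, and the request shape should be checked against the quickstart for the version you use:

```bash
# Submit an asynchronous extractive summarization job to a Language resource.
# ENDPOINT, KEY, and <api-version> are placeholders.
ENDPOINT="https://<your-language-resource>.cognitiveservices.azure.com"
KEY="<your-key>"

curl -i -X POST "$ENDPOINT/language/analyze-text/jobs?api-version=<api-version>" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -d '{
    "displayName": "Document summarization example",
    "analysisInput": {
      "documents": [
        { "id": "1", "language": "en", "text": "Place the text you want to summarize here." }
      ]
    },
    "tasks": [
      { "kind": "ExtractiveSummarization", "parameters": { "sentenceCount": 3 } }
    ]
  }'
# A 202 response returns an operation-location header; poll that URL with a GET
# request (same key header) to retrieve the extracted summary sentences.
```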
As you use document summarization in your applications, see the following refere
|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) | |Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
-## Responsible AI
+## Responsible AI
An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
You can get started building your first container app [using the quickstarts](ge
[Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of your functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container based compute platforms allowing teams to reuse code as environment requirements change. ### Azure Spring Apps
-[Azure Spring Apps](../spring-apps/overview.md) makes it easy to deploy Spring Boot microservice applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. If your team or organization is predominantly Spring, Azure Spring Apps is an ideal option.
+[Azure Spring Apps](../spring-apps/overview.md) is a platform as a service (PaaS) for Spring developers. If you want to run Spring Boot, Spring Cloud, or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
### Azure Red Hat OpenShift [Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
You can get started building your first container app [using the quickstarts](ge
## Next steps > [!div class="nextstepaction"]
-> [Deploy your first container app](get-started.md)
+> [Deploy your first container app](get-started.md)
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
Additionally, the following resources can help you create your own disaster reco
To take advantage of availability zones, you must enable zone redundancy when you create the Container Apps environment. The environment must include a virtual network (VNET) with an infrastructure subnet. To ensure proper distribution of replicas, you should configure your app's minimum and maximum replica count with values that are divisible by three. The minimum replica count should be at least three.
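Besides the portal flow described next, the same environment can be created from the Azure CLI. A minimal sketch follows; the names, location, and subnet resource ID are placeholders, and the `containerapp` CLI extension is assumed:

```bash
# Create a zone-redundant Container Apps environment on an existing virtual
# network with an infrastructure subnet. All names and IDs are placeholders.
az containerapp env create \
  --name my-environment \
  --resource-group my-resource-group \
  --location eastus2 \
  --infrastructure-subnet-resource-id "/subscriptions/<sub-id>/resourceGroups/my-resource-group/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/infrastructure" \
  --zone-redundant
```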
-### Enabled zone redundancy via the Azure portal
+### Enable zone redundancy via the Azure portal
To create a container app in an environment with zone redundancy enabled using the Azure portal:
cosmos-db Transactional Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/transactional-batch.md
Title: Transactional batch operations in Azure Cosmos DB using the .NET SDK
-description: Learn how to use TransactionalBatch in the Azure Cosmos DB .NET SDK to perform a group of point operations that either succeed or fail.
+ Title: Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK
+description: Learn how to use TransactionalBatch in the Azure Cosmos DB .NET or Java SDK to perform a group of point operations that either succeed or fail.
Last updated 10/27/2020
-# Transactional batch operations in Azure Cosmos DB using the .NET SDK
+# Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] Transactional batch describes a group of point operations that need to either succeed or fail together with the same partition key in a container. In the .NET and Java SDKs, the `TransactionalBatch` class is used to define this batch of operations. If all operations succeed in the order they're described within the transactional batch operation, the transaction will be committed. However, if any operation fails, the entire transaction is rolled back.
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 04/25/2022 Last updated : 08/23/2022
Remove-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/00000000-00
Scheduled exports are affected by the time and day of week of when you initially create the export. When you create a scheduled export, the export runs at the same frequency for each export that runs later. For example, for a daily export of month-to-date costs export set at a daily frequency, the export runs daily. Similarly for a weekly export, the export runs every week on the same day as it is scheduled. The exact delivery time of the export isn't guaranteed and the exported data is available within four hours of run time.
+- When you create an export using the [Exports API](/rest/api/cost-management/exports/create-or-update?tabs=HTTP), specify the `recurrencePeriod` in UTC time. The API doesn't convert your local time to UTC (see the sketch after this list).
+ - Example - A weekly export is scheduled on Friday, August 19 with `recurrencePeriod` set to 2:00 PM. The API receives the input as 2:00 PM UTC, Friday, August 19. The weekly export will be scheduled to run every Friday.
+- When you create an export in the Azure portal, its start date time is automatically converted to the equivalent UTC time.
+ - Example - A weekly export is scheduled on Friday, August 19 with the local time of 2:00 AM IST (UTC+5:30) from the Azure portal. The API receives the input as 8:30 PM, Thursday, August 18th. The weekly export will be scheduled to run every Thursday.
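To make the UTC behavior concrete, here's a minimal PowerShell sketch (not taken from the article) that creates a weekly export through the Exports API with `Invoke-AzRestMethod`, passing a `recurrencePeriod` that is already expressed in UTC. The scope, export name, storage account ID, `api-version`, and request-body shape are illustrative assumptions; confirm them against the Exports API reference linked above.

```powershell
# Minimal sketch: schedule a weekly export whose recurrencePeriod is given in UTC.
# Scope, export name, storage details, and api-version are placeholder assumptions.
$scope = 'subscriptions/00000000-0000-0000-0000-000000000000'
$body = @{
    properties = @{
        schedule = @{
            status           = 'Active'
            recurrence       = 'Weekly'
            recurrencePeriod = @{
                from = '2022-08-19T14:00:00Z'   # 2:00 PM UTC, Friday, August 19
                to   = '2023-08-19T14:00:00Z'
            }
        }
        format       = 'Csv'
        deliveryInfo = @{
            destination = @{
                resourceId     = '<storage account resource ID>'
                container      = 'exports'
                rootFolderPath = 'weekly'
            }
        }
        definition = @{
            type      = 'Usage'
            timeframe = 'MonthToDate'
        }
    }
} | ConvertTo-Json -Depth 10

Invoke-AzRestMethod -Method PUT `
    -Path "/$scope/providers/Microsoft.CostManagement/exports/DemoExport?api-version=2021-10-01" `
    -Payload $body
```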
+ Each export creates a new file, so older exports aren't overwritten. #### Create an export for multiple subscriptions
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md
Previously updated : 08/05/2022 Last updated : 08/22/2022
The below table lists the properties supported by a delta sink. You can edit the
| Compression type | The compression type of the delta table | no | `bzip2`<br>`gzip`<br>`deflate`<br>`ZipDeflate`<br>`snappy`<br>`lz4` | compressionType | | Compression level | Choose whether the compression completes as quickly as possible or if the resulting file should be optimally compressed. | required if `compressionType` is specified. | `Optimal` or `Fastest` | compressionLevel | | Vacuum | Specify the retention threshold in hours for older versions of the table. A value of 0 or less defaults to 30 days | yes | Integer | vacuum |
+| Table action | Tells ADF what to do with the target Delta table in your sink. You can leave it as-is and append new rows, overwrite the existing table definition and data with new metadata and data, or keep the existing table structure but first truncate all rows, then insert the new rows. | no | None, Truncate, Overwrite | truncate, overwrite |
| Update method | Specify which update operations are allowed on the delta lake. For methods that aren't insert, a preceding alter row transformation is required to mark rows. | yes | `true` or `false` | deletable <br> insertable <br> updateable <br> merge | | Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. As a result, you may notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true | | Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
You can create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device by using the Azure portal, templates, and Azure PowerShell cmdlets, and via the Azure CLI or Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge Pro GPU device by using the Azure portal. > [!IMPORTANT]
-> We recommend that you enable multifactor authentication for the user who manages VMs that are deployed on your device from the cloud.
+> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication, see the [Azure AD Multi-Factor Authentication tutorial](/articles/active-directory/authentication/tutorial-enable-azure-mfa.md).
## VM deployment workflow
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
# View and manage alerts from the Azure portal
+> [!IMPORTANT]
+> The **Alerts** page is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++ This article describes how to manage your alerts from Microsoft Defender for IoT on the Azure portal. If you're integrating with Microsoft Sentinel, the alert details and entity information are also sent to Microsoft Sentinel, where you can also view them from the **Alerts** page.
defender-for-iot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/workbooks.md
Last updated 06/02/2022
# Use Azure Monitor workbooks in Microsoft Defender for IoT
+> [!IMPORTANT]
+> The **Azure Monitor workbooks** are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ Azure Monitor workbooks provide graphs, charts, and dashboards that visually reflect data stored in your Azure Resource Graph subscriptions and are available directly in Microsoft Defender for IoT. In the Azure portal, use the Defender for IoT **Workbooks** page to view workbooks created by Microsoft and provided out-of-the-box, or created by customers and shared across the community.
event-grid Communication Services Voice Video Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md
This section contains an example of what that data would look like for each even
"subject": "call/{serverCallId}/startedBy/8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", "data": { "startedBy": {
- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
- "communicationUser": {
- "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ "communicationIdentifier": {
+ "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
+ "communicationUser": {
+ "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ }
}, "role": "{role}" },
This section contains an example of what that data would look like for each even
"data": { "durationOfCall": 49.728617199999995, "startedBy": {
- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
- "communicationUser": {
- "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ "communicationIdentifier": {
+ "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
+ "communicationUser": {
+ "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ }
}, "role": "{role}" },
This section contains an example of what that data would look like for each even
"subject": "call/{serverCallId}/participant/8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", "data": { "user": {
- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
- "communicationUser": {
- "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ "communicationIdentifier": {
+ "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
+ "communicationUser": {
+ "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ }
}, "role": "{role}" },
This section contains an example of what that data would look like for each even
"participantId": "041e3b8a-1cce-4ebf-b587-131312c39410", "endpointType": "acs-web-test-client-ACSWeb(3617/1.0.0.0/os=windows; browser=chrome; browserVer=93.0; deviceType=Desktop)/TsCallingVersion=_TS_BUILD_VERSION_/Ovb=_TS_OVB_VERSION_", "startedBy": {
- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
- "communicationUser": {
- "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ "communicationIdentifier": {
+ "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
+ "communicationUser": {
+ "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ }
}, "role": "{role}" },
This section contains an example of what that data would look like for each even
"subject": "call/aHR0cHM6Ly9jb252LWRldi0yMS5jb252LWRldi5za3lwZS5uZXQ6NDQzL2NvbnYvbVQ4NnVfempBMG05QVM4VnRvSWFrdz9pPTAmZT02Mzc2Nzc3MTc2MDAwMjgyMzA/participant/8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8", "data": { "user": {
- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8",
- "communicationUser": {
- "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8"
+ "communicationIdentifier": {
+ "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8",
+ "communicationUser": {
+ "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8"
+ }
}, "role": "{role}" },
This section contains an example of what that data would look like for each even
"participantId": "750a1442-3156-4914-94d2-62cf73796833", "endpointType": "acs-web-test-client-ACSWeb(3617/1.0.0.0/os=windows; browser=chrome; browserVer=93.0; deviceType=Desktop)/TsCallingVersion=_TS_BUILD_VERSION_/Ovb=_TS_OVB_VERSION_", "startedBy": {
- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
- "communicationUser": {
- "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ "communicationIdentifier": {
+ "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1",
+ "communicationUser": {
+ "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
+ }
}, "role": "{role}" },
event-hubs Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-application.md
For Schema Registry built-in roles, see [Schema Registry roles](schema-registry-
## Authenticate from an application A key advantage of using Azure AD with Event Hubs is that your credentials no longer need to be stored in your code. Instead, you can request an OAuth 2.0 access token from Microsoft identity platform. Azure AD authenticates the security principal (a user, a group, or service principal) running the application. If authentication succeeds, Azure AD returns the access token to the application, and the application can then use the access token to authorize requests to Azure Event Hubs.
-Following sections shows you how to configure your native application or web application for authentication with Microsoft identity platform 2.0. For more information about Microsoft identity platform 2.0, see [Microsoft identity platform (v2.0) overview](../active-directory/develop/v2-overview.md).
+The following sections show you how to configure your native application or web application for authentication with Microsoft identity platform 2.0. For more information about Microsoft identity platform 2.0, see [Microsoft identity platform (v2.0) overview](../active-directory/develop/v2-overview.md).
For an overview of the OAuth 2.0 code grant flow, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md).
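As a rough illustration of that token flow, the PowerShell sketch below (not part of the article) acquires an Azure AD access token for the Event Hubs resource with `Get-AzAccessToken` and presents it as a Bearer credential on the Event Hubs REST send endpoint. In application code an SDK credential class normally handles this for you; the namespace name, event hub name, and the assumption that the signed-in identity holds an Event Hubs data-plane role are all illustrative.

```powershell
# Sketch: authenticate to Event Hubs with an Azure AD token instead of a connection string.
# Assumes the signed-in identity has a data-plane role (for example, Azure Event Hubs Data Sender).
$namespace = 'contoso-ehns'   # placeholder namespace
$eventHub  = 'orders'         # placeholder event hub

# Request an OAuth 2.0 access token for the Event Hubs resource from Azure AD.
$token = (Get-AzAccessToken -ResourceUrl 'https://eventhubs.azure.net').Token

# Present the token as a Bearer credential on the Event Hubs REST "Send event" endpoint.
$headers = @{
    Authorization  = "Bearer $token"
    'Content-Type' = 'application/atom+xml;type=entry;charset=utf-8'
}
Invoke-RestMethod -Method Post `
    -Uri "https://$namespace.servicebus.windows.net/$eventHub/messages" `
    -Headers $headers `
    -Body '{"message":"authenticated with Azure AD"}'
```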
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-scalability.md
There are two factors which influence scaling with Event Hubs.
## Throughput units
-The throughput capacity of Event Hubs is controlled by *throughput units*. Throughput units are pre-purchased units of capacity. A single throughput lets you:
+The throughput capacity of Event Hubs is controlled by *throughput units*. Throughput units are pre-purchased units of capacity. A single throughput unit lets you:
* Ingress: Up to 1 MB per second or 1000 events per second (whichever comes first). * Egress: Up to 2 MB per second or 4096 events per second.
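Putting those per-unit limits together, a back-of-the-envelope sizing calculation looks like the following sketch. The traffic numbers are made up; check egress the same way against its own limits.

```powershell
# Rough sizing sketch with hypothetical traffic numbers.
# One throughput unit allows up to 1 MB/s or 1,000 events/s of ingress, whichever comes first.
$ingressMBps      = 3.5     # expected ingress in MB per second (assumption)
$ingressEventsSec = 2500    # expected ingress in events per second (assumption)

$unitsForBytes  = [math]::Ceiling($ingressMBps / 1)
$unitsForEvents = [math]::Ceiling($ingressEventsSec / 1000)

# Enough throughput units are needed to satisfy the stricter of the two limits.
$throughputUnits = [math]::Max($unitsForBytes, $unitsForEvents)
$throughputUnits   # 4 in this example
```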
For more information about the auto-inflate feature, see [Automatically scale th
## Processing units
- [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit*(PU). You can purchase 1, 2, 4, 8 or 16 processing Units for each Event Hubs Premium namespace.
+ [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit* (PU). You can purchase 1, 2, 4, 8 or 16 processing Units for each Event Hubs Premium namespace.
How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more.
-For example, Event Hubs Premium namespace with 1 PU and 1 event hub(100 partitions) can approximately offer core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress for both AMQP or Kafka workloads.
+For example, an Event Hubs Premium namespace with 1 PU and 1 event hub (100 partitions) can approximately offer core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress for both AMQP and Kafka workloads.
To learn about configuring PUs for a premium tier namespace, see [Configure processing units](configure-processing-units-premium-namespace.md).
event-hubs Resource Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-overview.md
Title: Resource governance with application groups description: This article describes how to enable resource governance using application groups. Previously updated : 05/24/2022 Last updated : 08/23/2022
When policies for application groups are applied, the client application workloa
### Disabling application groups An application group is enabled by default, which means all client applications can access the Event Hubs namespace to publish and consume events, as long as they adhere to the application group policies.
-When an application group is disabled, client applications of that application group won't be able to connect to the Event Hubs namespace and all the existing connections that are already established from client applications are terminated.
+When an application group is disabled, the client can still connect to the event hub, but authorization fails and the client connection is then closed. As a result, the diagnostic logs show many successful open and close connections, along with the same number of authorization failures.
## Next steps For instructions on how to create and manage application groups, see [Resource governance for client applications using Azure portal](resource-governance-with-app-groups.md)
governance Machine Configuration Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-definition.md
Title: How to create custom machine configuration policy definitions description: Learn how to create a machine configuration policy. Previously updated : 07/25/2022 Last updated : 08/09/2022
Create a policy definition that audits using a custom
configuration package, in a specified path: ```powershell
-New-GuestConfigurationPolicy `
- -PolicyId 'My GUID' `
- -ContentUri '<paste the ContentUri output from the Publish command>' `
- -DisplayName 'My audit policy.' `
- -Description 'Details about my policy.' `
- -Path './policies' `
- -Platform 'Windows' `
- -PolicyVersion 1.0.0 `
- -Verbose
+$PolicyConfig = @{
+  PolicyId = 'My GUID'
+  ContentUri = '<ContentUri output from the Publish command>'
+ DisplayName = 'My audit policy'
+ Description = 'My audit policy'
+ Path = './policies'
+ Platform = 'Windows'
+  PolicyVersion = '1.0.0'
+}
+
+New-GuestConfigurationPolicy @PolicyConfig
``` Create a policy definition that deploys a configuration using a custom configuration package, in a specified path: ```powershell
-New-GuestConfigurationPolicy `
- -PolicyId 'My GUID' `
- -ContentUri '<paste the ContentUri output from the Publish command>' `
- -DisplayName 'My audit policy.' `
- -Description 'Details about my policy.' `
- -Path './policies' `
- -Platform 'Windows' `
- -PolicyVersion 1.0.0 `
- -Mode 'ApplyAndAutoCorrect' `
- -Verbose
+$PolicyConfig2 = @{
+  PolicyId = 'My GUID'
+  ContentUri = '<ContentUri output from the Publish command>'
+ DisplayName = 'My audit policy'
+ Description = 'My audit policy'
+ Path = './policies'
+ Platform = 'Windows'
+  PolicyVersion = '1.0.0'
+ Mode = 'ApplyAndAutoCorrect'
+}
+
+New-GuestConfigurationPolicy @PolicyConfig2
``` The cmdlet output returns an object containing the definition display name and
The following example creates a policy definition to audit a service, where the
list at the time of policy assignment. ```powershell
-# This DSC Resource text:
+# This DSC resource definition...
Service 'UserSelectedNameExample' { Name = 'ParameterValue'
Service 'UserSelectedNameExample'
State = 'Running' }
-# Would require the following hashtable:
-$PolicyParameterInfo = @(
+# ...can be converted to a hash table:
+$PolicyParameterInfo = @(
@{
- Name = 'ServiceName' # Policy parameter name (mandatory)
- DisplayName = 'windows service name.' # Policy parameter display name (mandatory)
- Description = 'Name of the windows service to be audited.' # Policy parameter description (optional)
- ResourceType = 'Service' # DSC configuration resource type (mandatory)
- ResourceId = 'UserSelectedNameExample' # DSC configuration resource id (mandatory)
- ResourcePropertyName = 'Name' # DSC configuration resource property name (mandatory)
- DefaultValue = 'winrm' # Policy parameter default value (optional)
- AllowedValues = @('BDESVC','TermService','wuauserv','winrm') # Policy parameter allowed values (optional)
- }
-)
-
-New-GuestConfigurationPolicy `
- -PolicyId 'My GUID' `
- -ContentUri '<paste the ContentUri output from the Publish command>' `
- -DisplayName 'Audit Windows Service.' `
- -Description 'Audit if a Windows Service isn't enabled on Windows machine.' `
- -Path '.\policies' `
- -Parameter $PolicyParameterInfo `
- -PolicyVersion 1.0.0
+ # Policy parameter name (mandatory)
+ Name = 'ServiceName'
+ # Policy parameter display name (mandatory)
+ DisplayName = 'windows service name.'
+ # Policy parameter description (optional)
+ Description = 'Name of the windows service to be audited.'
+ # DSC configuration resource type (mandatory)
+ ResourceType = 'Service'
+ # DSC configuration resource id (mandatory)
+ ResourceId = 'UserSelectedNameExample'
+ # DSC configuration resource property name (mandatory)
+ ResourcePropertyName = 'Name'
+ # Policy parameter default value (optional)
+ DefaultValue = 'winrm'
+ # Policy parameter allowed values (optional)
+ AllowedValues = @('BDESVC','TermService','wuauserv','winrm')
+ })
+
+# ...and then passed into the `New-GuestConfigurationPolicy` cmdlet
+$PolicyParam = @{
+ PolicyId = 'My GUID'
+ ContentUri = '<ContentUri output from the Publish command>'
+ DisplayName = 'Audit Windows Service.'
+ Description = "Audit if a Windows Service isn't enabled on Windows machine."
+ Path = '.\policies'
+ Parameter = $PolicyParameterInfo
+  PolicyVersion = '1.0.0'
+}
+
+New-GuestConfigurationPolicy @PolicyParam
``` ### Publish the Azure Policy definition
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md
Title: Reference - Azure Policy guest configuration baseline for Windows description: Details of the Windows baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 05/12/2022 Last updated : 08/23/2022
implementations:
- **Vulnerabilities in security configuration on your machines should be remediated** in Azure Security Center
-For more information, see [Azure Policy guest configuration](../../machine-configuration/overview.md) and
+For more information, see [Azure Policy guest configuration](../concepts/guest-configuration.md) and
[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).
+## Account Policies-Password Policy
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Account Lockout Duration<br /><sub>(AZ-WIN-73312)</sub> |<br />**Key Path**: [System Access]LockoutDuration<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 15<br /><sub>(Policy)</sub> |Warning |
+
+## Administrative Template - Windows Defender
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Configure detection for potentially unwanted applications<br /><sub>(AZ-WIN-202219)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\PUAProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Scan all downloaded files and attachments<br /><sub>(AZ-WIN-202221)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableIOAVProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off Microsoft Defender AntiVirus<br /><sub>(AZ-WIN-202220)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\DisableAntiSpyware<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Turn off real-time protection<br /><sub>(AZ-WIN-202222)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableRealtimeMonitoring<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn on e-mail scanning<br /><sub>(AZ-WIN-202218)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Scan\DisableEmailScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn on script scanning<br /><sub>(AZ-WIN-202223)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableScriptScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
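Each of these rows maps to a single registry value, so you can spot-check a machine manually before assigning the baseline. The sketch below is illustrative only and uses the "Configure detection for potentially unwanted applications" row above; the compliance scan that guest configuration performs is more thorough than a one-off registry read.

```powershell
# Illustrative spot check for baseline rule AZ-WIN-202219:
# the expected value is PUAProtection = 1 under the Windows Defender policy key.
$keyPath   = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows Defender'
$valueName = 'PUAProtection'

$value = (Get-ItemProperty -Path $keyPath -Name $valueName -ErrorAction SilentlyContinue).$valueName

if ($value -eq 1) {
    Write-Output "$valueName is set to the expected value (1)."
}
else {
    Write-Output "$valueName is missing or not set to 1; the baseline rule would flag this machine."
}
```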
+ ## Administrative Templates - Control Panel |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | ||||| |Allow Input Personalization<br /><sub>(AZ-WIN-00168)</sub> |**Description**: This policy enables the automatic learning component of input personalization that includes speech, inking, and typing. Automatic learning enables the collection of speech and handwriting patterns, typing history, contacts, and recent calendar information. It is required for the use of Cortana. Some of this collected information may be stored on the user's OneDrive, in the case of inking and typing; some of the information will be uploaded to Microsoft to personalize speech. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\InputPersonalization\AllowInputPersonalization<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Prevent enabling lock screen camera<br /><sub>(CCE-38347-1)</sub> |**Description**: Disables the lock screen camera toggle switch in PC Settings and prevents a camera from being invoked on the lock screen. By default, users can enable invocation of an available camera on the lock screen. If you enable this setting, users will no longer be able to enable or disable lock screen camera access in PC Settings, and the camera cannot be invoked on the lock screen.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenCamera<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Prevent enabling lock screen slide show<br /><sub>(CCE-38348-9)</sub> |**Description**: Disables the lock screen slide show settings in PC Settings and prevents a slide show from playing on the lock screen. By default, users can enable a slide show that will run after they lock the machine. If you enable this setting, users will no longer be able to modify slide show settings in PC Settings, and no slide show will ever start.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenSlideshow<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+
+## Administrative Templates - MS Security Guide
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Disable SMB v1 client (remove dependency on LanmanWorkstation)<br /><sub>(AZ-WIN-00122)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\DependsOnService<br />**OS**: WS2008, WS2008R2, WS2012<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= Bowser\0MRxSmb20\0NSI\0\0<br /><sub>(Registry)</sub> |Critical |
+|WDigest Authentication must be disabled.<br /><sub>(AZ-WIN-73497)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurityProviders\Wdigest\UseLogonCredential<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Important |
+
+## Administrative Templates - MSS
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|MSS: (DisableIPSourceRouting IPv6) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202213)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\Tcpip6\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
+|MSS: (DisableIPSourceRouting) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202244)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\Tcpip\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
+|MSS: (NoNameReleaseOnDemand) Allow the computer to ignore NetBIOS name release requests except from WINS servers<br /><sub>(AZ-WIN-202214)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\Netbt\Parameters\NoNameReleaseOnDemand<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|MSS: (SafeDllSearchMode) Enable Safe DLL search mode (recommended)<br /><sub>(AZ-WIN-202215)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\SafeDllSearchMode<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|MSS: (WarningLevel) Percentage threshold for the security event log at which the system will generate a warning<br /><sub>(AZ-WIN-202212)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Eventlog\Security\WarningLevel<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | 90<br /><sub>(Registry)</sub> |Informational |
+|Windows Server must be configured to prevent Internet Control Message Protocol (ICMP) redirects from overriding Open Shortest Path First (OSPF)-generated routes.<br /><sub>(AZ-WIN-73503)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\EnableICMPRedirect<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
## Administrative Templates - Network |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | ||||| |Enable insecure guest logons<br /><sub>(AZ-WIN-00171)</sub> |**Description**: This policy setting determines if the SMB client will allow insecure guest logons to an SMB server. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation\AllowInsecureGuestAuth<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Hardened UNC Paths - NETLOGON<br /><sub>(AZ_WIN_202250)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths\\\*\NETLOGON<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
+|Hardened UNC Paths - SYSVOL<br /><sub>(AZ_WIN_202251)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths\\\*\SYSVOL<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
|Minimize the number of simultaneous connections to the Internet or a Windows Domain<br /><sub>(CCE-38338-0)</sub> |**Description**: This policy setting prevents computers from connecting to both a domain based network and a non-domain based network at the same time. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WcmSvc\GroupPolicy\fMinimizeConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning | |Prohibit installation and configuration of Network Bridge on your DNS domain network<br /><sub>(CCE-38002-2)</sub> |**Description**: You can use this procedure to control user's ability to install and configure a network bridge. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_AllowNetBridge_NLA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning | |Prohibit use of Internet Connection Sharing on your DNS domain network<br /><sub>(AZ-WIN-00172)</sub> |**Description**: Although this "legacy" setting traditionally applied to the use of Internet Connection Sharing (ICS) in Windows 2000, Windows XP & Server 2003, this setting now freshly applies to the Mobile Hotspot feature in Windows 10 & Server 2016. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_ShowSharedAccessUI<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning | |Turn off multicast name resolution<br /><sub>(AZ-WIN-00145)</sub> |**Description**: LLMNR is a secondary name resolution protocol. With LLMNR, queries are sent using multicast over a local network link on a single subnet from a client computer to another client computer on the same subnet that also has LLMNR enabled. LLMNR does not require a DNS server or DNS client configuration, and provides name resolution in scenarios in which conventional DNS name resolution is not possible. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\EnableMulticast<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+## Administrative Templates - Security Guide
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Enable Structured Exception Handling Overwrite Protection (SEHOP)<br /><sub>(AZ-WIN-202210)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\kernel\DisableExceptionChainValidation<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|NetBT NodeType configuration<br /><sub>(AZ-WIN-202211)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\NetBT\Parameters\NodeType<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Warning |
+ ## Administrative Templates - System |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | ||||| |Block user from showing account details on sign-in<br /><sub>(AZ-WIN-00138)</sub> |**Description**: This policy prevents the user from showing account details (email address or user name) on the sign-in screen. If you enable this policy setting, the user cannot choose to show account details on the sign-in screen. If you disable or do not configure this policy setting, the user may choose to show account details on the sign-in screen.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\BlockUserFromShowingAccountDetailsOnSignin<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Boot-Start Driver Initialization Policy<br /><sub>(CCE-37912-3)</sub> |**Description**: This policy setting allows you to specify which boot-start drivers are initialized based on a classification determined by an Early Launch Antimalware boot-start driver. The Early Launch Antimalware boot-start driver can return the following classifications for each boot-start driver: - Good: The driver has been signed and has not been tampered with. - Bad: The driver has been identified as malware. It is recommended that you do not allow known bad drivers to be initialized. - Bad, but required for boot: The driver has been identified as malware, but the computer cannot successfully boot without loading this driver. - Unknown: This driver has not been attested to by your malware detection application and has not been classified by the Early Launch Antimalware boot-start driver. If you enable this policy setting you will be able to choose which boot-start drivers to initialize the next time the computer is started. If you disable or do not configure this policy setting, the boot start drivers determined to be Good, Unknown or Bad but Boot Critical are initialized and the initialization of drivers determined to be Bad is skipped. If your malware detection application does not include an Early Launch Antimalware boot-start driver or if your Early Launch Antimalware boot-start driver has been this setting has no effect and all boot-start drivers are initialized.<br />**Key Path**: SYSTEM\CurrentControlSet\Policies\EarlyLaunch\DriverLoadPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Warning |
+|Boot-Start Driver Initialization Policy<br /><sub>(CCE-37912-3)</sub> |**Description**: This policy setting allows you to specify which boot-start drivers are initialized based on a classification determined by an Early Launch Antimalware boot-start driver. The Early Launch Antimalware boot-start driver can return the following classifications for each boot-start driver: - Good: The driver has been signed and has not been tampered with. - Bad: The driver has been identified as malware. It is recommended that you do not allow known bad drivers to be initialized. - Bad, but required for boot: The driver has been identified as malware, but the computer cannot successfully boot without loading this driver. - Unknown: This driver has not been attested to by your malware detection application and has not been classified by the Early Launch Antimalware boot-start driver. If you enable this policy setting you will be able to choose which boot-start drivers to initialize the next time the computer is started. If you disable or do not configure this policy setting, the boot start drivers determined to be Good, Unknown or Bad but Boot Critical are initialized and the initialization of drivers determined to be Bad is skipped. If your malware detection application does not include an Early Launch Antimalware boot-start driver or if your Early Launch Antimalware boot-start driver has been disabled, this setting has no effect and all boot-start drivers are initialized.<br />**Key Path**: SYSTEM\CurrentControlSet\Policies\EarlyLaunch\DriverLoadPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Warning |
|Configure Offer Remote Assistance<br /><sub>(CCE-36388-7)</sub> |**Description**: This policy setting allows you to turn on or turn off Offer (Unsolicited) Remote Assistance on this computer. Help desk and support personnel will not be able to proactively offer assistance, although they can still respond to user assistance requests. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowUnsolicited<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning | |Configure Solicited Remote Assistance<br /><sub>(CCE-37281-3)</sub> |**Description**: This policy setting allows you to turn on or turn off Solicited (Ask for) Remote Assistance on this computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowToGetHelp<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical | |Do not display network selection UI<br /><sub>(CCE-38353-9)</sub> |**Description**: This policy setting allows you to control whether anyone can interact with available networks UI on the logon screen. If you enable this policy setting, the PC's network connectivity state cannot be changed without signing into Windows. If you disable or don't configure this policy setting, any user can disconnect the PC from the network or can connect the PC to other available networks without signing into Windows.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DontDisplayNetworkSelectionUI<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Do not enumerate connected users on domain-joined computers<br /><sub>(AZ-WIN-202216)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows\System\DontEnumerateConnectedUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Enable RPC Endpoint Mapper Client Authentication<br /><sub>(CCE-37346-4)</sub> |**Description**: This policy setting controls whether RPC clients authenticate with the Endpoint Mapper Service when the call they are making contains authentication information. The Endpoint Mapper Service on computers running Windows NT4 (all service packs) cannot process authentication information supplied in this manner. If you disable this policy setting, RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Endpoint Mapper Service on Windows NT4 Server. If you enable this policy setting, RPC clients will authenticate to the Endpoint Mapper Service for calls that contain authentication information. Clients making such calls will not be able to communicate with the Windows NT4 Server Endpoint Mapper Service. If you do not configure this policy setting, it remains disabled. RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Windows NT4 Server Endpoint Mapper Service. Note: This policy will not be applied until the system is rebooted.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Rpc\EnableAuthEpResolution<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical | |Enable Windows NTP Client<br /><sub>(CCE-37843-0)</sub> |**Description**: This policy setting specifies whether the Windows NTP Client is enabled. Enabling the Windows NTP Client allows your computer to synchronize its computer clock with other NTP servers. You might want to disable this service if you decide to use a third-party time provider. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\Enabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Encryption Oracle Remediation for CredSSP protocol<br /><sub>(AZ-WIN-201910)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters\AllowEncryptionOracle<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Configure registry policy processing: Do not apply during periodic background processing' is set to 'Enabled: FALSE'<br /><sub>(CCE-36169-1)</sub> |**Description**: The "Do not apply during periodic background processing" option prevents the system from updating affected policies in the background while the computer is in use. When background updates are disabled, policy changes will not take effect until the next user logon or system restart. The recommended state for this setting is: `Enabled: FALSE` (unchecked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoBackgroundPolicy<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Configure registry policy processing: Process even if the Group Policy objects have not changed' is set to 'Enabled: TRUE'<br /><sub>(CCE-36169-1a)</sub> |**Description**: The "Process even if the Group Policy objects have not changed" option updates and reapplies policies even if the policies have not changed. The recommended state for this setting is: `Enabled: TRUE` (checked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoGPOListChanges<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Continue experiences on this device' is set to 'Disabled'<br /><sub>(AZ-WIN-00170)</sub> |**Description**: This policy setting determines whether the Windows device is allowed to participate in cross-device experiences (continue experiences). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableCdp<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Enumerate local users on domain-joined computers<br /><sub>(AZ_WIN_202204)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnumerateLocalUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Include command line in process creation events<br /><sub>(CCE-36925-6)</sub> |**Description**: This policy setting determines what information is logged in security audit events when a new process has been created. This setting only applies when the Audit Process Creation policy is enabled. If you enable this policy setting the command line information for every process will be logged in plain text in the security event log as part of the Audit Process Creation event 4688, "a new process has been created," on the workstations and servers on which this policy setting is applied. If you disable or do not configure this policy setting, the process's command line information will not be included in Audit Process Creation events. Default: Not configured Note: When this policy setting is enabled, any user with access to read the security events will be able to read the command line arguments for any successfully created process. Command line arguments can contain sensitive or private information such as passwords or user data.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit\ProcessCreationIncludeCmdLine_Enabled<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Prevent device metadata retrieval from the Internet<br /><sub>(AZ-WIN-202251)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Device Metadata\PreventDeviceMetadataFromNetwork<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|Remote host allows delegation of non-exportable credentials<br /><sub>(AZ-WIN-20199)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation\AllowProtectedCreds<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Turn off app notifications on the lock screen<br /><sub>(CCE-35893-7)</sub> |**Description**: This policy setting allows you to prevent app notifications from appearing on the lock screen. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DisableLockScreenAppNotifications<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn off downloading of print drivers over HTTP<br /><sub>(CCE-36625-2)</sub> |**Description**: This policy setting controls whether the computer can download print driver packages over HTTP. To set up HTTP printing, printer drivers that are not available in the standard operating system installation might need to be downloaded over HTTP. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Printers\DisableWebPnPDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off background refresh of Group Policy<br /><sub>(CCE-14437-8)</sub> |<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\DisableBkGndGroupPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off downloading of print drivers over HTTP<br /><sub>(CCE-36625-2)</sub> |**Description**: This policy setting controls whether the computer can download print driver packages over HTTP. To set up HTTP printing, printer drivers that are not available in the standard operating system installation might need to be downloaded over HTTP. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Printers\DisableWebPnPDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Turn off Internet Connection Wizard if URL connection is referring to Microsoft.com<br /><sub>(CCE-37163-3)</sub> |**Description**: This policy setting specifies whether the Internet Connection Wizard can connect to Microsoft to download a list of Internet Service Providers (ISPs). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Internet Connection Wizard\ExitOnMSICW<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning | |Turn on convenience PIN sign-in<br /><sub>(CCE-37528-7)</sub> |**Description**: This policy setting allows you to control whether a domain user can sign in using a convenience PIN. In Windows 10, convenience PIN was replaced with Passport, which has stronger security properties. To configure Passport for domain users, use the policies under Computer configuration\\Administrative Templates\\Windows Components\\Microsoft Passport for Work. **Note:** The user's domain password will be cached in the system vault when using this feature. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\AllowDomainPINLogon<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+## Administrative Templates - Windows Component
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Turn off cloud consumer account state content<br /><sub>(AZ-WIN-202217)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CloudContent\DisableConsumerAccountStateContent<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+
+## Administrative Templates - Windows Components
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Do not allow drive redirection<br /><sub>(AZ-WIN-73569)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fDisableCdm<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn on PowerShell Transcription<br /><sub>(AZ-WIN-202208)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\PowerShell\Transcription\EnableTranscripting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+
+## Administrative Templates - Windows Security
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Prevent users from modifying settings<br /><sub>(AZ-WIN-202209)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender Security Center\App and Browser protection\DisallowExploitProtectionOverride<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+
+## Administrative Template - Windows Defender
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Configure Attack Surface Reduction rules<br /><sub>(AZ_WIN_202205)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Windows Defender Exploit Guard\ASR\ExploitGuard_ASR_Rules<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Prevent users and apps from accessing dangerous websites<br /><sub>(AZ_WIN_202207)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Windows Defender Exploit Guard\Network Protection\EnableNetworkProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+
+## Audit Computer Account Management
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Computer Account Management<br /><sub>(CCE-38004-8)</sub> |<br />**Key Path**: {0CCE9236-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\= Success<br /><sub>(Audit)</sub> |Critical |
+
+## Secured Core
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Enable boot DMA protection<br /><sub>(AZ-WIN-202250)</sub> |<br />**Key Path**: BootDMAProtection<br />**OS**: Ex= [WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(OsConfig)</sub> |Critical |
+|Enable hypervisor enforced code integrity<br /><sub>(AZ-WIN-202246)</sub> |<br />**Key Path**: HypervisorEnforcedCodeIntegrityStatus<br />**OS**: Ex= [WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
+|Enable secure boot<br /><sub>(AZ-WIN-202248)</sub> |<br />**Key Path**: SecureBootState<br />**OS**: Ex= [WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(OsConfig)</sub> |Critical |
+|Enable system guard<br /><sub>(AZ-WIN-202247)</sub> |<br />**Key Path**: SystemGuardStatus<br />**OS**: Ex= [WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
+|Enable virtualization based security<br /><sub>(AZ-WIN-202245)</sub> |<br />**Key Path**: VirtualizationBasedSecurityStatus<br />**OS**: Ex= [WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
+|Set TPM version<br /><sub>(AZ-WIN-202249)</sub> |<br />**Key Path**: TPMVersion<br />**OS**: Ex= [WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | 2.0<br /><sub>(OsConfig)</sub> |Critical |
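
The Secured-core rows are evaluated through OsConfig rather than the registry, and the names above are OsConfig identifiers. As a rough, hand-run way to inspect related platform state (illustrative only; the WMI property names used here do not map one-to-one to the OsConfig names in the table), the `Win32_DeviceGuard` class and the Secure Boot cmdlet expose similar information:

```python
import subprocess

# Read virtualization-based security and Secure Boot state (read-only).
# Both commands generally require an elevated PowerShell session, and
# Confirm-SecureBootUEFI only works on UEFI systems.
ps = (
    "$dg = Get-CimInstance -Namespace root\\Microsoft\\Windows\\DeviceGuard "
    "-ClassName Win32_DeviceGuard; "
    "'VBS status: ' + $dg.VirtualizationBasedSecurityStatus; "
    "'Services running: ' + ($dg.SecurityServicesRunning -join ','); "
    "'Secure Boot: ' + (Confirm-SecureBootUEFI)"
)
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```
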
+## Security Options - Accounts
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Accounts: Block Microsoft accounts<br /><sub>(AZ-WIN-202201)</sub> |<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\NoConnectedUser<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 3<br /><sub>(Registry)</sub> |Warning |
|Accounts: Guest account status<br /><sub>(CCE-37432-2)</sub> |**Description**: This policy setting determines whether the Guest account is enabled or disabled. The Guest account allows unauthenticated network users to gain access to the system. The recommended state for this setting is: `Disabled`. **Note:** This setting will have no impact when applied to the domain controller organizational unit via group policy because domain controllers have no local account database. It can be configured at the domain level via group policy, similar to account lockout and password policy settings.<br />**Key Path**: [System Access]EnableGuestAccount<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
|Accounts: Limit local account use of blank passwords to console logon only<br /><sub>(CCE-37615-2)</sub> |**Description**: This policy setting determines whether local accounts that are not password protected can be used to log on from locations other than the physical computer console. If you enable this policy setting, local accounts that have blank passwords will not be able to log on to the network from remote client computers. Such accounts will only be able to log on at the keyboard of the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Network access: Allow anonymous SID/Name translation<br /><sub>(CCE-10024-8)</sub> |<br />**Key Path**: [System Access]LSAAnonymousNameLookup<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Policy)</sub> |Warning |
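
The registry-backed rows in this table can be read directly; the [System Access] rows (Guest account status, anonymous SID/Name translation) are local security policy entries that are exported with `secedit /export /cfg <file>` or viewed in the Local Security Policy console instead. A brief, illustrative sketch for the registry-backed values:

```python
import winreg

def read_hklm(subkey, name):
    """Return an HKLM registry value, or None if the key or value is missing."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            return winreg.QueryValueEx(key, name)[0]
    except FileNotFoundError:
        return None

# "Accounts: Block Microsoft accounts" expects 3; "Limit local account use of
# blank passwords" accepts a missing value or 1, per the rows above.
no_connected_user = read_hklm(
    r"Software\Microsoft\Windows\CurrentVersion\Policies\System", "NoConnectedUser")
limit_blank = read_hklm(r"SYSTEM\CurrentControlSet\Control\Lsa", "LimitBlankPasswordUse")

print("Block Microsoft accounts:", "OK" if no_connected_user == 3 else no_connected_user)
print("Limit blank passwords:", "OK" if limit_blank in (None, 1) else limit_blank)
```
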
## Security Options - Audit
For more information, see [Azure Policy guest configuration](../../machine-confi
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Devices: Allow undock without having to log on<br /><sub>(AZ-WIN-00120)</sub> |**Description**: This policy setting determines whether a portable computer can be undocked if the user does not log on to the system. Enable this policy setting to eliminate a Logon requirement and allow use of an external hardware eject button to undock the computer. If you disable this policy setting, a user must log on and have been assigned the Remove computer from docking station user right to undock the computer.<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\UndockWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
|Devices: Allowed to format and eject removable media<br /><sub>(CCE-37701-0)</sub> |**Description**: This policy setting determines who is allowed to format and eject removable media. You can use this policy setting to prevent unauthorized users from removing data on one computer to access it on another computer on which they have local administrator privileges.<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AllocateDASD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Devices: Prevent users from installing printer drivers<br /><sub>(CCE-37942-0)</sub> |**Description**: For a computer to print to a shared printer, the driver for that shared printer must be installed on the local computer. This security setting determines who is allowed to install a printer driver as part of connecting to a shared printer. The recommended state for this setting is: `Enabled`. **Note:** This setting does not affect the ability to add a local printer. This setting does not affect Administrators.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Print\Providers\LanMan Print Services\Servers\AddPrinterDrivers<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Limits print driver installation to Administrators<br /><sub>(AZ_WIN_202202)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows NT\Printers\PointAndPrint\RestrictDriverInstallationToAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+
+## Security Options - Domain member
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Ensure 'Domain member: Digitally encrypt or sign secure channel data (always)' is set to 'Enabled'<br /><sub>(CCE-36142-8)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\RequireSignOrSeal<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Domain member: Digitally encrypt secure channel data (when possible)' is set to 'Enabled'<br /><sub>(CCE-37130-2)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\SealSecureChannel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Domain member: Digitally sign secure channel data (when possible)' is set to 'Enabled'<br /><sub>(CCE-37222-7)</sub> |**Description**: <p><span>This policy setting determines whether a domain member should attempt to negotiate whether all secure channel traffic that it initiates must be digitally signed. Digital signatures protect the traffic from being modified by anyone who captures the data as it traverses the network. The recommended state for this setting is: 'Enabled'.</span></p><br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\SignSecureChannel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Domain member: Disable machine account password changes' is set to 'Disabled'<br /><sub>(CCE-37508-9)</sub> |**Description**: <p><span>This policy setting determines whether a domain member can periodically change its computer account password. Computers that cannot automatically change their account passwords are potentially vulnerable, because an attacker might be able to determine the password for the system's domain account. The recommended state for this setting is: 'Disabled'.</span></p><br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\DisablePasswordChange<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Domain member: Maximum machine account password age' is set to '30 or fewer days, but not 0'<br /><sub>(CCE-37431-4)</sub> |**Description**: This policy setting determines the maximum allowable age for a computer account password. By default, domain members automatically change their domain passwords every 30 days. If you increase this interval significantly so that the computers no longer change their passwords, an attacker would have more time to undertake a brute force attack against one of the computer accounts. The recommended state for this setting is: `30 or fewer days, but not 0`. **Note:** A value of `0` does not conform to the benchmark as it disables maximum password age.<br />**Key Path**: System\CurrentControlSet\Services\Netlogon\Parameters\MaximumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |In 1-30<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Domain member: Require strong (Windows 2000 or later) session key' is set to 'Enabled'<br /><sub>(CCE-37614-5)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\RequireStrongKey<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
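
All of the rows above live under the same Netlogon `Parameters` key. Several accept "Doesn't exist or = 1", meaning a missing value counts as compliant, while the machine account password age is a range check. An illustrative, read-only sketch:

```python
import winreg

NETLOGON = r"SYSTEM\CurrentControlSet\Services\Netlogon\Parameters"

def read(name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NETLOGON) as key:
            return winreg.QueryValueEx(key, name)[0]
    except FileNotFoundError:
        return None

# "Doesn't exist or = 1": a missing value is treated as compliant.
for name in ("RequireSignOrSeal", "SealSecureChannel", "SignSecureChannel", "RequireStrongKey"):
    value = read(name)
    print(name, "OK" if value in (None, 1) else f"non-compliant ({value!r})")

# "Doesn't exist or = 0" for DisablePasswordChange.
dpc = read("DisablePasswordChange")
print("DisablePasswordChange", "OK" if dpc in (None, 0) else f"non-compliant ({dpc!r})")

# MaximumPasswordAge must fall in the 1-30 day range.
age = read("MaximumPasswordAge")
print("MaximumPasswordAge", "OK" if age is not None and 1 <= age <= 30 else f"non-compliant ({age!r})")
```
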
## Security Options - Interactive Logon

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
+|Caching of logon credentials must be limited<br /><sub>(AZ-WIN-73651)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\CachedLogonsCount<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-4<br /><sub>(Registry)</sub> |Informational |
|Interactive logon: Do not display last user name<br /><sub>(CCE-36056-0)</sub> |**Description**: This policy setting determines whether the account name of the last user to log on to the client computers in your organization will be displayed in each computer's respective Windows logon screen. Enable this policy setting to prevent intruders from collecting account names visually from the screens of desktop or laptop computers in your organization. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DontDisplayLastUserName<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Interactive logon: Do not require CTRL+ALT+DEL<br /><sub>(CCE-37637-6)</sub> |**Description**: This policy setting determines whether users must press CTRL+ALT+DEL before they log on. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableCAD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Interactive logon: Machine inactivity limit<br /><sub>(AZ-WIN-73645)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\InactivityTimeoutSecs<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-900<br /><sub>(Registry)</sub> |Important |
+|Interactive logon: Message text for users attempting to log on<br /><sub>(AZ-WIN-202253)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LegalNoticeText<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | <br /><sub>(Registry)</sub> |Warning |
+|Interactive logon: Message title for users attempting to log on<br /><sub>(AZ-WIN-202254)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LegalNoticeCaption<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | <br /><sub>(Registry)</sub> |Warning |
+|Interactive logon: Prompt user to change password before expiration<br /><sub>(CCE-10930-6)</sub> |<br />**Key Path**: Software\Microsoft\Windows NT\CurrentVersion\Winlogon\PasswordExpiryWarning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 5-14<br /><sub>(Registry)</sub> |Informational |
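
Two of the rows above use range checks ("In 1-4", "In 1-900") rather than exact values, and CachedLogonsCount is typically stored as a string (REG_SZ), so it has to be coerced before comparing. An illustrative sketch:

```python
import winreg

def read_hklm(subkey, name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            return winreg.QueryValueEx(key, name)[0]
    except FileNotFoundError:
        return None

cached = read_hklm(r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon", "CachedLogonsCount")
inactivity = read_hklm(r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
                       "InactivityTimeoutSecs")

# CachedLogonsCount is usually REG_SZ; coerce to int before the range check.
cached_n = int(cached) if cached is not None else None
print("CachedLogonsCount:", "OK" if cached_n is not None and 1 <= cached_n <= 4 else cached)
print("InactivityTimeoutSecs:", "OK" if inactivity is not None and 1 <= inactivity <= 900 else inactivity)
```
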
## Security Options - Microsoft Network Client
For more information, see [Azure Policy guest configuration](../../machine-confi
|Microsoft network server: Digitally sign communications (always)<br /><sub>(CCE-37864-6)</sub> |**Description**: This policy setting determines whether packet signing is required by the SMB server component. Enable this policy setting in a mixed environment to prevent downstream clients from using the workstation as a network server. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\RequireSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Microsoft network server: Digitally sign communications (if client agrees)<br /><sub>(CCE-35988-5)</sub> |**Description**: This policy setting determines whether the SMB server will negotiate SMB packet signing with clients that request it. If no signing request comes from the client, a connection will be allowed without a signature if the **Microsoft network server: Digitally sign communications (always)** setting is not enabled. **Note:** Enable this policy setting on SMB clients on your network to make them fully effective for packet signing with all clients and servers in your environment. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Microsoft network server: Disconnect clients when logon hours expire<br /><sub>(CCE-37972-7)</sub> |**Description**: This security setting determines whether to disconnect users who are connected to the local computer outside their user account's valid logon hours. This setting affects the Server Message Block (SMB) component. If you enable this policy setting you should also enable **Network security: Force logoff when logon hours expire** (Rule 2.3.11.6). If your organization configures logon hours for users, this policy setting is necessary to ensure they are effective. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableForcedLogoff<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network server: Server SPN target name validation level<br /><sub>(CCE-10617-9)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\LanManServer\Parameters\SMBServerNameHardeningLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
## Security Options - Microsoft Network Server
For more information, see [Azure Policy guest configuration](../../machine-confi
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
+|Accounts: Rename administrator account<br /><sub>(CCE-10976-9)</sub> |<br />**Key Path**: [System Access]NewAdministratorName<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | Administrator<br /><sub>(Policy)</sub> |Warning |
|Network access: Do not allow anonymous enumeration of SAM accounts<br /><sub>(CCE-36316-8)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate the accounts in the Security Accounts Manager (SAM). If you enable this policy setting, users with anonymous connections will not be able to enumerate domain account user names on the systems in your environment. This policy setting also allows additional restrictions on anonymous connections. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymousSAM<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Network access: Do not allow anonymous enumeration of SAM accounts and shares<br /><sub>(CCE-36077-6)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate SAM accounts as well as shares. If you enable this policy setting, anonymous users will not be able to enumerate domain account user names and network share names on the systems in your environment. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Network access: Let Everyone permissions apply to anonymous users<br /><sub>(CCE-36148-5)</sub> |**Description**: This policy setting determines what additional permissions are assigned for anonymous connections to the computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\EveryoneIncludesAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Network security: Minimum session security for NTLM SSP based (including secure RPC) clients<br /><sub>(CCE-37553-5)</sub> |**Description**: This policy setting determines which behaviors are allowed by clients for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead requires certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinClientSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
|Network security: Minimum session security for NTLM SSP based (including secure RPC) servers<br /><sub>(CCE-37835-6)</sub> |**Description**: This policy setting determines which behaviors are allowed by servers for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead requires certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinServerSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
-## Security Options - Recovery console
+## Security Options - Shutdown
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Recovery console: Allow floppy copy and access to all drives and all folders<br /><sub>(AZ-WIN-00180)</sub> |**Description**: This policy setting makes the Recovery Console SET command available, which allows you to set the following recovery console environment variables: • AllowWildCards. Enables wildcard support for some commands (such as the DEL command). • AllowAllPaths. Allows access to all files and folders on the computer. • AllowRemovableMedia. Allows files to be copied to removable media, such as a floppy disk. • NoCopyPrompt. Does not prompt when overwriting an existing file.<br />**Key Path**: Software\Microsoft\Windows NT\CurrentVersion\Setup\RecoveryConsole\setcommand<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Shutdown: Allow system to be shut down without having to log on<br /><sub>(CCE-36788-8)</sub> |**Description**: This policy setting determines whether a computer can be shut down when a user is not logged on. If this policy setting is enabled, the shutdown command is available on the Windows logon screen. It is recommended to disable this policy setting to restrict the ability to shut down the computer to users with credentials on the system. The recommended state for this setting is: `Disabled`. **Note:** In Server 2008 R2 and older versions, this setting had no impact on Remote Desktop (RDP) / Terminal Services sessions - it only affected the local console. However, Microsoft changed the behavior in Windows Server 2012 (non-R2) and above, where if set to Enabled, RDP sessions are also allowed to shut down or restart the server.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ShutdownWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Shutdown: Clear virtual memory pagefile<br /><sub>(AZ-WIN-00181)</sub> |**Description**: This policy setting determines whether the virtual memory pagefile is cleared when the system is shut down. When this policy setting is enabled, the system pagefile is cleared each time that the system shuts down properly. If you enable this security setting, the hibernation file (Hiberfil.sys) is zeroed out when hibernation is disabled on a portable computer system. It will take longer to shut down and restart the computer, and will be especially noticeable on computers with large paging files.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-## Security Options - Shutdown
+## Security Options - System cryptography
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Shutdown: Allow system to be shut down without having to log on<br /><sub>(CCE-36788-8)</sub> |**Description**: This policy setting determines whether a computer can be shut down when a user is not logged on. If this policy setting is enabled, the shutdown command is available on the Windows logon screen. It is recommended to disable this policy setting to restrict the ability to shut down the computer to users with credentials on the system. The recommended state for this setting is: `Disabled`. **Note:** In Server 2008 R2 and older versions, this setting had no impact on Remote Desktop (RDP) / Terminal Services sessions - it only affected the local console. However, Microsoft changed the behavior in Windows Server 2012 (non-R2) and above, where if set to Enabled, RDP sessions are also allowed to shut down or restart the server.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ShutdownWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Shutdown: Clear virtual memory pagefile<br /><sub>(AZ-WIN-00181)</sub> |**Description**: This policy setting determines whether the virtual memory pagefile is cleared when the system is shut down. When this policy setting is enabled, the system pagefile is cleared each time that the system shuts down properly. If you enable this security setting, the hibernation file (Hiberfil.sys) is zeroed out when hibernation is disabled on a portable computer system. It will take longer to shut down and restart the computer, and will be especially noticeable on computers with large paging files.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Users must be required to enter a password to access private keys stored on the computer.<br /><sub>(AZ-WIN-73699)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Cryptography\ForceKeyProtection<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 2<br /><sub>(Registry)</sub> |Important |
+|Windows Server must be configured to use FIPS-compliant algorithms for encryption, hashing, and signing.<br /><sub>(AZ-WIN-73701)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Important |
## Security Options - System objects

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|System objects: Require case insensitivity for non-Windows subsystems<br /><sub>(CCE-37885-1)</sub> |**Description**: This policy setting determines whether case insensitivity is enforced for all subsystems. The Microsoft Win32 subsystem is case insensitive. However, the kernel supports case sensitivity for other subsystems, such as the Portable Operating System Interface for UNIX (POSIX). Because Windows is case insensitive (but the POSIX subsystem will support case sensitivity), failure to enforce this policy setting makes it possible for a user of the POSIX subsystem to create a file with the same name as another file by using mixed case to label it. Such a situation can block access to these files by another user who uses typical Win32 tools, because only one of the files will be available. The recommended state for this setting is: `Enabled`.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Kernel\ObCaseInsensitive<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|System objects: Strengthen default permissions of internal system objects (e.g. Symbolic Links)<br /><sub>(CCE-37644-2)</sub> |**Description**: This policy setting determines the strength of the default discretionary access control list (DACL) for objects. Active Directory maintains a global list of shared system resources, such as DOS device names, mutexes, and semaphores. In this way, objects can be located and shared among processes. Each type of object is created with a default DACL that specifies who can access the objects and what permissions are granted. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\ProtectionMode<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|System objects: Strengthen default permissions of internal system objects (e.g. Symbolic Links)<br /><sub>(CCE-37644-2)</sub> |**Description**: This policy setting determines the strength of the default discretionary access control list (DACL) for objects. Active Directory maintains a global list of shared system resources, such as DOS device names, mutexes, and semaphores. In this way, objects can be located and shared among processes. Each type of object is created with a default DACL that specifies who can access the objects and what permissions are granted. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\ProtectionMode<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
## Security Options - System settings
For more information, see [Azure Policy guest configuration](../../machine-confi
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|User Account Control: Admin Approval Mode for the Built-in Administrator account<br /><sub>(CCE-36494-3)</sub> |**Description**: This policy setting controls the behavior of Admin Approval Mode for the built-in Administrator account. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\FilterAdministratorToken<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Allow UIAccess applications to prompt for elevation without using the secure desktop<br /><sub>(CCE-36863-9)</sub> |**Description**: This policy setting controls whether User Interface Accessibility (UIAccess or UIA) programs can automatically disable the secure desktop for elevation prompts used by a standard user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableUIADesktopToggle<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Allow UIAccess applications to prompt for elevation without using the secure desktop<br /><sub>(CCE-36863-9)</sub> |**Description**: This policy setting controls whether User Interface Accessibility (UIAccess or UIA) programs can automatically disable the secure desktop for elevation prompts used by a standard user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableUIADesktopToggle<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode<br /><sub>(CCE-37029-6)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for administrators. The recommended state for this setting is: `Prompt for consent on the secure desktop`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorAdmin<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 2<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Behavior of the elevation prompt for standard users<br /><sub>(CCE-36864-7)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for standard users. The recommended state for this setting is: `Automatically deny elevation requests`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorUser<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Detect application installations and prompt for elevation<br /><sub>(CCE-36533-8)</sub> |**Description**: This policy setting controls the behavior of application installation detection for the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableInstallerDetection<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Only elevate UIAccess applications that are installed in secure locations<br /><sub>(CCE-37057-7)</sub> |**Description**: This policy setting controls whether applications that request to run with a User Interface Accessibility (UIAccess) integrity level must reside in a secure location in the file system. Secure locations are limited to the following: - `…\Program Files\`, including subfolders - `…\Windows\system32\` - `…\Program Files (x86)\`, including subfolders for 64-bit versions of Windows **Note:** Windows enforces a public key infrastructure (PKI) signature check on any interactive application that requests to run with a UIAccess integrity level regardless of the state of this security setting. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableSecureUIAPaths<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Run all administrators in Admin Approval Mode<br /><sub>(CCE-36869-6)</sub> |**Description**: This policy setting controls the behavior of all User Account Control (UAC) policy settings for the computer. If you change this policy setting, you must restart your computer. The recommended state for this setting is: `Enabled`. **Note:** If this policy setting is the Security Center notifies you that the overall security of the operating system has been reduced.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLUA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Switch to the secure desktop when prompting for elevation<br /><sub>(CCE-36866-2)</sub> |**Description**: This policy setting controls whether the elevation request prompt is displayed on the interactive user's desktop or the secure desktop. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\PromptOnSecureDesktop<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Virtualize file and registry write failures to per-user locations<br /><sub>(CCE-37064-3)</sub> |**Description**: This policy setting controls whether application write failures are redirected to defined registry and file system locations. This policy setting mitigates applications that run as administrator and write run-time application data to: - `%ProgramFiles%`, - `%Windir%`, - `%Windir%\system32`, or - `HKEY_LOCAL_MACHINE\Software`. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableVirtualization<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Only elevate UIAccess applications that are installed in secure locations<br /><sub>(CCE-37057-7)</sub> |**Description**: This policy setting controls whether applications that request to run with a User Interface Accessibility (UIAccess) integrity level must reside in a secure location in the file system. Secure locations are limited to the following: - `…\Program Files\`, including subfolders - `…\Windows\system32\` - `…\Program Files (x86)\`, including subfolders for 64-bit versions of Windows **Note:** Windows enforces a public key infrastructure (PKI) signature check on any interactive application that requests to run with a UIAccess integrity level regardless of the state of this security setting. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableSecureUIAPaths<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Run all administrators in Admin Approval Mode<br /><sub>(CCE-36869-6)</sub> |**Description**: This policy setting controls the behavior of all User Account Control (UAC) policy settings for the computer. If you change this policy setting, you must restart your computer. The recommended state for this setting is: `Enabled`. **Note:** If this policy setting is disabled, the Security Center notifies you that the overall security of the operating system has been reduced.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLUA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Switch to the secure desktop when prompting for elevation<br /><sub>(CCE-36866-2)</sub> |**Description**: This policy setting controls whether the elevation request prompt is displayed on the interactive user's desktop or the secure desktop. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\PromptOnSecureDesktop<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Virtualize file and registry write failures to per-user locations<br /><sub>(CCE-37064-3)</sub> |**Description**: This policy setting controls whether application write failures are redirected to defined registry and file system locations. This policy setting mitigates applications that run as administrator and write run-time application data to: - `%ProgramFiles%`, - `%Windir%`, - `%Windir%\system32`, or - `HKEY_LOCAL_MACHINE\Software`. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableVirtualization<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
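
All of the User Account Control rows above share the same key path, so they can be reviewed in one pass. This is an illustrative sketch only; the expected values are copied from the rows above.

```python
import winreg

UAC_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

# (value name, expected value) pairs taken from the UAC rows above.
CHECKS = [
    ("FilterAdministratorToken", 1),
    ("EnableUIADesktopToggle", 0),
    ("ConsentPromptBehaviorAdmin", 2),
    ("ConsentPromptBehaviorUser", 0),
    ("EnableInstallerDetection", 1),
    ("EnableSecureUIAPaths", 1),
    ("EnableLUA", 1),
    ("PromptOnSecureDesktop", 1),
    ("EnableVirtualization", 1),
]

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UAC_KEY) as key:
    for name, expected in CHECKS:
        try:
            actual = winreg.QueryValueEx(key, name)[0]
        except FileNotFoundError:
            actual = None
        status = "OK" if actual == expected else f"found {actual!r}, expected {expected}"
        print(f"{name}: {status}")
```
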
## Security Settings - Account Policies

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
+|Account lockout threshold.<br /><sub>(AZ-WIN-73311)</sub> |<br />**Key Path**: [System Access]LockoutBadCount<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-3<br /><sub>(Policy)</sub> |Important |
|Enforce password history<br /><sub>(CCE-37166-6)</sub> |**Description**: <p><span>This policy setting determines the number of renewed, unique passwords that have to be associated with a user account before you can reuse an old password. The value for this policy setting must be between 0 and 24 passwords. The default value for Windows Vista is 0 passwords, but the default setting in a domain is 24 passwords. To maintain the effectiveness of this policy setting, use the Minimum password age setting to prevent users from repeatedly changing their password. The recommended state for this setting is: '24 or more password(s)'.</span></p><br />**Key Path**: [System Access]PasswordHistorySize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 24<br /><sub>(Policy)</sub> |Critical |
|Maximum password age<br /><sub>(CCE-37167-4)</sub> |**Description**: This policy setting defines how long a user can use their password before it expires. Values for this policy setting range from 0 to 999 days. If you set the value to 0, the password will never expire. Because attackers can crack passwords, the more frequently you change the password the less opportunity an attacker has to use a cracked password. However, the lower this value is set, the higher the potential for an increase in calls to help desk support due to users having to change their password or forgetting which password is current. The recommended state for this setting is `60 or fewer days, but not 0`.<br />**Key Path**: [System Access]MaximumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-70<br /><sub>(Policy)</sub> |Critical |
|Minimum password age<br /><sub>(CCE-37073-4)</sub> |**Description**: This policy setting determines the number of days that you must use a password before you can change it. The range of values for this policy setting is between 1 and 999 days. (You may also set the value to 0 to allow immediate password changes.) The default value for this setting is 0 days. The recommended state for this setting is: `1 or more day(s)`.<br />**Key Path**: [System Access]MinimumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 1<br /><sub>(Policy)</sub> |Critical |
|Minimum password length<br /><sub>(CCE-36534-6)</sub> |**Description**: This policy setting determines the least number of characters that make up a password for a user account. There are many different theories about how to determine the best password length for an organization, but perhaps "pass phrase" is a better term than "password." In Microsoft Windows 2000 or later, pass phrases can be quite long and can include spaces. Therefore, a phrase such as "I want to drink a $5 milkshake" is a valid pass phrase; it is a considerably stronger password than an 8 or 10 character string of random numbers and letters, and yet is easier to remember. Users must be educated about the proper selection and maintenance of passwords, especially with regard to password length. In enterprise environments, the ideal value for the Minimum password length setting is 14 characters; however, you should adjust this value to meet your organization's business requirements. The recommended state for this setting is: `14 or more character(s)`.<br />**Key Path**: [System Access]MinimumPasswordLength<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 14<br /><sub>(Policy)</sub> |Critical |
|Password must meet complexity requirements<br /><sub>(CCE-37063-5)</sub> |**Description**: This policy setting checks all new passwords to ensure that they meet basic requirements for strong passwords. When this policy is enabled, passwords must meet the following minimum requirements: - Does not contain the user's account name or parts of the user's full name that exceed two consecutive characters - Be at least six characters in length - Contain characters from three of the following four categories: - English uppercase characters (A through Z) - English lowercase characters (a through z) - Base 10 digits (0 through 9) - Non-alphabetic characters (for example, !, $, #, %) - A catch-all category of any Unicode character that does not fall under the previous four categories. This fifth category can be regionally specific. Each additional character in a password increases its complexity exponentially. For instance, a seven-character, all lower-case alphabetic password would have 26^7 (approximately 8 x 10^9 or 8 billion) possible combinations. At 1,000,000 attempts per second (a capability of many password-cracking utilities), it would only take 133 minutes to crack. A seven-character alphabetic password with case sensitivity has 52^7 combinations. A seven-character case-sensitive alphanumeric password without punctuation has 62^7 combinations. An eight-character password has 26^8 (or 2 x 10^11) possible combinations. Although this might seem to be a large number, at 1,000,000 attempts per second it would take only 59 hours to try all possible passwords. Remember, these times will significantly increase for passwords that use ALT characters and other special keyboard characters such as "!" or "@". Proper use of the password settings can help make it difficult to mount a brute force attack. The recommended state for this setting is: `Enabled`.<br />**Key Path**: [System Access]PasswordComplexity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= true<br /><sub>(Policy)</sub> |Critical |
+|Reset account lockout counter.<br /><sub>(AZ-WIN-73309)</sub> |<br />**Key Path**: [System Access]ResetLockoutCount<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 15<br /><sub>(Policy)</sub> |Important |
|Store passwords using reversible encryption<br /><sub>(CCE-36286-3)</sub> |**Description**: This policy setting determines whether the operating system stores passwords in a way that uses reversible encryption, which provides support for application protocols that require knowledge of the user's password for authentication purposes. Passwords that are stored with reversible encryption are essentially the same as plaintext versions of the passwords. The recommended state for this setting is: `Disabled`.<br />**Key Path**: [System Access]ClearTextPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
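
The [System Access] entries in this table are local security policy settings rather than registry values. One way to review the effective local values by hand (outside the baseline's own evaluation) is the built-in `net accounts` command, called here from Python for consistency with the other sketches:

```python
import subprocess

# List the effective local password and lockout policy (read-only) and compare
# the reported history, minimum/maximum age, minimum length, and lockout
# threshold against the expected values in the table above.
result = subprocess.run(["net", "accounts"], capture_output=True, text=True)
print(result.stdout or result.stderr)
```
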
+## Security Settings - Windows Firewall
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Windows Firewall: Domain: Allow unicast response<br /><sub>(AZ-WIN-00088)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.</span></p><p><span>We recommend setting this to ‘Yes’ for the Private and Domain profiles; this sets the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Domain: Firewall state<br /><sub>(CCE-36062-8)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Inbound connections<br /><sub>(AZ-WIN-202252)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Logging: Log dropped packets<br /><sub>(AZ-WIN-202226)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Domain: Logging: Log successful connections<br /><sub>(AZ-WIN-202227)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Domain: Logging: Name<br /><sub>(AZ-WIN-202224)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\domainfw.log<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Domain: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202225)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 16384<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Domain: Outbound connections<br /><sub>(CCE-36146-9)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. In Windows Vista, the default behavior is to allow connections unless there are firewall rules that block the connection.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Settings: Apply local connection security rules<br /><sub>(CCE-38040-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Settings: Apply local firewall rules<br /><sub>(CCE-37860-4)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, this will set the registry value to 1. </span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Settings: Display a notification<br /><sub>(CCE-38041-0)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, these pop-ups are not useful: users are not logged on interactively, so the notifications are unnecessary and can add confusion for the administrator.</span></p><p><span>Configure this policy setting to ‘No’; this sets the registry value to 1, and Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Allow unicast response<br /><sub>(AZ-WIN-00089)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.</span></p><p><span>We recommend setting this to ‘Yes’ for the Private and Domain profiles; this sets the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Firewall state<br /><sub>(CCE-38239-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Inbound connections<br /><sub>(AZ-WIN-202228)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Logging: Log dropped packets<br /><sub>(AZ-WIN-202231)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Private: Logging: Log successful connections<br /><sub>(AZ-WIN-202232)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Logging: Name<br /><sub>(AZ-WIN-202229)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\privatefw.log<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Private: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202230)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 16384<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Outbound connections<br /><sub>(CCE-38332-3)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Settings: Apply local connection security rules<br /><sub>(CCE-36063-6)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Settings: Apply local firewall rules<br /><sub>(CCE-37438-9)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Settings: Display a notification<br /><sub>(CCE-37621-0)</sub> |**Description**: <p><span>If you select this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, these pop-ups are not useful because users are not logged on interactively, and they can add confusion for the administrator.</span></p><p><span>Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall will then not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Public: Allow unicast response<br /><sub>(AZ-WIN-00090)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. This can be done by changing this setting to ‘No’, which sets the registry value to 1.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Public: Firewall state<br /><sub>(CCE-37862-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Inbound connections<br /><sub>(AZ-WIN-202234)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Logging: Log dropped packets<br /><sub>(AZ-WIN-202237)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Public: Logging: Log successful connections<br /><sub>(AZ-WIN-202233)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Public: Logging: Name<br /><sub>(AZ-WIN-202235)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\publicfw.log<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Public: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202236)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 16384<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Public: Outbound connections<br /><sub>(CCE-37434-8)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Apply local connection security rules<br /><sub>(CCE-36268-1)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Apply local firewall rules<br /><sub>(CCE-37861-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Display a notification<br /><sub>(CCE-38043-6)</sub> |**Description**: <p><span>If you select this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, these pop-ups are not useful because users are not logged on interactively, and they can add confusion for the administrator.</span></p><p><span>Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall will then not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
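
The firewall rows above all map to policy-backed registry values under `SOFTWARE\Policies\Microsoft\WindowsFirewall`. As a rough, illustrative sketch only (not part of the baseline tooling), the following Python snippet reads one such value with the standard `winreg` module; the key path and expected value of `1` are taken from the "Windows Firewall: Public: Firewall state" row, and the helper name is made up for this example.

```python
# Illustrative only: check one policy-backed firewall value listed above.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile"

def read_policy_dword(value_name):
    """Return the DWORD under the policy key, or None if the key/value is absent."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _value_type = winreg.QueryValueEx(key, value_name)
            return value
    except FileNotFoundError:
        return None

if __name__ == "__main__":
    state = read_policy_dword("EnableFirewall")
    # The "Windows Firewall: Public: Firewall state" row expects this to equal 1.
    print("compliant" if state == 1 else f"non-compliant (value={state})")
```
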
+## System Audit Policies - Account Logon
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Credential Validation<br /><sub>(CCE-37741-6)</sub> |**Description**: <p><span>This subcategory reports the results of validation tests on credentials submitted for a user account logon request. These events occur on the computer that is authoritative for the credentials. For domain accounts, the domain controller is authoritative, whereas for local accounts, the local computer is authoritative. In domain environments, most of the Account Logon events occur in the Security log of the domain controllers that are authoritative for the domain accounts. However, these events can occur on other computers in the organization when local accounts are used to log on. Events for this subcategory include: - 4774: An account was mapped for logon. - 4775: An account could not be mapped for logon. - 4776: The domain controller attempted to validate the credentials for an account. - 4777: The domain controller failed to validate the credentials for an account. The recommended state for this setting is: 'Success and Failure'.</span></p><br />**Key Path**: {0CCE923F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Kerberos Authentication Service<br /><sub>(AZ-WIN-00004)</sub> |<br />**Key Path**: {0CCE9242-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success and Failure<br /><sub>(Audit)</sub> |Critical |

## System Audit Policies - Account Management

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Audit Other Account Management Events<br /><sub>(CCE-37855-4)</sub> |**Description**: This subcategory reports other account management events. Events for this subcategory include: - 4782: The password hash of an account was accessed. - 4793: The Password Policy Checking API was called. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE923A-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Distribution Group Management<br /><sub>(CCE-36265-7)</sub> |<br />**Key Path**: {0CCE9238-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Other Account Management Events<br /><sub>(CCE-37855-4)</sub> |**Description**: This subcategory reports other account management events. Events for this subcategory include: - 4782: The password hash of an account was accessed. - 4793: The Password Policy Checking API was called. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE923A-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Security Group Management<br /><sub>(CCE-38034-5)</sub> |**Description**: This subcategory reports each event of security group management, such as when a security group is created, changed, or deleted or when a member is added to or removed from a security group. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of security group accounts. Events for this subcategory include: - 4727: A security-enabled global group was created. - 4728: A member was added to a security-enabled global group. - 4729: A member was removed from a security-enabled global group. - 4730: A security-enabled global group was deleted. - 4731: A security-enabled local group was created. - 4732: A member was added to a security-enabled local group. - 4733: A member was removed from a security-enabled local group. - 4734: A security-enabled local group was deleted. - 4735: A security-enabled local group was changed. - 4737: A security-enabled global group was changed. - 4754: A security-enabled universal group was created. - 4755: A security-enabled universal group was changed. - 4756: A member was added to a security-enabled universal group. - 4757: A member was removed from a security-enabled universal group. - 4758: A security-enabled universal group was deleted. - 4764: A group's type was changed. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9237-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit User Account Management<br /><sub>(CCE-37856-2)</sub> |**Description**: This subcategory reports each event of user account management, such as when a user account is created, changed, or deleted; a user account is renamed, or enabled; or a password is set or changed. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of user accounts. Events for this subcategory include: - 4720: A user account was created. - 4722: A user account was enabled. - 4723: An attempt was made to change an account's password. - 4724: An attempt was made to reset an account's password. - 4725: A user account was disabled. - 4726: A user account was deleted. - 4738: A user account was changed. - 4740: A user account was locked out. - 4765: SID History was added to an account. - 4766: An attempt to add SID History to an account failed. - 4767: A user account was unlocked. - 4780: The ACL was set on accounts which are members of administrators groups. - 4781: The name of an account was changed: - 4794: An attempt was made to set the Directory Services Restore Mode. - 5376: Credential Manager credentials were backed up. - 5377: Credential Manager credentials were restored from a backup. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9235-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit User Account Management<br /><sub>(CCE-37856-2)</sub> |**Description**: This subcategory reports each event of user account management, such as when a user account is created, changed, or deleted; a user account is renamed, disabled, or enabled; or a password is set or changed. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of user accounts. Events for this subcategory include: - 4720: A user account was created. - 4722: A user account was enabled. - 4723: An attempt was made to change an account's password. - 4724: An attempt was made to reset an account's password. - 4725: A user account was disabled. - 4726: A user account was deleted. - 4738: A user account was changed. - 4740: A user account was locked out. - 4765: SID History was added to an account. - 4766: An attempt to add SID History to an account failed. - 4767: A user account was unlocked. - 4780: The ACL was set on accounts which are members of administrators groups. - 4781: The name of an account was changed: - 4794: An attempt was made to set the Directory Services Restore Mode. - 5376: Credential Manager credentials were backed up. - 5377: Credential Manager credentials were restored from a backup. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9235-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |

## System Audit Policies - Detailed Tracking

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit PNP Activity<br /><sub>(AZ-WIN-00182)</sub> |**Description**: This policy setting allows you to audit when plug and play detects an external device. The recommended state for this setting is: `Success`. **Note:** A Windows 10, Server 2016 or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9248-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. Refer to Microsoft Knowledge Base article 947226: Description of security events in Windows Vista and in Windows Server 2008 for the most recent information about this setting. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. Refer to Microsoft Knowledge Base article 947226: [Description of security events in Windows Vista and in Windows Server 2008](https://support.microsoft.com/en-us/kb/947226) for the most recent information about this setting. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+
+## System Audit Policies - DS Access
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Directory Service Access<br /><sub>(CCE-37433-0)</sub> |<br />**Key Path**: {0CCE923B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Directory Service Changes<br /><sub>(CCE-37616-0)</sub> |<br />**Key Path**: {0CCE923C-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Directory Service Replication<br /><sub>(AZ-WIN-00093)</sub> |<br />**Key Path**: {0CCE923D-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= No Auditing<br /><sub>(Audit)</sub> |Critical |
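
The audit rows in these tables are keyed by subcategory GUID and checked against the effective audit policy rather than a registry value. As an illustrative sketch only (assuming an elevated prompt and an English-language system, since `auditpol` output is localized), the following Python snippet shells out to the built-in `auditpol.exe` to show the current setting for the "Directory Service Access" subcategory listed above.

```python
# Illustrative sketch: print the effective audit setting for one subcategory above.
import subprocess

def get_audit_setting(subcategory: str) -> str:
    # auditpol /get /subcategory:"<name>" prints the current No Auditing /
    # Success / Failure / Success and Failure state for that subcategory.
    result = subprocess.run(
        ["auditpol", "/get", f"/subcategory:{subcategory}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # The "Audit Directory Service Access" row expects at least Failure auditing.
    print(get_audit_setting("Directory Service Access"))
```
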
## System Audit Policies - Logon-Logoff

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Audit Account Lockout<br /><sub>(CCE-37133-6)</sub> |**Description**: This subcategory reports when a user's account is locked out as a result of too many failed logon attempts. Events for this subcategory include: - 4625: An account failed to log on. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9217-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Account Lockout<br /><sub>(CCE-37133-6)</sub> |**Description**: This subcategory reports when a user's account is locked out as a result of too many failed logon attempts. Events for this subcategory include: - 4625: An account failed to log on. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9217-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Group Membership<br /><sub>(AZ-WIN-00026)</sub> |**Description**: Audit Group Membership enables you to audit group memberships when they are enumerated on the client computer. This policy allows you to audit the group membership information in the user's logon token. Events in this subcategory are generated on the computer on which a logon session is created. For an interactive logon, the security audit event is generated on the computer that the user logged on to. For a network logon, such as accessing a shared folder on the network, the security audit event is generated on the computer hosting the resource. You must also enable the Audit Logon subcategory. Multiple events are generated if the group membership information cannot fit in a single security audit event. The events that are audited include the following: - 4627(S): Group membership information.<br />**Key Path**: {0CCE9249-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Logoff<br /><sub>(CCE-38237-4)</sub> |**Description**: <p><span>This subcategory reports when a user logs off from the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4634: An account was logged off. - 4647: User initiated logoff. The recommended state for this setting is: 'Success'.</span></p><br />**Key Path**: {0CCE9216-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Logon<br /><sub>(CCE-38036-0)</sub> |**Description**: <p><span>This subcategory reports when a user attempts to log on to the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4624: An account was successfully logged on. - 4625: An account failed to log on. - 4648: A logon was attempted using explicit credentials. - 4675: SIDs were filtered. The recommended state for this setting is: 'Success and Failure'.</span></p><br />**Key Path**: {0CCE9215-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
For more information, see [Azure Policy guest configuration](../../machine-confi
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
+|Audit Detailed File Share<br /><sub>(AZ-WIN-00100)</sub> |<br />**Key Path**: {0CCE9244-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit File Share<br /><sub>(AZ-WIN-00102)</sub> |<br />**Key Path**: {0CCE9224-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Other Object Access Events<br /><sub>(AZ-WIN-00113)</sub> |**Description**: This subcategory reports other object access-related events such as Task Scheduler jobs and COM+ objects. Events for this subcategory include: - 4671: An application attempted to access a blocked ordinal through the TBS. - 4691: Indirect access to an object was requested. - 4698: A scheduled task was created. - 4699: A scheduled task was deleted. - 4700: A scheduled task was enabled. - 4701: A scheduled task was disabled. - 4702: A scheduled task was updated. - 5888: An object in the COM+ Catalog was modified. - 5889: An object was deleted from the COM+ Catalog. - 5890: An object was added to the COM+ Catalog. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9227-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Removable Storage<br /><sub>(CCE-37617-8)</sub> |**Description**: This policy setting allows you to audit user attempts to access file system objects on a removable storage device. A security audit event is generated only for all objects for all types of access requested. If you configure this policy setting, an audit event is generated each time an account accesses a file system object on a removable storage. Success audits record successful attempts and Failure audits record unsuccessful attempts. If you do not configure this policy setting, no audit event is generated when an account accesses a file system object on a removable storage. The recommended state for this setting is: `Success and Failure`. **Note:** A Windows 8, Server 2012 (non-R2) or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9245-69AE-11D9-BED3-505054503030}<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
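
Rows such as "Audit Removable Storage" above expect both Success and Failure auditing. Purely as an illustrative sketch (assuming an elevated prompt; this is not the baseline's own remediation mechanism, which would normally be Group Policy), the snippet below uses the built-in `auditpol.exe` to turn that subcategory on locally.

```python
# Illustrative sketch: enable Success and Failure auditing for one subcategory above.
import subprocess

subprocess.run(
    ["auditpol", "/set", "/subcategory:Removable Storage",
     "/success:enable", "/failure:enable"],
    check=True,
)
```
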
For more information, see [Azure Policy guest configuration](../../machine-confi
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit Authentication Policy Change<br /><sub>(CCE-38327-3)</sub> |**Description**: This subcategory reports changes in authentication policy. Events for this subcategory include: - 4706: A new trust was created to a domain. - 4707: A trust to a domain was removed. - 4713: Kerberos policy was changed. - 4716: Trusted domain information was modified. - 4717: System security access was granted to an account. - 4718: System security access was removed from an account. - 4739: Domain Policy was changed. - 4864: A namespace collision was detected. - 4865: A trusted forest information entry was added. - 4866: A trusted forest information entry was removed. - 4867: A trusted forest information entry was modified. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9230-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Authorization Policy Change<br /><sub>(CCE-36320-0)</sub> |<br />**Key Path**: {0CCE9231-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit MPSSVC Rule-Level Policy Change<br /><sub>(AZ-WIN-00111)</sub> |**Description**: This subcategory reports changes in policy rules used by the Microsoft Protection Service (MPSSVC.exe). This service is used by Windows Firewall and by Microsoft OneCare. Events for this subcategory include: - 4944: The following policy was active when the Windows Firewall started. - 4945: A rule was listed when the Windows Firewall started. - 4946: A change has been made to Windows Firewall exception list. A rule was added. - 4947: A change has been made to Windows Firewall exception list. A rule was modified. - 4948: A change has been made to Windows Firewall exception list. A rule was deleted. - 4949: Windows Firewall settings were restored to the default values. - 4950: A Windows Firewall setting has changed. - 4951: A rule has been ignored because its major version number was not recognized by Windows Firewall. - 4952: Parts of a rule have been ignored because its minor version number was not recognized by Windows Firewall. The other parts of the rule will be enforced. - 4953: A rule has been ignored by Windows Firewall because it could not parse the rule. - 4954: Windows Firewall Group Policy settings have changed. The new settings have been applied. - 4956: Windows Firewall has changed the active profile. - 4957: Windows Firewall did not apply the following rule: - 4958: Windows Firewall did not apply the following rule because the rule referred to items not configured on this computer: Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9232-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Other Policy Change Events<br /><sub>(AZ-WIN-00114)</sub> |<br />**Key Path**: {0CCE9234-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Policy Change<br /><sub>(CCE-38028-7)</sub> |**Description**: This subcategory reports changes in audit policy including SACL changes. Events for this subcategory include: - 4715: The audit policy (SACL) on an object was changed. - 4719: System audit policy was changed. - 4902: The Per-user audit policy table was created. - 4904: An attempt was made to register a security event source. - 4905: An attempt was made to unregister a security event source. - 4906: The CrashOnAuditFail value has changed. - 4907: Auditing settings on object were changed. - 4908: Special Groups Logon table modified. - 4912: Per User Audit Policy was changed. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE922F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |

## System Audit Policies - Privilege Use
For more information, see [Azure Policy guest configuration](../../machine-confi
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
+|Audit IPsec Driver<br /><sub>(CCE-37853-9)</sub> |<br />**Key Path**: {0CCE9213-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Other System Events<br /><sub>(CCE-38030-3)</sub> |<br />**Key Path**: {0CCE9214-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Security State Change<br /><sub>(CCE-38114-5)</sub> |**Description**: This subcategory reports changes in security state of the system, such as when the security subsystem starts and stops. Events for this subcategory include: - 4608: Windows is starting up. - 4609: Windows is shutting down. - 4616: The system time was changed. - 4621: Administrator recovered system from CrashOnAuditFail. Users who are not administrators will now be allowed to log on. Some auditable activity might not have been recorded. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9210-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Security System Extension<br /><sub>(CCE-36144-4)</sub> |**Description**: This subcategory reports the loading of extension code such as authentication packages by the security subsystem. Events for this subcategory include: - 4610: An authentication package has been loaded by the Local Security Authority. - 4611: A trusted logon process has been registered with the Local Security Authority. - 4614: A notification package has been loaded by the Security Account Manager. - 4622: A security package has been loaded by the Local Security Authority. - 4697: A service was installed in the system. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9211-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit System Integrity<br /><sub>(CCE-37132-8)</sub> |**Description**: This subcategory reports on violations of integrity of the security subsystem. Events for this subcategory include: - 4612: Internal resources allocated for the queuing of audit messages have been exhausted, leading to the loss of some audits. - 4615: Invalid use of LPC port. - 4618: A monitored security event pattern has occurred. - 4816: RPC detected an integrity violation while decrypting an incoming message. - 5038: Code integrity determined that the image hash of a file is not valid. The file could be corrupt due to unauthorized modification or the invalid hash could indicate a potential disk device error. - 5056: A cryptographic self-test was performed. - 5057: A cryptographic primitive operation failed. - 5060: Verification operation failed. - 5061: Cryptographic operation. - 5062: A kernel-mode cryptographic self-test was performed. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9212-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
For more information, see [Azure Policy guest configuration](../../machine-confi
|Restore files and directories<br /><sub>(CCE-37613-7)</sub> |**Description**: This policy setting determines which users can bypass file, directory, registry, and other persistent object permissions when restoring backed up files and directories on computers that run Windows Vista in your environment. This user right also determines which users can set valid security principals as object owners; it is similar to the Backup files and directories user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRestorePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators<br /><sub>(Policy)</sub> |Warning |
|Shut down the system<br /><sub>(CCE-38328-1)</sub> |**Description**: This policy setting determines which users who are logged on locally to the computers in your environment can shut down the operating system with the Shut Down command. Misuse of this user right can result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
|Take ownership of files or other objects<br /><sub>(CCE-38325-7)</sub> |**Description**: This policy setting allows users to take ownership of files, folders, registry keys, processes, or threads. This user right bypasses any permissions that are in place to protect objects to give ownership to the specified user. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeTakeOwnershipPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|The Debug programs user right must only be assigned to the Administrators group.<br /><sub>(AZ-WIN-73755)</sub> |<br />**Key Path**: [Privilege Rights]SeDebugPrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|The Impersonate a client after authentication user right must only be assigned to Administrators, Service, Local Service, and Network Service.<br /><sub>(AZ-WIN-73785)</sub> |<br />**Key Path**: [Privilege Rights]SeImpersonatePrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators,Service,Local Service,Network Service<br /><sub>(Policy)</sub> |Important |
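
The `[Privilege Rights]` rows above are user-rights assignments rather than registry values, so they cannot be read with `winreg`. As a simplified, hypothetical sketch (assuming an elevated prompt; the output path, UTF-16 encoding assumption, and parsing are mine, not part of the baseline), the following Python snippet exports the local user-rights assignments with the built-in `secedit.exe` and prints the SID list holding `SeDebugPrivilege`, which the "Debug programs" row expects to be only the Administrators group.

```python
# Illustrative sketch: export user-rights assignments and show who holds SeDebugPrivilege.
import os
import subprocess
import tempfile

def export_user_rights() -> str:
    out_path = os.path.join(tempfile.gettempdir(), "user_rights.inf")
    # USER_RIGHTS limits the export to the [Privilege Rights] section; requires elevation.
    subprocess.run(
        ["secedit", "/export", "/cfg", out_path, "/areas", "USER_RIGHTS"],
        check=True, capture_output=True,
    )
    # secedit typically writes the .inf file as UTF-16.
    with open(out_path, encoding="utf-16") as inf:
        return inf.read()

if __name__ == "__main__":
    for line in export_user_rights().splitlines():
        if line.startswith("SeDebugPrivilege"):
            # Expected: only the Administrators group SID (*S-1-5-32-544) is listed.
            print(line)
```
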
## Windows Components

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Allow Basic authentication<br /><sub>(CCE-36254-1)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service accepts Basic authentication from a remote client. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowBasic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Allow Cortana<br /><sub>(AZ-WIN-00131)</sub> |**Description**: This policy setting specifies whether Cortana is allowed on the device.   If you enable or don't configure this setting, Cortana will be allowed on the device. If you disable this setting, Cortana will be turned off.   When Cortana is off, users will still be able to use search to find things on the device and on the Internet.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortana<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Allow Cortana above lock screen<br /><sub>(AZ-WIN-00130)</sub> |**Description**: This policy setting determines whether or not the user can interact with Cortana using speech while the system is locked. If you enable or don't configure this setting, the user can interact with Cortana using speech while the system is locked. If you disable this setting, the system will need to be unlocked for the user to interact with Cortana using speech.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortanaAboveLock<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Allow indexing of encrypted files<br /><sub>(CCE-38277-0)</sub> |**Description**: This policy setting controls whether encrypted items are allowed to be indexed. When this setting is changed, the index is rebuilt completely. Full volume encryption (such as BitLocker Drive Encryption or a non-Microsoft solution) must be used for the location of the index to maintain security for encrypted files. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowIndexingEncryptedStoresOrItems<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning | |Allow Microsoft accounts to be optional<br /><sub>(CCE-38354-7)</sub> |**Description**: This policy setting lets you control whether Microsoft accounts are optional for Windows Store apps that require an account to sign in. This policy only affects Windows Store apps that support it. If you enable this policy setting, Windows Store apps that typically require a Microsoft account to sign in will allow users to sign in with an enterprise account instead. If you disable or do not configure this policy setting, users will need to sign in with a Microsoft account.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\MSAOptional<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Allow search and Cortana to use location<br /><sub>(AZ-WIN-00133)</sub> |**Description**: This policy setting specifies whether search and Cortana can provide location aware search and Cortana results.   If this is enabled, search and Cortana can access location information.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowSearchToUseLocation<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Allow Telemetry<br /><sub>(AZ-WIN-00169)</sub> |**Description**: This policy setting determines the amount of diagnostic and usage data reported to Microsoft. A value of 0 will send minimal data to Microsoft. This data includes Malicious Software Removal Tool (MSRT) & Windows Defender data, if enabled, and telemetry client settings. Setting a value of 0 applies to enterprise, EDU, IoT and server devices only. Setting a value of 0 for other devices is equivalent to choosing a value of 1. A value of 1 sends only a basic amount of diagnostic and usage data. Note that setting values of 0 or 1 will degrade certain experiences on the device. A value of 2 sends enhanced diagnostic and usage data. A value of 3 sends the same data as a value of 2, plus additional diagnostics data, including the files and content that may have caused the problem. Windows 10 telemetry settings apply to the Windows operating system and some first party apps. This setting does not apply to third party apps running on Windows 10. The recommended state for this setting is: `Enabled: 0 - Security [Enterprise Only]`. **Note:** If the "Allow Telemetry" setting is configured to "0 - Security [Enterprise Only]", then the options in Windows Update to defer upgrades and updates will have no effect.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\AllowTelemetry<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 0<br /><sub>(Registry)</sub> |Warning |
|Allow unencrypted traffic<br /><sub>(CCE-38223-4)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service sends and receives unencrypted messages over the network. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowUnencryptedTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Allow user control over installs<br /><sub>(CCE-36400-0)</sub> |**Description**: Permits users to change installation options that typically are available only to system administrators. The security features of Windows Installer prevent users from changing installation options typically reserved for system administrators, such as specifying the directory to which files are installed. If Windows Installer detects that an installation package has permitted the user to change a protected option, it stops the installation and displays a message. These security features operate only when the installation program is running in a privileged security context in which it has access to directories denied to the user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\EnableUserControl<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
For more information, see [Azure Policy guest configuration](../../machine-confi
|Always prompt for password upon connection<br /><sub>(CCE-37929-7)</sub> |**Description**: This policy setting specifies whether Terminal Services always prompts the client computer for a password upon connection. You can use this policy setting to enforce a password prompt for users who log on to Terminal Services, even if they already provided the password in the Remote Desktop Connection client. By default, Terminal Services allows users to automatically log on if they enter a password in the Remote Desktop Connection client. Note If you do not configure this policy setting, the local computer administrator can use the Terminal Services Configuration tool to either allow or prevent passwords from being automatically sent.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fPromptForPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Application: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37775-4)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Application: Specify the maximum log file size (KB)<br /><sub>(CCE-37948-7)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2147483647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
+|Block all consumer Microsoft account user authentication<br /><sub>(AZ-WIN-20198)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\MicrosoftAccount\DisableUserAuth<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Configure local setting override for reporting to Microsoft MAPS<br /><sub>(AZ-WIN-00173)</sub> |**Description**: This policy setting configures a local override for the configuration to join Microsoft MAPS. This setting can only be set by Group Policy. If you enable this setting the local preference setting will take priority over Group Policy. If you disable or do not configure this setting Group Policy will take priority over the local preference setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\LocalSettingOverrideSpynetReporting<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Configure Windows SmartScreen<br /><sub>(CCE-35859-8)</sub> |**Description**: This policy setting allows you to manage the behavior of Windows SmartScreen. Windows SmartScreen helps keep PCs safer by warning users before running unrecognized programs downloaded from the Internet. Some information is sent to Microsoft about files and programs run on PCs with this feature enabled. If you enable this policy setting, Windows SmartScreen behavior may be controlled by setting one of the following options: - Give user a warning before running downloaded unknown software - Turn off SmartScreen If you disable or do not configure this policy setting, Windows SmartScreen behavior is managed by administrators on the PC by using Windows SmartScreen Settings in Security and Maintenance. Options: - Give user a warning before running downloaded unknown software - Turn off SmartScreen<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableSmartScreen<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-2<br /><sub>(Registry)</sub> |Warning |
|Detect change from default RDP port<br /><sub>(AZ-WIN-00156)</sub> |**Description**: This setting determines whether the network port that listens for Remote Desktop Connections has been changed from the default 3389<br />**Key Path**: System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 3389<br /><sub>(Registry)</sub> |Critical |
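
Several Windows Components rows treat a missing value as compliant ("Doesn't exist or = 0"), which any check has to handle explicitly. As an illustrative sketch only (the helper name is hypothetical; the key path and value come from the "Allow Basic authentication" row earlier in this table), the Python snippet below returns compliant both when the value is 0 and when the policy key or value is absent.

```python
# Illustrative sketch: evaluate a "Doesn't exist or = 0" expectation from the table.
import winreg

def absent_or_zero(key_path: str, value_name: str) -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            value, _value_type = winreg.QueryValueEx(key, value_name)
            return value == 0
    except FileNotFoundError:
        # Neither the policy key nor the value exists, which this expectation accepts.
        return True

if __name__ == "__main__":
    print(absent_or_zero(r"SOFTWARE\Policies\Microsoft\Windows\WinRM\Client", "AllowBasic"))
```
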
For more information, see [Azure Policy guest configuration](../../machine-configuration/overview.md).
|Do not show feedback notifications<br /><sub>(AZ-WIN-00140)</sub> |**Description**: This policy setting allows an organization to prevent its devices from showing feedback questions from Microsoft. If you enable this policy setting, users will no longer see feedback notifications through the Windows Feedback app. If you disable or do not configure this policy setting, users may see notifications through the Windows Feedback app asking users for feedback. Note: If you disable or do not configure this policy setting, users can control how often they receive feedback questions.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\DoNotShowFeedbackNotifications<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Do not use temporary folders per session<br /><sub>(CCE-38180-6)</sub> |**Description**: By default, Remote Desktop Services creates a separate temporary folder on the RD Session Host server for each active session that a user maintains on the RD Session Host server. The temporary folder is created on the RD Session Host server in a Temp folder under the user's profile folder and is named with the "sessionid." This temporary folder is used to store individual temporary files. To reclaim disk space, the temporary folder is deleted when the user logs off from a session. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\PerSessionTempDir<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Enumerate administrator accounts on elevation<br /><sub>(CCE-36512-2)</sub> |**Description**: This policy setting controls whether administrator accounts are displayed when a user attempts to elevate a running application. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\CredUI\EnumerateAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|PowerShell script block logging must be enabled.<br /><sub>(AZ-WIN-73591)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging\EnableScriptBlockLogging<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Important |
|Prevent downloading of enclosures<br /><sub>(CCE-37126-0)</sub> |**Description**: This policy setting prevents the user from having enclosures (file attachments) downloaded from a feed to the user's computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Internet Explorer\Feeds\DisableEnclosureDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Require secure RPC communication<br /><sub>(CCE-37567-5)</sub> |**Description**: Specifies whether a Remote Desktop Session Host server requires secure RPC communication with all clients or allows unsecured communication. You can use this setting to strengthen the security of RPC communication with clients by allowing only authenticated and encrypted requests. If the status is set to Enabled, Remote Desktop Services accepts requests from RPC clients that support secure requests, and does not allow unsecured communication with untrusted clients. If the status is set to Disabled, Remote Desktop Services always requests security for all RPC traffic. However, unsecured communication is allowed for RPC clients that do not respond to the request. If the status is set to Not Configured, unsecured communication is allowed. Note: The RPC interface is used for administering and configuring Remote Desktop Services.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fEncryptRPCTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Require user authentication for remote connections by using Network Level Authentication<br /><sub>(AZ-WIN-00149)</sub> |**Description**: Require user authentication for remote connections by using Network Level Authentication<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\UserAuthentication<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
For more information, see [Azure Policy guest configuration](../../machine-configuration/overview.md).
|Specify the interval to check for definition updates<br /><sub>(AZ-WIN-00152)</sub> |**Description**: This policy setting allows you to specify an interval at which to check for definition updates. The time value is represented as the number of hours between update checks. Valid values range from 1 (every hour) to 24 (once per day). If you enable this setting, checking for definition updates will occur at the interval specified. If you disable or do not configure this setting, checking for definition updates will occur at the default interval.<br />**Key Path**: SOFTWARE\Microsoft\Microsoft Antimalware\Signature Updates\SignatureUpdateInterval<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 8<br /><sub>(Registry)</sub> |Critical |
|System: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-36160-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full" policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|System: Specify the maximum log file size (KB)<br /><sub>(CCE-36092-5)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
+|The Application Compatibility Program Inventory must be prevented from collecting data and sending the information to Microsoft.<br /><sub>(AZ-WIN-73543)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\AppCompat\DisableInventory<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
|Turn off Autoplay<br /><sub>(CCE-36875-3)</sub> |**Description**: Autoplay starts to read from a drive as soon as you insert media in the drive, which causes the setup file for programs or audio media to start immediately. An attacker could use this feature to launch a program to damage the computer or data on the computer. You can enable the Turn off Autoplay setting to disable the Autoplay feature. Autoplay is disabled by default on some removable drive types, such as floppy disk and network drives, but not on CD-ROM drives. Note You cannot use this policy setting to enable Autoplay on computer drives in which it is disabled by default, such as floppy disk and network drives.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoDriveTypeAutoRun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 255<br /><sub>(Registry)</sub> |Critical |
|Turn off Data Execution Prevention for Explorer<br /><sub>(CCE-37809-1)</sub> |**Description**: Disabling data execution prevention can allow certain legacy plug-in applications to function without terminating Explorer. The recommended state for this setting is: `Disabled`. **Note:** Some legacy plug-in applications and other software may not function with Data Execution Prevention and will require an exception to be defined for that specific plug-in/software.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoDataExecutionPrevention<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Turn off heap termination on corruption<br /><sub>(CCE-36660-9)</sub> |**Description**: Without heap termination on corruption, legacy plug-in applications may continue to function when a File Explorer session has become corrupt. Ensuring that heap termination on corruption is active will prevent this. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoHeapTerminationOnCorruption<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
For more information, see [Azure Policy guest configuration](../../machine-configuration/overview.md).
|Turn off shell protocol protected mode<br /><sub>(CCE-36809-2)</sub> |**Description**: This policy setting allows you to configure the amount of functionality that the shell protocol can have. When using the full functionality of this protocol applications can open folders and launch files. The protected mode reduces the functionality of this protocol allowing applications to only open a limited set of folders. Applications are not able to open files with this protocol when it is in the protected mode. It is recommended to leave this protocol in the protected mode to increase the security of Windows. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\PreXPSP2ShellProtocolBehavior<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Turn on behavior monitoring<br /><sub>(AZ-WIN-00178)</sub> |**Description**: This policy setting allows you to configure behavior monitoring. If you enable or do not configure this setting behavior monitoring will be enabled. If you disable this setting behavior monitoring will be disabled.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableBehaviorMonitoring<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
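Most of the checks in this baseline are plain registry comparisons, so you can spot-check a machine before a guest configuration assignment reports on it. The following PowerShell snippet is a minimal sketch, assuming the key path and expected value from the "Always prompt for password upon connection" row above (these policy keys live under HKEY_LOCAL_MACHINE); adjust the three variables for any other registry-based row.

```powershell
# Minimal sketch: audit one registry-based baseline setting locally.
# Key path, value name, and expected value come from the table row above.
$keyPath   = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
$valueName = 'fPromptForPassword'
$expected  = 1

# Read the current value; missing keys or values return $null.
$actual = (Get-ItemProperty -Path $keyPath -Name $valueName -ErrorAction SilentlyContinue).$valueName

if ($actual -eq $expected) {
    Write-Output "'$valueName' is compliant (value: $actual)."
} else {
    Write-Output "'$valueName' is NOT compliant (value: '$actual', expected: $expected)."
}
```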
-## Windows Firewall Properties
+## Windows Settings - Security Settings
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Windows Firewall: Domain: Allow unicast response<br /><sub>(AZ-WIN-00088)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.  </span></p><p><span>We recommend this setting to ‘Yes’ for Private and Domain profiles, this will set the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Domain: Firewall state<br /><sub>(CCE-36062-8)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Outbound connections<br /><sub>(CCE-36146-9)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. In Windows Vista, the default behavior is to allow connections unless there are firewall rules that block the connection.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Apply local connection security rules<br /><sub>(CCE-38040-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Apply local firewall rules<br /><sub>(CCE-37860-4)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, this will set the registry value to 1. </span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Display a notification<br /><sub>(CCE-38041-0)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful as the users is not logged in, popups are not necessary and can add confusion for the administrator.  </span></p><p><span>Configure this policy setting to ‘No’, this will set the registry value to 1.  Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Private: Allow unicast response<br /><sub>(AZ-WIN-00089)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.  </span></p><p><span>We recommend this setting to ‘Yes’ for Private and Domain profiles, this will set the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Private: Firewall state<br /><sub>(CCE-38239-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Outbound connections<br /><sub>(CCE-38332-3)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Apply local connection security rules<br /><sub>(CCE-36063-6)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Apply local firewall rules<br /><sub>(CCE-37438-9)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, this will set the registry value to 1. </span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Display a notification<br /><sub>(CCE-37621-0)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful as the users is not logged in, popups are not necessary and can add confusion for the administrator.  </span></p><p><span> Configure this policy setting to ‘No’, this will set the registry value to 1.  Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Public: Allow unicast response<br /><sub>(AZ-WIN-00090)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. This can be done by changing the state for this setting to ‘No’, this will set the registry value to 1.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Public: Firewall state<br /><sub>(CCE-37862-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Outbound connections<br /><sub>(CCE-37434-8)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Apply local connection security rules<br /><sub>(CCE-36268-1)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Apply local firewall rules<br /><sub>(CCE-37861-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, this will set the registry value to 1. </span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Display a notification<br /><sub>(CCE-38043-6)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful as the users is not logged in, popups are not necessary and can add confusion for the administrator.  </span></p><p><span>Configure this policy setting to ‘No’, this will set the registry value to 1.  Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Adjust memory quotas for a process<br /><sub>(CCE-10849-8)</sub> |<br />**Key Path**: [Privilege Rights]SeIncreaseQuotaPrivilege<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | Administrators, Local Service, Network Service<br /><sub>(Policy)</sub> |Warning |
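Unlike the registry rows, the "Adjust memory quotas for a process" check compares a user-rights assignment. One way to inspect it locally is sketched below; it uses the built-in `secedit` tool from PowerShell, and the export path is an arbitrary temporary file chosen for this example.

```powershell
# Sketch: export the local security policy and inspect a user-rights assignment.
# SeIncreaseQuotaPrivilege corresponds to "Adjust memory quotas for a process".
$export = Join-Path $env:TEMP 'secpol.cfg'
secedit /export /cfg $export /areas USER_RIGHTS | Out-Null

# Show which accounts (as names or SIDs) currently hold the right.
Select-String -Path $export -Pattern 'SeIncreaseQuotaPrivilege'

# Clean up the exported file.
Remove-Item $export
```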
> [!NOTE]
> Availability of specific Azure Policy guest configuration settings may vary in Azure Government
For more information, see [Azure Policy guest configuration](../../machine-configuration/overview.md).
Additional articles about Azure Policy and guest configuration:
-- [Azure Policy guest configuration](../../machine-configuration/overview.md).
+- [Azure Policy guest configuration](../concepts/guest-configuration.md).
- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
- Review other examples at [Azure Policy samples](./index.md).
- Review [Understanding policy effects](../concepts/effects.md).
hdinsight Apache Hadoop Dotnet Csharp Mapreduce Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-dotnet-csharp-mapreduce-streaming.md
description: Learn how to use C# to create MapReduce solutions with Apache Hadoo
Previously updated : 04/28/2020 Last updated : 08/23/2022 # Use C# with MapReduce streaming on Apache Hadoop in HDInsight
* [Use MapReduce in Apache Hadoop on HDInsight](hdinsight-use-mapreduce.md).
* [Use a C# user-defined function with Apache Hive and Apache Pig](apache-hadoop-hive-pig-udf-dotnet-csharp.md).
-* [Develop Java MapReduce programs](apache-hadoop-develop-deploy-java-mapreduce-linux.md)
+* [Develop Java MapReduce programs](apache-hadoop-develop-deploy-java-mapreduce-linux.md)
hdinsight Apache Interactive Query Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-interactive-query-get-started.md
description: An introduction to Interactive Query, also called Apache Hive LLAP,
Previously updated : 03/03/2020 Last updated : 08/23/2022 #Customer intent: As a developer new to Interactive Query in Azure HDInsight, I want to have a basic understanding of Interactive Query so I can decide if I want to use it rather than build my own cluster.
hdinsight Apache Kafka Azure Container Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-azure-container-services.md
description: Learn how to use Kafka on HDInsight from container images hosted in
Previously updated : 12/04/2019 Last updated : 08/23/2022 # Use Azure Kubernetes Service with Apache Kafka on HDInsight
hdinsight Apache Kafka Streams Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-streams-api.md
description: Tutorial - Learn how to use the Apache Kafka Streams API with Kafka
Previously updated : 04/01/2021 Last updated : 08/23/2022 #Customer intent: As a developer, I need to create an application that uses the Kafka streams API with Kafka on HDInsight
hdinsight Apache Azure Spark History Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-azure-spark-history-server.md
description: Use the extended features in the Apache Spark History Server to deb
Previously updated : 11/25/2019 Last updated : 08/23/2022 # Use the extended features of the Apache Spark History Server to debug and diagnose Spark applications
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
In this quickstart, you'll learn how to deploy the MedTech service in the Azure
> [!IMPORTANT]
>
-> You'll want to confirm that the **Microsoft.HealthcareApis** and **Microsoft.EventHub** resource providers have been registered with your Azure subscription for a successful deployment. To learn more about registering resource providers, see [Azure resource providers and types](/azure-resource-manager/management/resource-providers-and-types)
+> You'll want to confirm that the **Microsoft.HealthcareApis** and **Microsoft.EventHub** resource providers have been registered with your Azure subscription for a successful deployment. To learn more about registering resource providers, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types)
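One way to verify and register those providers before deploying is sketched below with the Az PowerShell module; it assumes the correct subscription is already selected in your session.

```powershell
# Sketch: check whether the required resource providers are registered,
# and register any that aren't.
foreach ($provider in 'Microsoft.HealthcareApis', 'Microsoft.EventHub') {
    $state = (Get-AzResourceProvider -ProviderNamespace $provider).RegistrationState | Select-Object -First 1
    if ($state -ne 'Registered') {
        Register-AzResourceProvider -ProviderNamespace $provider
    }
}
```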
## Deploy the MedTech service with a quickstart template
iot-hub Iot Hub Create Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md
You can change the settings of an existing IoT hub after it's created from the I
### Shared access policies
-You can also view or modify the list of shared access policies by clicking **Shared access policies** in the **Security settings** section. These policies define the permissions for devices and services to connect to IoT Hub.
+You can also view or modify the list of shared access policies by choosing **Shared access policies** in the **Security settings** section. These policies define the permissions for devices and services to connect to IoT Hub.
-Click **Add shared access policy** to open the **Add shared access policy** blade. You can enter the new policy name and the permissions that you want to associate with this policy, as shown in the following figure:
+Select **Add shared access policy** to open the **Add shared access policy** blade. You can enter the new policy name and the permissions that you want to associate with this policy, as shown in the following figure:
:::image type="content" source="./media/iot-hub-create-through-portal/iot-hub-add-shared-access-policy.png" alt-text="Screenshot showing adding a shared access policy." lightbox="./media/iot-hub-create-through-portal/iot-hub-add-shared-access-policy.png":::

* The **Registry Read** and **Registry Write** policies grant read and write access rights to the identity registry. These permissions are used by back-end cloud services to manage device identities. Choosing the write option automatically chooses the read option.
-* The **Service Connect** policy grants permission to access service endpoints. This permission is used by back-end cloud services to send and receive messages from devices as well as to update and read device twin and module twin data.
+* The **Service Connect** policy grants permission to access service endpoints. This permission is used by back-end cloud services to send and receive messages from devices. It's also used to update and read device twin and module twin data.
-* The **Device Connect** policy grants permissions for sending and receiving messages using the IoT Hub device-side endpoints. This permission is used by devices to send and receive messages from an IoT hub, update and read device twin and module twin data, and perform file uploads.
+* The **Device Connect** policy grants permissions for sending and receiving messages using the IoT Hub device-side endpoints. This permission is used by devices to send and receive messages from an IoT hub or update and read device twin and module twin data. It's also used for file uploads.
-Click **Add** to add this newly created policy to the existing list.
+Select **Add** to add this newly created policy to the existing list.
For more detailed information about the access granted by specific permissions, see [IoT Hub permissions](./iot-hub-dev-guide-sas.md#access-control-and-permissions).
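If you prefer scripting over the portal, a hedged sketch of the same operation with the Az PowerShell module follows; the hub and resource group names are placeholders, and the rights shown are one possible combination.

```powershell
# Sketch: create a shared access policy that grants service-connect and registry-read rights.
# "MyResourceGroup" and "MyIotHub" are placeholder names.
Add-AzIotHubKey `
    -ResourceGroupName 'MyResourceGroup' `
    -Name 'MyIotHub' `
    -KeyName 'backendServicePolicy' `
    -Rights 'ServiceConnect, RegistryRead'

# List the hub's policies to confirm the new one was added.
Get-AzIotHubKey -ResourceGroupName 'MyResourceGroup' -Name 'MyIotHub'
```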
For more detailed information about the access granted by specific permissions,
[!INCLUDE [iot-hub-include-create-device](../../includes/iot-hub-include-create-device.md)]
-## Message Routing for an IoT hub
+## Message routing for an IoT hub
-Click **Message Routing** under **Messaging** to see the Message Routing pane, where you define routes and custom endpoints for the hub. [Message routing](iot-hub-devguide-messages-d2c.md) enables you to manage how data is sent from your devices to your endpoints. The first step is to add a new route. Then you can add an existing endpoint to the route, or create a new one of the types supported, such as blob storage.
-
-![Message routing pane](./media/iot-hub-create-through-portal/iot-hub-message-routing.png)
+Select **Message Routing** under **Messaging** to see the Message Routing pane, where you define routes and custom endpoints for the hub. [Message routing](iot-hub-devguide-messages-d2c.md) enables you to manage how data is sent from your devices to your endpoints. The first step is to add a new route. Then you can add an existing endpoint to the route, or create a new one of the types supported, such as blob storage.
### Routes
-Routes is the first tab on the Message Routing pane. To add a new route, click +**Add**. You see the following screen.
+**Routes** is the first tab on the **Message Routing** pane. To add a new route, select **+ Add**.
+
+![Screenshot showing the 'Message Routing' pane with the '+ Add' button.](./media/iot-hub-create-through-portal/iot-hub-message-routing.png)
-![Screenshot showing adding a new route](./media/iot-hub-create-through-portal/iot-hub-add-route-storage-endpoint.png)
+You see the following screen.
+ Name your route. The route name must be unique within the list of routes for that hub.
-For **Endpoint**, you can select one from the dropdown list, or add a new one. In this example, a storage account and container are already available. To add them as an endpoint, click +**Add** next to the Endpoint dropdown and select **Blob Storage**. The following screen shows where the storage account and container are specified.
+For **Endpoint**, select one from the dropdown list or add a new one. In this example, a storage account and container are already available. To add them as an endpoint, choose **+ Add** next to the Endpoint dropdown and select **Blob Storage**.
+
+The following screen shows where the storage account and container are specified.
-![Screenshot showing adding a storage endpoint for the routing rule](./media/iot-hub-create-through-portal/iot-hub-routing-add-storage-endpoint.png)
+![Screenshot showing how to add a storage endpoint for the routing rule.](./media/iot-hub-create-through-portal/iot-hub-routing-add-storage-endpoint.png)
-Click **Pick a container** to select the storage account and container. When you have selected those fields, it returns to the Endpoint pane. Use the defaults for the rest of the fields and **Create** to create the endpoint for the storage account and add it to the routing rules.
+Add an endpoint name in **Endpoint name** if needed. Select **Pick a container** to choose the storage account and container. After you choose a container and select **Select**, the page returns to the **Add a storage endpoint** pane. Use the defaults for the rest of the fields and select **Create** to create the endpoint for the storage account and add it to the routing rules.
-For **Data source**, select Device Telemetry Messages.
+You return to the **Add a route** page. For **Data source**, select Device Telemetry Messages.
Next, add a routing query. In this example, the messages that have an application property called `level` with a value equal to `critical` are routed to the storage account.

![Screenshot showing saving a new routing rule](./media/iot-hub-create-through-portal/iot-hub-add-route.png)
-Click **Save** to save the routing rule. You return to the Message Routing pane, and your new routing rule is displayed.
+Select **Save** to save the routing rule. You return to the **Message routing** pane, and your new routing rule is displayed.
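The same endpoint and route can also be created from a script. The following is a minimal sketch with the Az PowerShell module; the resource group, hub, storage connection string, and container names are placeholders, and the query mirrors the `level = critical` example above.

```powershell
# Sketch: add a blob storage endpoint, then a route that sends "critical" telemetry to it.
# All names, IDs, and the connection string below are placeholders.
Add-AzIotHubRoutingEndpoint `
    -ResourceGroupName 'MyResourceGroup' `
    -Name 'MyIotHub' `
    -EndpointName 'criticalStorageEndpoint' `
    -EndpointType AzureStorageContainer `
    -EndpointResourceGroup 'MyResourceGroup' `
    -EndpointSubscriptionId '<subscription-id>' `
    -ConnectionString '<storage-account-connection-string>' `
    -ContainerName 'criticalmessages'

Add-AzIotHubRoute `
    -ResourceGroupName 'MyResourceGroup' `
    -Name 'MyIotHub' `
    -RouteName 'CriticalToStorage' `
    -Source DeviceMessages `
    -EndpointName 'criticalStorageEndpoint' `
    -Condition 'level="critical"' `
    -Enabled
```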
### Custom endpoints
-Click the **Custom endpoints** tab. You see any custom endpoints already created. From here, you can add new endpoints or delete existing endpoints.
+Select the **Custom endpoints** tab. You see any custom endpoints already created. From here, you can add new endpoints or delete existing endpoints.
> [!NOTE]
-> If you delete a route, it does not delete the endpoints assigned to that route. To delete an endpoint, click the Custom endpoints tab, select the endpoint you want to delete, and click Delete.
->
+> If you delete a route, it does not delete the endpoints assigned to that route. To delete an endpoint, select the Custom endpoints tab, select the endpoint you want to delete, and choose **Delete**.
You can read more about custom endpoints in [Reference - IoT hub endpoints](iot-hub-devguide-endpoints.md).
To see a full example of how to use custom endpoints with routing, see [Message
Here are two ways to find a specific IoT hub in your subscription:
-1. If you know the resource group to which the IoT hub belongs, click **Resource groups**, then select the resource group from the list. The resource group screen shows all of the resources in that group, including the IoT hubs. Click on the hub for which you're looking.
+1. If you know the resource group to which the IoT hub belongs, choose **Resource groups**, then select the resource group from the list. The resource group screen shows all of the resources in that group, including the IoT hubs. Select your hub.
-2. Click **All resources**. On the **All resources** pane, there is a dropdown list that defaults to `All types`. Click on the dropdown list, uncheck `Select all`. Find `IoT Hub` and check it. Click on the dropdown list box to close it, and the entries will be filtered, showing only your IoT hubs.
+2. Choose **All resources**. On the **All resources** pane, there's a dropdown list that defaults to `All types`. Select the dropdown list, uncheck `Select all`. Find `IoT Hub` and check it. Select the dropdown list box to close it, and the entries will be filtered, showing only your IoT hubs.
## Delete the IoT hub
-To delete an Iot hub, find the IoT hub you want to delete, then click the **Delete** button below the IoT hub name.
+To delete an IoT hub, find the IoT hub you want to delete, then choose **Delete**.
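As a hedged PowerShell alternative to the portal steps above (the hub and resource group names are placeholders):

```powershell
# Sketch: list the IoT hubs in the current subscription, then delete a specific one.
# "MyResourceGroup" and "MyIotHub" are placeholder names.
Get-AzIotHub | Format-Table Name, Location

Remove-AzIotHub -ResourceGroupName 'MyResourceGroup' -Name 'MyIotHub'
```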
## Next steps
lab-services Reliability In Azure Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/reliability-in-azure-lab-services.md
+
+ Title: Reliability in Azure Lab Services
+description: Learn about reliability in Azure Lab Services
++ Last updated : 08/18/2022++
+# What is reliability in Azure Lab Services?
+
+This article describes reliability support in Azure Lab Services, and covers regional resiliency with availability zones. For a more detailed overview of reliability in Azure, see [Azure resiliency](/azure/availability-zones/overview.md).
+
+## Availability zone support
+
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones allow the services to fail over to the other availability zones to provide continuity in service with minimal interruption. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview.md).
+
+Azure availability zones-enabled services are designed to provide the right level of resiliency and flexibility. They can be configured in two ways. They can be either zone redundant, with automatic replication across zones, or zonal, with instances pinned to a specific zone. You can also combine these approaches. For more information on zonal vs. zone-redundant architecture, see [Build solutions with availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
+
+Azure Lab Services provides availability zone redundancy automatically in all regions listed in this article. While the service infrastructure is zone redundant, customer labs and VMs are not zone redundant.
+
+Currently, the service is not zonal. That is, you can't configure a lab or the VMs in the lab to align to a specific zone. A lab and VMs may be distributed across zones in a region.
+
+### SLA improvements
+
+There are no increased SLAs available for availability in Azure Lab Services. For the monthly uptime SLAs for Azure Lab Services, see [SLA for Azure Lab Services](https://azure.microsoft.com/support/legal/sla/lab-services/v1_0/).
+
+The Azure Lab Services infrastructure uses Cosmos DB storage. The Cosmos DB storage region is the same as the region where the lab plan is located. All the regional Cosmos DB accounts are single region. In the zone-redundant regions listed in this article, the Cosmos DB accounts are single region with Availability Zones. In the other regions, the accounts are single region without Availability Zones. For high availability capabilities for these account types, see [SLAs for Cosmos DB](/azure/cosmos-db/high-availability#slas).
+
+### Zone down experience
+
+#### Azure Lab Services infrastructure
+
+Azure Lab Services infrastructure is zone-redundant in the following regions:
+
+- Australia East
+- Canada Central
+- France Central
+- Korea Central
+- East Asia
+
+Resources other than labs and lab virtual machines are zone redundant in these regions.
+
+In the event of a zone outage in these regions, you can still perform the following tasks:
+
+- Access the Azure Lab Services website
+- Create/manage lab plans
+- Create Users
+- Configure lab schedules
+- Create/manage labs and VMs in regions unaffected by the zone outage.
+
+Data loss may occur only with an unrecoverable disaster in the Cosmos DB region. For more information, see [Region Outages](/azure/cosmos-db/high-availability#region-outages).
+
+For regions not listed, access to the Azure Lab Services infrastructure is not guaranteed when there is a zone outage in the region containing the lab plan. You will only be able to perform the following tasks:
+
+- Access the Azure Lab Services website
+- Create/manage lab plans, labs, and VMs in regions unaffected by the zone outage
+
+> [!NOTE]
+> Existing labs and VMs in regions unaffected by the zone outage aren't affected by a loss of infrastructure in the lab plan region. Existing labs and VMs in unaffected regions can still run and operate as normal.
+
+#### Labs and VMs
+
+Azure Lab Services is not currently zone aligned. So, VMs in a region may be distributed across zones in the region. Therefore, when a zone in a region experiences an outage, there are no guarantees that a lab or any VMs in the associated region will be available.
+
+As a result, the following operations are not guaranteed in the event of a zone outage:
+
+- Manage or access labs/VMs
+- Start/stop/reset VMs
+- Create/publish/delete labs
+- Scale up/down labs
+- Connect to VMs
+
+If there's a zone outage in the region, there's no expectation that you can use any labs or VMs in the region.
+Labs and VMs in other regions will be unaffected by the outage.
+
+#### Zone outage preparation and recovery
+
+Lab and VM services will be restored as soon as the zone availability is restored (the outage is resolved).
+
+If infrastructure is impacted, it will be restored when the zone availability is resolved.
+
+### Region down experience
+
+#### Azure Lab Services infrastructure
+
+In a regional outage, in most scenarios you will only be able to perform the following tasks related to Azure Lab Services infrastructure:
+
+- Access the Azure Lab Services website
+- Create/manage lab plans, labs, and VMs in regions unaffected by the outage
+
+Typically, labs are in the same region as the lab plan. However, if the outage is in the lab plan region and an existing lab is in an unaffected region, you can still perform the following tasks for the existing lab in the unaffected region:
+
+- Create Users
+- Configure lab schedules
+
+#### Labs and VMs
+
+In a regional outage, labs and VMs in the region are unavailable, so you will not be able to use or manage them.
+
+Existing labs and VMs in regions unaffected by the outage aren't affected by a loss of infrastructure in the lab plan region. Existing labs and VMs in unaffected regions can still run and operate as normal.
+
+#### Regional outage preparation and recovery
+
+Lab and VM services will be restored as soon as the regional outage is resolved.
+
+If infrastructure is impacted, it will be restored when the regional outage is resolved.
+
+### Fault tolerance
+
+If you want to preserve maximum access to Azure Lab Services infrastructure during a zone outage, create the lab plan in one of the zone-redundant regions listed.
+
+- Australia East
+- Canada Central
+- France Central
+- Korea Central
+- East Asia
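If you script lab plan creation, one way to place the plan in one of these regions is sketched below. This is a hedged example that assumes the `Az.LabServices` PowerShell module is installed; the resource group and plan names are placeholders, and the `-AllowedRegion` parameter is an assumption about the module's current signature for restricting where labs can be created.

```powershell
# Sketch: create a lab plan in a zone-redundant region (Australia East here).
# "MyResourceGroup" and "MyLabPlan" are placeholders.
New-AzLabServicesLabPlan `
    -ResourceGroupName 'MyResourceGroup' `
    -Name 'MyLabPlan' `
    -Location 'australiaeast' `
    -AllowedRegion @('australiaeast')
```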
+
+## Disaster recovery
+
+Azure Lab Services does not provide regional failover support. If you want to preserve maximum access to the Azure Lab Services infrastructure during a zone outage, create the lab plan in one of the [zone-redundant regions](#fault-tolerance).
+
+### Outage detection, notification, and management
+
+Azure Lab Services does not provide any service-specific signals about an outage, but is dependent on Azure communications that inform customers about outages. For more information on service health, see [Resource health overview](/azure/service-health/resource-health-overview).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](/azure/availability-zones/overview.md)
logic-apps Logic Apps Custom Api Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-custom-api-authentication.md
Title: Add authentication for securing calls to custom APIs
-description: Set up authentication to improve security for calls to custom APIs from Azure Logic Apps.
+ Title: Add authentication for calls to custom APIs
+description: Set up authentication for calls to custom APIs from Azure Logic Apps.
ms.suite: integration Previously updated : 09/22/2017 Last updated : 08/22/2022
-# Increase security for calls to custom APIs from Azure Logic Apps
+# Add authentication when calling custom APIs from Azure Logic Apps
-To improve security for calls to your APIs, you can set up Azure Active Directory (Azure AD)
-authentication through the Azure portal so you don't have to update your code.
-Or, you can require and enforce authentication through your API's code.
+To improve security for calls to your APIs, you can set up Azure Active Directory (Azure AD) authentication through the Azure portal so you don't have to update your code. Or, you can require and enforce authentication through your API's code.
-## Authentication options for your API
+You can add authentication in the following ways:
-You can improve security for calls to your custom API in these ways:
-
-* [No code changes](#no-code): Protect your API with
-[Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md)
-through the Azure portal, so you don't have to update your code or redeploy your API.
+* [No code changes](#no-code): Protect your API with [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) through the Azure portal, so you don't have to update your code or redeploy your API.
> [!NOTE]
- > By default, the Azure AD authentication that you turn on
- > in the Azure portal doesn't provide fine-grained authorization.
- > For example, this authentication locks your API to just a specific tenant,
- > not to a specific user or app.
+ >
+ > By default, the Azure AD authentication that you select in the Azure portal doesn't
+ > provide fine-grained authorization. For example, this authentication locks your API
+ > to just a specific tenant, not to a specific user or app.
-* [Update your API's code](#update-code): Protect your API by enforcing
-[certificate authentication](#certificate), [basic authentication](#basic),
-or [Azure AD authentication](#azure-ad-code) through code.
+* [Update your API's code](#update-code): Protect your API by enforcing [certificate authentication](#certificate), [basic authentication](#basic), or [Azure AD authentication](#azure-ad-code) through code.
<a name="no-code"></a>
-### Authenticate calls to your API without changing code
+## Authenticate calls to your API without changing code
Here are the general steps for this method:
-1. Create two Azure Active Directory (Azure AD) application identities:
-one for your logic app and one for your web app (or API app).
+1. Create two Azure Active Directory (Azure AD) application identities: one for your logic app resource and one for your web app (or API app).
-2. To authenticate calls to your API, use the credentials (client ID and secret) for the
-service principal that's associated with the Azure AD application identity for your logic app.
+1. To authenticate calls to your API, use the credentials (client ID and secret) for the service principal that's associated with the Azure AD application identity for your logic app.
-3. Include the application IDs in your logic app definition.
+1. Include the application IDs in your logic app's workflow definition.
-#### Part 1: Create an Azure AD application identity for your logic app
+### Part 1: Create an Azure AD application identity for your logic app
-Your logic app uses this Azure AD application identity to authenticate against Azure AD.
-You only have to set up this identity one time for your directory.
-For example, you can choose to use the same identity for all your logic apps,
-even though you can create unique identities for each logic app.
-You can set up these identities in the Azure portal or use [PowerShell](#powershell).
+Your logic app resource uses this Azure AD application identity to authenticate against Azure AD. You only have to set up this identity one time for your directory. For example, you can choose to use the same identity for all your logic apps, even though you can create unique identities for each logic app. You can set up these identities in the Azure portal or use [PowerShell](#powershell).
-**Create the application identity for your logic app in the Azure portal**
+#### [Portal](#tab/azure-portal)
-1. In the [Azure portal](https://portal.azure.com "https://portal.azure.com"),
-choose **Azure Active Directory**.
+1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**.
-2. Confirm that you're in the same directory as your web app or API app.
+1. Confirm that you're in the same directory as your web app or API app.
> [!TIP]
+ >
> To switch directories, choose your profile and select another directory.
- > Or, choose **Overview** > **Switch directory**.
+ > Or, select **Overview** > **Switch directory**.
-3. On the directory menu, under **Manage**,
-choose **App registrations** > **New application registration**.
+1. On the directory menu, under **Manage**, select **App registrations** > **New registration**.
- > [!TIP]
- > By default, the app registrations list shows all
- > app registrations in your directory.
- > To view only your app registrations, next to the search box,
- > select **My apps**.
+ The **All registrations** list shows all the app registrations in your directory. To view only your app registrations, select **Owned applications**.
+
+ ![Screenshot showing Azure portal with Azure Active Directory instance, "App registration" pane, and "New application registration" selected.](./media/logic-apps-custom-api-authentication/new-app-registration-azure-portal.png)
+
+1. Provide a user-facing name for your logic app's application identity. Select the supported account types. For **Redirect URI**, select **Web**, provide a unique URL where the authentication response is returned, and select **Register**.
- ![Create new app registration](./media/logic-apps-custom-api-authentication/new-app-registration-azure-portal.png)
+ ![Screenshot showing "Register an application" pane with application identity name and URL where to send authentication response.](./media/logic-apps-custom-api-authentication/logic-app-identity-azure-portal.png)
-4. Give your application identity a name,
-leave **Application type** set to **Web app / API**,
-provide a unique string formatted as a domain
-for **Sign-on URL**, and choose **Create**.
+ The **Owned applications** list now includes your created application identity. If this identity doesn't appear, on the toolbar, select **Refresh**.
- ![Provide name and sign-on URL for application identity](./media/logic-apps-custom-api-authentication/logic-app-identity-azure-portal.png)
+ ![Screenshot showing the application identity for your logic app.](./media/logic-apps-custom-api-authentication/logic-app-identity-created.png)
- The application identity that you created for your
- logic app now appears in the app registrations list.
+1. From the app registrations list, select your new application identity.
- ![Application identity for your logic app](./media/logic-apps-custom-api-authentication/logic-app-identity-created.png)
+1. From the application identity navigation menu, select **Overview**.
-5. In the app registrations list, select your new application identity.
-Copy and save the **Application ID** to use as the "client ID"
-for your logic app in Part 3.
+1. On the **Overview** pane, under **Essentials**, copy and save the **Application ID** to use as the "client ID" for your logic app in Part 3.
- ![Copy and save application ID for logic app](./media/logic-apps-custom-api-authentication/logic-app-application-id.png)
+ ![Screenshot showing the application (client) ID underlined.](./media/logic-apps-custom-api-authentication/logic-app-application-id.png)
-6. If your application identity settings aren't visible,
-choose **Settings** or **All settings**.
+1. From the application identity navigation menu, select **Certificates & secrets**.
-7. Under **API Access**, choose **Keys**. Under **Description**,
-provide a name for your key. Under **Expires**, select a duration for your key.
+1. On the **Client secrets** tab, select **New client secret**.
- The key that you're creating acts as the application identity's
- "secret" or password for your logic app.
+1. For **Description**, provide a name for your secret. Under **Expires**, select a duration for your secret. When you're done, select **Add**.
- ![Create key for logic app identity](./media/logic-apps-custom-api-authentication/create-logic-app-identity-key-secret-password.png)
+ The secret that you create acts as the application identity's "secret" or password for your logic app.
-8. On the toolbar, choose **Save**. Under **Value**, your key now appears.
-**Make sure to copy and save your key** for later use because the key is hidden
-when you leave the **Keys** page.
+ ![Screenshot showing secret creation for application identity.](./media/logic-apps-custom-api-authentication/create-logic-app-identity-key-secret-password.png)
- When you configure your logic app in Part 3,
- you specify this key as the "secret" or password.
+ On the **Certificates & secrets** pane, under **Client secrets**, your secret now appears along with a secret value and secret ID.
- ![Copy and save key for later](./media/logic-apps-custom-api-authentication/logic-app-copy-key-secret-password.png)
+ ![Screenshot showing secret value and secret ID with copy button for secret value selected.](./media/logic-apps-custom-api-authentication/logic-app-copy-key-secret-password.png)
+
+1. Copy the secret value for later use. When you configure your logic app in Part 3, you specify this value as the "secret" or password.
<a name="powershell"></a>
-**Create the application identity for your logic app in PowerShell**
+#### [PowerShell](#tab/azure-powershell)
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-You can perform this task through Azure Resource Manager with PowerShell.
-In PowerShell, run these commands:
+You can perform this task through Azure Resource Manager with PowerShell. In PowerShell, run the following commands:
1. `Add-AzAccount`
1. `New-AzADApplication -DisplayName "MyLogicAppID" -HomePage "http://mydomain.tld" -IdentifierUris "http://mydomain.tld" -Password $SecurePassword`
-1. Make sure to copy the **Tenant ID** (GUID for your Azure AD tenant),
-the **Application ID**, and the password that you used.
+1. Make sure to copy the **Tenant ID** (GUID for your Azure AD tenant), the **Application ID**, and the password that you used.
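The `New-AzADApplication` command in the previous list expects a secure-string password in `$SecurePassword`. As a minimal sketch, you might build that value and run the commands end to end as follows; the plain-text password is only a placeholder:

```powershell
# Sign in to your Azure account.
Add-AzAccount

# Build the secure-string password that New-AzADApplication expects.
# Replace the placeholder value with your own secret.
$SecurePassword = ConvertTo-SecureString "P@ssw0rd-placeholder" -AsPlainText -Force

# Create the application identity for your logic app.
New-AzADApplication -DisplayName "MyLogicAppID" -HomePage "http://mydomain.tld" `
    -IdentifierUris "http://mydomain.tld" -Password $SecurePassword
```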
+
+For more information, learn how to [create a service principal with PowerShell to access resources](../active-directory/develop/howto-authenticate-service-principal-powershell.md).
-For more information, learn how to
-[create a service principal with PowerShell to access resources](../active-directory/develop/howto-authenticate-service-principal-powershell.md).
+
-#### Part 2: Create an Azure AD application identity for your web app or API app
+### Part 2: Create an Azure AD application identity for your web app or API app
-If your web app or API app is already deployed, you can turn on authentication
-and create the application identity in the Azure portal. Otherwise, you can
-[turn on authentication when you deploy with an Azure Resource Manager template](#authen-deploy).
+If your web app or API app is already deployed, you can turn on authentication and create the application identity in the Azure portal. Otherwise, you can [turn on authentication when you deploy with an Azure Resource Manager template](#authen-deploy).
-**Create the application identity and turn on authentication in the Azure portal for deployed apps**
+**Create the application identity for a deployed web app or API app in the Azure portal**
-1. In the [Azure portal](https://portal.azure.com "https://portal.azure.com"),
-find and select your web app or API app.
+1. In the [Azure portal](https://portal.azure.com), find and select your web app or API app.
-2. Under **Settings**, choose **Authentication/Authorization**.
-Under **App Service Authentication**, turn authentication **On**.
-Under **Authentication Providers**, choose **Azure Active Directory**.
+1. Under **Settings**, select **Authentication** > **Add identity provider**.
- ![Turn on authentication](./media/logic-apps-custom-api-authentication/custom-web-api-app-authentication.png)
+1. After the **Add an identity provider** pane opens, on the **Basics** tab, from the **Identity provider** list, select **Microsoft** to use Azure Active Directory (Azure AD) identities, and then select **Add**.
-3. Now create an application identity for your web app or API app as shown here.
-On the **Azure Active Directory Settings** page,
-set **Management mode** to **Express**. Choose **Create New AD App**.
-Give your application identity a name, and choose **OK**.
+1. Now, create an application identity for your web app or API app as follows:
- ![Create application identity for your web app or API app](./media/logic-apps-custom-api-authentication/custom-api-application-identity.png)
+ 1. For **App registration type**, select **Create new app registration**.
-4. On the **Authentication / Authorization** page, choose **Save**.
+ 1. For **Name**, provide a name for your application identity.
-Now you must find the client ID and tenant ID for the application identity
-that's associated with your web app or API app. You use these IDs in Part 3.
-So continue with these steps for the Azure portal.
+ 1. For **Supported account types**, select the account types appropriate for your scenario.
-**Find application identity's client ID and tenant ID for your web app or API app in the Azure portal**
+ 1. For **Restrict access**, select **Require authentication**.
-1. Under **Authentication Providers**, choose **Azure Active Directory**.
+ 1. For **Unauthenticated requests**, select the option based on your scenario.
- ![Choose "Azure Active Directory"](./media/logic-apps-custom-api-authentication/custom-api-app-identity-client-id-tenant-id.png)
+ 1. When you're done, select **Add**.
-2. On the **Azure Active Directory Settings** page, set **Management mode** to **Advanced**.
+ The application identity that you just created for your web app or API app now appears in the **Identity provider** section:
-3. Copy the **Client ID**, and save that GUID for use in Part 3.
+ ![Screenshot showing newly created application identity for web app or API app.](./media/logic-apps-custom-api-authentication/application-identity-for-web-app.png)
> [!TIP]
- > If **Client ID** and **Issuer Url** don't appear,
- > try refreshing the Azure portal, and repeat Step 1.
+ >
+ > If the application identity doesn't appear, on the toolbar, select **Refresh**.
-4. Under **Issuer Url**, copy and save just the GUID for Part 3.
-You can also use this GUID in your web app or API app's deployment template, if necessary.
+Now you must find the application (client) ID and tenant ID for the application identity that you just created for your web app or API app. You use these IDs in Part 3. So, continue with the following steps for the Azure portal.
- This GUID is your specific tenant's GUID ("tenant ID") and
- should appear in this URL: `https://sts.windows.net/{GUID}`
+**Find application identity's client ID and tenant ID for your web app or API app in the Azure portal**
-5. Without saving your changes, close the **Azure Active Directory Settings** page.
+1. On your web app's navigation menu, select **Authentication**.
-<a name="authen-deploy"></a>
+1. In the **Identity provider** section, find the application identity you previously created. Select the name for your application identity.
+
+ ![Screenshot showing newly created application identity with 'Overview' pane open.](./media/logic-apps-custom-api-authentication/custom-api-app-select-app-identity.png)
+
+1. After the application identity's **Overview** pane opens, find the values for **Application (client) ID** and **Directory (tenant) ID**. Copy and save the values for use in Part 3.
-**Turn on authentication when you deploy with an Azure Resource Manager template**
+ ![Screenshot showing application identity's 'Overview' pane open with 'Application (client) ID' value and 'Directory (tenant) ID' value underlined.](./media/logic-apps-custom-api-authentication/app-identity-application-client-ID-directory-tenant-ID.png)
-You still need to create an Azure AD application identity for your web app or API app
-that differs from the app identity for your logic app. To create the application identity,
-follow the previous steps in Part 2 for the Azure portal.
+ You can also use the tenant ID GUID in your web app or API app's deployment template, if necessary. This GUID is your specific tenant's GUID ("tenant ID") and should appear in this URL: `https://sts.windows.net/{GUID}`
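If you prefer Azure PowerShell over the portal for this lookup, a sketch like the following returns the same two values. The display name is a placeholder for the name you gave your application identity, and the `AppId` property assumes a recent Az.Resources module:

```powershell
# Look up the app registration by the display name you chose earlier.
$app = Get-AzADApplication -DisplayName "MyWebAppIdentity"
$clientId = $app.AppId

# The tenant ID comes from your current Azure context.
$tenantId = (Get-AzContext).Tenant.Id

"Client ID: $clientId"
"Tenant ID: $tenantId"
```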
-You can also follow the steps in Part 1, but make sure to use your web app
-or API app's actual `https://{URL}` for **Sign-on URL** and **App ID URI**.
-From these steps, you have to save both the client ID and tenant ID for use
-in your app's deployment template and also for Part 3.
+<a name="authen-deploy"></a>
+
+**Set up authentication when you deploy with an Azure Resource Manager template**
-> [!NOTE]
-> When you create the Azure AD application identity for your web app or API app,
-> you must use the Azure portal, not PowerShell.
-> The PowerShell commandlet doesn't set up the required permissions to sign users into a website.
+If you're using an Azure Resource Manager template (ARM template), you still have to create an Azure AD application identity for your web app or API app that differs from the app identity for your logic app. To create the application identity, and then find the client ID and tenant ID, follow the previous steps in Part 2 for the Azure portal. You'll use both the client ID and tenant ID in your app's deployment template and also for Part 3.
-After you get the client ID and tenant ID, include these IDs
-as a subresource of your web app or API app in your deployment template:
+> [!IMPORTANT]
+>
+> When you create the Azure AD application identity for your web app or API app, you must use the Azure portal, not PowerShell. The PowerShell commandlet doesn't set up the required permissions to sign users into a website.
+
+After you get the client ID and tenant ID, include these IDs as a subresource of your web app or API app in your deployment template:
```json
"resources": [
]
```
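Rather than hard-coding the IDs, you can define template parameters for them and pass the values at deployment time. The following Azure PowerShell sketch assumes hypothetical parameter names (`webAppClientId` and `tenantId`) that your template would have to declare:

```powershell
# Deploy the template and pass the client ID and tenant ID as parameters.
# The parameter names are examples; they must match the parameters in your template.
New-AzResourceGroupDeployment `
    -ResourceGroupName "my-resource-group" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterObject @{
        webAppClientId = "<client-ID-from-Part-2>"
        tenantId       = "<tenant-ID>"
    }
```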
-To automatically deploy a blank web app and a logic app together with
-Azure Active Directory authentication, [view the complete template here](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/logic-app-custom-api),
-or click **Deploy to Azure** here:
+To automatically deploy a blank web app and a logic app together with Azure Active Directory authentication, [view the complete template here](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/logic-app-custom-api), or select the following **Deploy to Azure** button:
[![Deploy to Azure](media/logic-apps-custom-api-authentication/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.logic%2Flogic-app-custom-api%2Fazuredeploy.json)
-#### Part 3: Populate the Authorization section in your logic app
+### Part 3: Populate the Authorization section in your logic app
+
+The previous template already has this authorization section set up, but if you are directly authoring your logic app definition, you must include the full authorization section.
-The previous template already has this authorization section set up,
-but if you are directly authoring the logic app, you must include the full authorization section.
+1. Open your logic app definition in code view.
-Open your logic app definition in code view, go to the **HTTP** action definition,
-find the **Authorization** section, and include these properties:
+1. Go to the **HTTP** action definition, find the **Authorization** section, and include the following properties:
```json
{
    "tenant": "<tenant-ID>",
    "audience": "<client-ID-from-Part-2-web-app-or-API app>",
    "clientId": "<client-ID-from-Part-1-logic-app>",
-    "secret": "<key-from-Part-1-logic-app>",
+    "secret": "<secret-from-Part-1-logic-app>",
    "type": "ActiveDirectoryOAuth"
}
```

| Property | Required | Description |
| -- | -- | -- |
-| tenant | Yes | The GUID for the Azure AD tenant |
-| audience | Yes | The GUID for the target resource that you want to access, which is the client ID from the application identity for your web app or API app |
-| clientId | Yes | The GUID for the client requesting access, which is the client ID from the application identity for your logic app |
-| secret | Yes | The key or password from the application identity for the client that's requesting the access token |
-| type | Yes | The authentication type. For ActiveDirectoryOAuth authentication, the value is `ActiveDirectoryOAuth`. |
-||||
+| `tenant` | Yes | The GUID for the Azure AD tenant |
+| `audience` | Yes | The GUID for the target resource that you want to access, which is the client ID from the application identity for your web app or API app |
+| `clientId` | Yes | The GUID for the client requesting access, which is the client ID from the application identity for your logic app |
+| `secret` | Yes | The secret or password from the application identity for the client that's requesting the access token |
+| `type` | Yes | The authentication type. For ActiveDirectoryOAuth authentication, the value is `ActiveDirectoryOAuth`. |
For example:
<a name="update-code"></a>
-### Secure API calls through code
+## Secure API calls through code
<a name="certificate"></a>
-#### Certificate authentication
+### Certificate authentication
-To validate the incoming requests from your logic app to your web app or API app, you can use client certificates. To set up your code, learn [how to configure TLS mutual authentication](../app-service/app-service-web-configure-tls-mutual-auth.md).
+To validate the incoming requests from your logic app workflow to your web app or API app, you can use client certificates. To set up your code, learn [how to configure TLS mutual authentication](../app-service/app-service-web-configure-tls-mutual-auth.md).
-In the **Authorization** section, include these properties:
+In the **Authorization** section, include the following properties:
```json
{
    "type": "ClientCertificate",
    "password": "<password>",
    "pfx": "<base64-encoded-pfx-file-contents>"
}
```

| Property | Required | Description |
| -- | -- | -- |
| `type` | Yes | The authentication type. For TLS/SSL client certificates, the value must be `ClientCertificate`. |
| `password` | No | The password for accessing the client certificate (PFX file) |
| `pfx` | Yes | The base64-encoded contents of the client certificate (PFX file) |
-||||
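The `pfx` value must contain the base64-encoded bytes of your certificate file. As a sketch, you might produce that string with PowerShell as follows; the file path is a placeholder:

```powershell
# Read the client certificate (PFX) file and convert its bytes to a base64 string.
$pfxBytes = [System.IO.File]::ReadAllBytes("C:\certs\my-client-cert.pfx")
$pfxBase64 = [System.Convert]::ToBase64String($pfxBytes)

# Copy this value into the "pfx" property of the Authorization section.
$pfxBase64
```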
<a name="basic"></a>
-#### Basic authentication
+### Basic authentication
-To validate incoming requests from your logic app to your web app or API app,
-you can use basic authentication, such as a username and password.
-Basic authentication is a common pattern, and you can use this
-authentication in any language used to build your web app or API app.
+To validate incoming requests from your logic app to your web app or API app, you can use basic authentication, such as a username and password. Basic authentication is a common pattern, and you can use this authentication in any language used to build your web app or API app.
-In the **Authorization** section, include these properties:
+In the **Authorization** section, include the following properties:
```json
{
    "type": "Basic",
    "username": "<username>",
    "password": "<password>"
}
```

| Property | Required | Description |
| -- | -- | -- |
-| type | Yes | The authentication type that you want to use. For basic authentication, the value must be `Basic`. |
-| username | Yes | The username that you want to use for authentication |
-| password | Yes | The password that you want to use for authentication |
-||||
+| `type` | Yes | The authentication type that you want to use. For basic authentication, the value must be `Basic`. |
+| `username` | Yes | The username that you want to use for authentication |
+| `password` | Yes | The password that you want to use for authentication |
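On the receiving side, your web app or API app checks these credentials and can be written in any language. Purely as an illustration, a PowerShell-style sketch of that check might look like the following, where `$authorizationHeader`, `$expectedUser`, and `$expectedPassword` are assumed to be supplied by your app:

```powershell
# Decode the incoming "Authorization: Basic <base64(username:password)>" header value.
$encoded = $authorizationHeader -replace '^Basic\s+', ''
$pair = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encoded))

# Split into username and password (split only on the first colon).
$userName, $password = $pair -split ':', 2

# Reject requests whose credentials don't match the expected values.
if (($userName -ne $expectedUser) -or ($password -ne $expectedPassword)) {
    throw "Caller isn't authorized to use this API."
}
```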
<a name="azure-ad-code"></a>
-#### Azure Active Directory authentication through code
+### Azure Active Directory authentication through code
-By default, the Azure AD authentication that you turn on
-in the Azure portal doesn't provide fine-grained authorization.
-For example, this authentication locks your API to just a specific tenant,
-not to a specific user or app.
+By default, the Azure AD authentication that you turn on in the Azure portal doesn't provide fine-grained authorization. For example, this authentication locks your API to just a specific tenant, not to a specific user or app.
-To restrict API access to your logic app through code,
-extract the header that has the JSON web token (JWT).
-Check the caller's identity, and reject requests that don't match.
+To restrict API access to your logic app through code, extract the header that has the JSON web token (JWT). Check the caller's identity, and reject requests that don't match.
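As an illustration only, and not production-ready code, the idea is to decode the bearer token's payload and compare its claims, such as the caller's application ID and tenant, against the values that you expect. The sketch below shows that comparison in PowerShell; `$authorizationHeader`, `$expectedClientId`, and `$expectedTenantId` are assumed inputs, the claim names (`appid`, `tid`) assume an Azure AD v1.0 access token, and real code must also validate the token's signature with a JWT library:

```powershell
# Strip the "Bearer " prefix from the incoming Authorization header value.
$token = $authorizationHeader -replace '^Bearer\s+', ''

# Decode the JWT payload (the middle, base64url-encoded segment).
$payload = ($token -split '\.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) {
    2 { $payload += '==' }
    3 { $payload += '=' }
}
$claims = [System.Text.Encoding]::UTF8.GetString(
    [System.Convert]::FromBase64String($payload)) | ConvertFrom-Json

# Reject callers whose application ID or tenant doesn't match the expected values.
if (($claims.appid -ne $expectedClientId) -or ($claims.tid -ne $expectedTenantId)) {
    throw "Caller isn't authorized to use this API."
}
```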
-<!-- Going further, to implement this authentication entirely in your own code,
-and not use the Azure portal, learn how to
-[authenticate with on-premises Active Directory in your Azure app](../app-service/overview-authentication-authorization.md).
+<!-- Going further, to implement this authentication entirely in your own code, and not use the Azure portal, learn how to [authenticate with on-premises Active Directory in your Azure app](../app-service/overview-authentication-authorization.md).
To create an application identity for your logic app and use that identity to call your API, you must follow the previous steps. -->
logic-apps Logic Apps Custom Api Host Deploy Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-custom-api-host-deploy-call.md
ms.suite: integration Previously updated : 05/13/2020 Last updated : 08/13/2020 # Deploy and call custom APIs from workflows in Azure Logic Apps
-After you [create your own APIs](./logic-apps-create-api-app.md) to use in your logic app workflows,
-you need to deploy those APIs before you can call them.
-You can deploy your APIs as [web apps](../app-service/overview.md),
-but consider deploying your APIs as [API apps](../app-service/app-service-web-tutorial-rest-api.md),
-which make your job easier when you build, host, and consume APIs
-in the cloud and on premises. You don't have to change any code in your
-APIs - just deploy your code to an API app. You can host your APIs on
-[Azure App Service](../app-service/overview.md),
-a platform-as-a-service (PaaS) offering that provides highly scalable,
-easy API hosting.
-
-Although you can call any API from a logic app,
-for the best experience, add [Swagger metadata](https://swagger.io/specification/)
-that describes your API's operations and parameters.
-This Swagger document helps your API integrate more easily
-and work better with logic apps.
+After you [create your own APIs](./logic-apps-create-api-app.md) to use in your logic app workflows, you need to deploy those APIs before you can call them. You can deploy your APIs as [web apps](../app-service/overview.md), but consider deploying your APIs as [API apps](../app-service/app-service-web-tutorial-rest-api.md), which make your job easier when you build, host, and consume APIs in the cloud and on premises. You don't have to change any code in your APIs - just deploy your code to an API app. You can host your APIs on [Azure App Service](../app-service/overview.md), a platform-as-a-service (PaaS) offering that provides highly scalable, easy API hosting.
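For example, if you package your API's build output as a .zip file, one way to push it to an existing App Service app is the `Publish-AzWebApp` cmdlet in Azure PowerShell. This is only a sketch; the resource group, app name, and archive path are placeholders:

```powershell
# Zip-deploy your API's build output to an existing web app or API app.
Publish-AzWebApp -ResourceGroupName "my-resource-group" `
    -Name "my-custom-api-app" `
    -ArchivePath "C:\build\my-custom-api.zip" `
    -Force
```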
+
+Although you can call any API from a logic app workflow, for the best experience, add [Swagger metadata](https://swagger.io/specification/) that describes your API's operations and parameters. This Swagger document helps your API integrate more easily and work better with logic app workflows.
## Deploy your API as a web app or API app
-Before you can call your custom API from a logic app,
-deploy your API as a web app or API app to Azure App Service.
-To make your Swagger document readable by the Logic Apps Designer,
-set the API definition properties and turn on
-[cross-origin resource sharing (CORS)](../app-service/overview.md)
-for your web app or API app.
+Before you can call your custom API from a logic app workflow, deploy your API as a web app or API app to Azure App Service.
+To make your Swagger document readable by your workflow, set the API definition properties and turn on [cross-origin resource sharing (CORS)](../app-service/overview.md) for your web app or API app.
-1. In the [Azure portal](https://portal.azure.com),
-select your web app or API app.
+1. In the [Azure portal](https://portal.azure.com), select your web app or API app.
-2. In the app menu that opens,
-under **API**, choose **API definition**.
-Set the **API definition location**
-to the URL for your swagger.json file.
+1. In the app menu that opens, under **API**, select **API definition**. Set the **API definition location** to the URL for your swagger.json file.
- Usually, the URL appears in this format:
- `https://{name}.azurewebsites.net/swagger/docs/v1)`
+ Usually, the URL appears in this format: `https://{name}.azurewebsites.net/swagger/docs/v1`
- ![Link to Swagger document for your custom API](./media/logic-apps-custom-api-deploy-call/custom-api-swagger-url.png)
+ ![Screenshot showing Azure portal with web app's "API definition" pane open and "API definition location" box for URL to Swagger document for your custom API.](./media/logic-apps-custom-api-deploy-call/custom-api-swagger-url.png)
-3. Under **API**, choose **CORS**.
-Set the CORS policy for **Allowed origins** to **'*'** (allow all).
+3. Under **API**, select **CORS**. Set the CORS policy for **Allowed origins** to **'*'** (allow all).
- This setting permits requests from Logic App Designer.
+ This setting permits requests from the workflow designer.
- ![Permit requests from Logic App Designer to your custom API](./media/logic-apps-custom-api-deploy-call/custom-api-cors.png)
+ ![Screenshot shows web app's "CORS" pane with "Allowed origins" set to "*", which allows all.](./media/logic-apps-custom-api-deploy-call/custom-api-cors.png)
-For more information, see
-[Host a RESTful API with CORS in Azure App Service](../app-service/app-service-web-tutorial-rest-api.md).
+For more information, review [Host a RESTful API with CORS in Azure App Service](../app-service/app-service-web-tutorial-rest-api.md).
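If you'd rather script these two settings than set them in the portal, a sketch using the generic `Set-AzResource` cmdlet might look like the following. The API version and the `apiDefinition` and `cors` property paths are assumptions based on the App Service site config schema, so verify them for your environment:

```powershell
# Point the app's API definition at your Swagger document and allow all CORS origins.
$props = @{
    apiDefinition = @{ url = "https://my-custom-api-app.azurewebsites.net/swagger/docs/v1" }
    cors          = @{ allowedOrigins = @("*") }
}

Set-AzResource -ResourceGroupName "my-resource-group" `
    -ResourceType "Microsoft.Web/sites/config" `
    -ResourceName "my-custom-api-app/web" `
    -ApiVersion "2022-03-01" `
    -Properties $props `
    -UsePatchSemantics `
    -Force
```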
## Call your custom API from logic app workflows
-After you set up the API definition properties and CORS,
-your custom API's triggers and actions should be available
-for you to include in your logic app workflow.
+After you set up the API definition properties and CORS, your custom API's triggers and actions should be available for you to include in your logic app workflow.
-* To view websites that have OpenAPI URLs,
-you can browse your subscription websites in the Logic Apps Designer.
+* To view websites that have OpenAPI URLs, you can browse your subscription websites in the workflow designer.
-* To view available actions and inputs by pointing at a Swagger document,
-use the [HTTP + Swagger action](../connectors/connectors-native-http-swagger.md).
+* To view available actions and inputs by pointing at a Swagger document, use the [HTTP + Swagger action](../connectors/connectors-native-http-swagger.md).
-* To call any API, including APIs that don't have or expose an Swagger document,
-you can always create a request with the [HTTP action](../connectors/connectors-native-http.md).
+* To call any API, including APIs that don't have or expose a Swagger document, you can always create a request with the [HTTP action](../connectors/connectors-native-http.md).
## Next steps
logic-apps Logic Apps Deploy Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-deploy-azure-resource-manager-templates.md
ms.suite: integration Previously updated : 08/04/2021 Last updated : 08/20/2022 ms.devlang: azurecli # Deploy Azure Resource Manager templates for Azure Logic Apps
-After you create an Azure Resource Manager template for your logic app, you can deploy your template in these ways:
+
+After you create an Azure Resource Manager template for your Consumption logic app, you can deploy your template in these ways:
* [Azure portal](#portal) * [Visual Studio](#visual-studio)
logic-apps Logic Apps Diagnosing Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-diagnosing-failures.md
Last updated 08/20/2022
# Troubleshoot and diagnose workflow failures in Azure Logic Apps + Your logic app workflow generates information that can help you diagnose and debug problems in your app. You can diagnose your workflow by reviewing the inputs, outputs, and other information for each step in the workflow using the Azure portal. Or, you can add some steps to a workflow for runtime debugging. <a name="check-trigger-history"></a>
logic-apps Logic Apps Enterprise Integration Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-agreements.md
Previously updated : 08/30/2022 Last updated : 08/23/2022 # Add agreements between partners in integration accounts for workflows in Azure Logic Apps + After you add partners to your integration account, specify how partners exchange messages by defining [*agreements*](logic-apps-enterprise-integration-agreements.md) in your integration account. Agreements help organizations communicate seamlessly with each other by defining the specific industry-standard protocol for exchanging messages and by providing the following shared benefits: * Enable organizations to exchange information by using a well-known format.
logic-apps Logic Apps Enterprise Integration As2 Mdn Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-mdn-acknowledgment.md
Previously updated : 08/30/2022 Last updated : 08/23/2022 # MDN acknowledgments for AS2 messages in Azure Logic Apps + In Azure Logic Apps, you can create workflows that handle AS2 messages for Electronic Data Interchange (EDI) communication when you use **AS2** operations. In EDI messaging, acknowledgments provide the status from processing an EDI interchange. When receiving an interchange, the [**AS2 Decode** action](logic-apps-enterprise-integration-as2.md#decode) can return a Message Disposition Notification (MDN) or acknowledgment to the sender. An MDN verifies the following items: * The receiving partner successfully received the original message.
logic-apps Logic Apps Enterprise Integration As2 Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-message-settings.md
Previously updated : 08/30/2022 Last updated : 08/23/2022 # Reference for AS2 message settings in agreements for Azure Logic Apps + This reference describes the properties that you can set in an AS2 agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you. <a name="AS2-incoming-messages"></a>
logic-apps Logic Apps Enterprise Integration As2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2.md
Previously updated : 08/30/2022 Last updated : 08/23/2022 # Exchange AS2 messages using workflows in Azure Logic Apps + To send and receive AS2 messages in workflows that you create using Azure Logic Apps, use the **AS2** connector, which provides triggers and actions that support and manage AS2 (version 1.2) communication. * If you're working with the **Logic App (Consumption)** resource type and don't need tracking capabilities, use the **AS2 (v2)** connector, rather than the original **AS2** connector, which is being deprecated.
logic-apps Logic Apps Enterprise Integration B2b Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-b2b-business-continuity.md
Previously updated : 08/20/2022 Last updated : 08/23/2022 # Set up cross-region disaster recovery for integration accounts in Azure Logic Apps + B2B workloads involve money transactions like orders and invoices. During a disaster event, it's critical for a business to quickly recover to meet the business-level SLAs agreed upon with their partners.
logic-apps Logic Apps Enterprise Integration B2b List Errors Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-b2b-list-errors-solutions.md
Previously updated : 08/20/2022 Last updated : 08/23/2022 # B2B errors and solutions for Azure Logic Apps
-This article helps you troubleshoot errors that might happen in Logic Apps B2B
+
+This article helps you troubleshoot errors that might happen in Azure Logic Apps B2B
scenarios and suggests appropriate actions for correcting those errors. ## Agreement resolution
logic-apps Logic Apps Enterprise Integration B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-b2b.md
Previously updated : 08/30/2022 Last updated : 08/23/2022 # Exchange B2B messages between partners using workflows in Azure Logic Apps + When you have an integration account that defines trading partners and agreements, you can create an automated business-to-business (B2B) workflow that exchanges messages between trading partners by using Azure Logic Apps. Your workflow can use connectors that support industry-standard protocols, such as AS2, X12, EDIFACT, and RosettaNet. You can also include operations provided by other [connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors), such as Office 365 Outlook, SQL Server, and Salesforce. This article shows how to create an example logic app workflow that can receive HTTP requests by using a **Request** trigger, decode message content by using the **AS2 Decode** and **Decode X12** actions, and return a response by using the **Response** action. The example uses the workflow designer in the Azure portal, but you can follow similar steps for the workflow designer in Visual Studio.
logic-apps Logic Apps Enterprise Integration Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-certificates.md
Previously updated : 08/30/2022 Last updated : 08/23/2022 # Add certificates to integration accounts for securing messages in workflows with Azure Logic Apps + When you need to exchange confidential messages in a logic app business-to-business (B2B) workflow, you can increase the security around this communication by using certificates. A certificate is a digital document that helps secure communication in the following ways: * Checks the participants' identities in electronic communications.
logic-apps Logic Apps Enterprise Integration Create Integration Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-create-integration-account.md
Previously updated : 08/30/2022 Last updated : 08/23/2022 # Create and manage integration accounts for B2B workflows in Azure Logic Apps with the Enterprise Integration Pack + Before you can build business-to-business (B2B) and enterprise integration workflows using Azure Logic Apps, you need to create an *integration account* resource. This account is a scalable cloud-based container in Azure that simplifies how you store and manage B2B artifacts that you define and use in your workflows for B2B scenarios. Such artifacts include [trading partners](logic-apps-enterprise-integration-partners.md), [agreements](logic-apps-enterprise-integration-agreements.md), [maps](logic-apps-enterprise-integration-maps.md), [schemas](logic-apps-enterprise-integration-schemas.md), [certificates](logic-apps-enterprise-integration-certificates.md), and so on. You also need to have an integration account to electronically exchange B2B messages with other organizations. When other organizations use protocols and message formats different from your organization, you have to convert these formats so your organization's system can process those messages. Supported industry-standard protocols include [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), [EDIFACT](logic-apps-enterprise-integration-edifact.md), and [RosettaNet](logic-apps-enterprise-integration-rosettanet.md). > [!TIP]
logic-apps Logic Apps Enterprise Integration Edifact Contrl Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-contrl-acknowledgment.md
Last updated 08/20/2022
# CONTRL acknowledgments and error codes for EDIFACT messages in Azure Logic Apps + In Azure Logic Apps, you can create workflows that handle EDIFACT messages for Electronic Data Interchange (EDI) communication when you use **EDIFACT** operations. In EDI messaging, acknowledgments provide the status from processing an EDI interchange. When receiving an interchange, the [**EDIFACT Decode** action](logic-apps-enterprise-integration-edifact.md) can return one or more types of acknowledgments to the sender, based on which acknowledgment types are enabled and the specified level of validation. This topic provides a brief overview about the EDIFACT CONTRL ACK, the CONTRL ACK segments in an interchange, and the error codes used in those segments.
logic-apps Logic Apps Enterprise Integration Edifact Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-message-settings.md
Last updated 08/20/2022
# Reference for EDIFACT message settings in agreements for Azure Logic Apps + This reference describes the properties that you can set in an EDIFACT agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you. <a name="EDIFACT-inbound-messages"></a>
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
Previously updated : 11/02/2021 Last updated : 08/23/2022 # Encode and decode flat files in Azure Logic Apps + Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. By building a logic app workflow, you can encode and decode flat files by using the [built-in](../connectors/built-in.md#integration-account-built-in) **Flat File** actions. Although no **Flat File** triggers are available, you can use a different trigger or action to get or feed the XML content from various sources into your workflow for encoding or decoding. For example, you can use the Request trigger, another app, or other [connectors supported by Azure Logic Apps](../connectors/apis-list.md). You can use **Flat File** actions with workflows in the [**Logic App (Consumption)** and **Logic App (Standard)** resource types](single-tenant-overview-compare.md).
logic-apps Logic Apps Enterprise Integration Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-transform.md
Previously updated : 09/15/2021 Last updated : 08/23/2022 # Transform XML in workflows with Azure Logic Apps + In enterprise integration business-to-business (B2B) scenarios, you might have to convert XML between formats. Your logic app workflow can transform XML by using the **Transform XML** action and a predefined [*map*](logic-apps-enterprise-integration-maps.md). For example, suppose you regularly receive B2B orders or invoices from a customer that uses the YearMonthDay date format (YYYYMMDD). However, your organization uses the MonthDayYear date format (MMDDYYYY). You can create and use a map that transforms the YearMonthDay format to the MonthDayYear format before storing the order or invoice details in your customer activity database. If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overview.md)? For more information about B2B enterprise integration, review [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md).
logic-apps Logic Apps Enterprise Integration X12 997 Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-997-acknowledgment.md
Last updated 08/20/2022
# 997 functional acknowledgments and error codes for X12 messages in Azure Logic Apps + In Azure Logic Apps, you can create workflows that handle X12 messages for Electronic Data Interchange (EDI) communication when you use **X12** operations. In EDI messaging, acknowledgments provide the status from processing an EDI interchange. When receiving an interchange, the [**X12 Decode** action](logic-apps-enterprise-integration-x12-decode.md) can return one or more types of acknowledgments to the sender, based on which acknowledgment types are enabled and the specified level of validation. For example, the receiver reports the status from validating the Functional Group Header (GS) and Functional Group Trailer (GE) in the received X12-encoded message by sending a *997 functional acknowledgment (ACK)* along with each error that happens during processing. The **X12 Decode** action always generates a 4010-compliant 997 ACK, while both the [**X12 Encode** action](logic-apps-enterprise-integration-x12-encode.md) and **X12 Decode** action can validate a 5010-compliant 997 ACK.
logic-apps Logic Apps Enterprise Integration X12 Ta1 Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-ta1-acknowledgment.md
Last updated 08/20/2022
# TA1 technical acknowledgments and error codes for X12 messages in Azure Logic Apps + In Azure Logic Apps, you can create workflows that handle X12 messages for Electronic Data Interchange (EDI) communication when you use **X12** operations. In EDI messaging, acknowledgments provide the status from processing an EDI interchange. When receiving an interchange, the [**X12 Decode** action](logic-apps-enterprise-integration-x12-decode.md) can return one or more types of acknowledgments to the sender, based on which acknowledgment types are enabled and the specified level of validation. For example, the receiver reports the status from validating the Interchange Control Header (ISA) and Interchange Control Trailer (IEA) in the received X12-encoded message by sending a *TA1 technical acknowledgment (ACK)*. If this header and trailer are valid, the receiver sends a positive TA1 ACK, no matter the status of other content. If the header and trailer aren't valid, the receiver sends a **TA1 ACK** with an error code instead.
logic-apps Logic Apps Enterprise Integration X12 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12.md
Last updated 08/20/2022
# Exchange X12 messages for B2B enterprise integration using Azure Logic Apps and Enterprise Integration Pack + In Azure Logic Apps, you can create workflows that work with X12 messages by using **X12** operations. These operations include triggers and actions that you can use in your workflow to handle X12 communication. You can add X12 triggers and actions in the same way as any other trigger and action in a workflow, but you need to meet extra prerequisites before you can use X12 operations. This article describes the requirements and settings for using X12 triggers and actions in your workflow. If you're looking for EDIFACT messages instead, review [Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md). If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overview.md) and [Quickstart: Create an integration workflow with multi-tenant Azure Logic Apps and the Azure portal](quickstart-create-first-logic-app-workflow.md).
logic-apps Logic Apps Enterprise Integration Xml Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-xml-validation.md
Last updated 08/20/2022
# Validate XML in workflows with Azure Logic Apps + In enterprise integration business-to-business (B2B) scenarios, the trading partners in an agreement often have to make sure that the messages they exchange are valid before any data processing can start. Your logic app workflow can validate XML messages and documents by using the **XML Validation** action and a predefined [schema](logic-apps-enterprise-integration-schemas.md). If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overview.md)? For more information about B2B enterprise integration, review [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md).
logic-apps Logic Apps Exception Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-exception-handling.md
Previously updated : 05/26/2022 Last updated : 08/23/2022 # Handle errors and exceptions in Azure Logic Apps + The way that any integration architecture appropriately handles downtime or issues caused by dependent systems can pose a challenge. To help you create robust and resilient integrations that gracefully handle problems and failures, Azure Logic Apps provides a first-class experience for handling errors and exceptions. <a name="retry-policies"></a>
logic-apps Logic Apps Scenario Function Sb Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-function-sb-trigger.md
# Call or trigger logic apps by using Azure Functions and Azure Service Bus + You can use [Azure Functions](../azure-functions/functions-overview.md) to trigger a logic app when you need to deploy a long-running listener or task. For example, you can create a function that listens in on an [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) queue and immediately fires a logic app as a push trigger. ## Prerequisites
logic-apps Logic Apps Schema 2016 04 01 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-schema-2016-04-01.md
Last updated 08/20/2022
# Schema updates for Azure Logic Apps - June 1, 2016 + The [updated schema](https://schema.management.azure.com/schemas/2016-06-01/Microsoft.Logic.json) and API version for Azure Logic Apps includes key improvements that make logic apps more reliable and easier to use:
logic-apps Logic Apps Track Integration Account As2 Tracking Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-track-integration-account-as2-tracking-schemas.md
Last updated 08/20/2022
# Create schemas for tracking AS2 messages in Azure Logic Apps + To help you monitor success, errors, and message properties for business-to-business (B2B) transactions, you can use these AS2 tracking schemas in your integration account: * AS2 message tracking schema
logic-apps Logic Apps Track Integration Account X12 Tracking Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-track-integration-account-x12-tracking-schema.md
Last updated 08/20/2022
# Create schemas for tracking X12 messages in Azure Logic Apps + To help you monitor success, errors, and message properties for business-to-business (B2B) transactions, you can use these X12 tracking schemas in your integration account: * X12 transaction set tracking schema
logic-apps Manage Logic Apps With Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/manage-logic-apps-with-azure-portal.md
Last updated 04/01/2022
# Manage logic apps in the Azure portal + This article shows how to edit, disable, enable, or delete Consumption logic apps with the Azure portal. You can also [manage Consumption logic apps in Visual Studio](manage-logic-apps-with-visual-studio.md). To manage Standard logic apps, review [Create a Standard workflow with single-tenant Azure Logic Apps in the Azure portal](create-single-tenant-workflows-azure-portal.md). If you're new to Azure Logic Apps, review [What is Azure Logic Apps](logic-apps-overview.md)?
logic-apps Manage Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/manage-logic-apps-with-visual-studio.md
ms.suite: integration
Previously updated : 01/28/2022 Last updated : 08/23/2022 # Manage logic apps with Visual Studio
logic-apps Monitor B2b Messages Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-b2b-messages-log-analytics.md
ms.suite: integration Previously updated : 01/30/2020 Last updated : 08/23/2022 # Set up Azure Monitor logs and collect diagnostics data for B2B messages in Azure Logic Apps + > [!NOTE] > This article applies only to Consumption logic apps. For information about monitoring Standard logic apps, review > [Enable or open Application Insights after deployment for Standard logic apps](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights).
logic-apps Monitor Logic Apps Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps-log-analytics.md
Last updated 03/14/2022
# Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps + > [!NOTE] > This article applies only to Consumption logic apps. For information about monitoring Standard logic apps, review > [Enable or open Application Insights after deployment for Standard logic apps](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights).
logic-apps Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/plan-manage-costs.md
Last updated 08/20/2022
# Plan and manage costs for Azure Logic Apps + This article helps you plan and manage costs for Azure Logic Apps. Before you create or add any resources using this service, estimate your costs by using the Azure pricing calculator. After you start using Logic Apps resources, you can set budgets and monitor costs by using [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To identify areas where you might want to act, you can also review forecasted costs and monitor spending trends. Keep in mind that costs for Logic Apps are only part of the monthly costs in your Azure bill. Although this article explains how to estimate and manage costs for Logic Apps, you're billed for all the Azure services and resources that are used in your Azure subscription, including any third-party services. After you're familiar with managing costs for Logic Apps, you can apply similar methods to manage costs for all the Azure services used in your subscription.
logic-apps Quickstart Create First Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-first-logic-app-workflow.md
ms.suite: integration
Previously updated : 05/02/2022 Last updated : 08/23/2022 #Customer intent: As a developer, I want to create my first automated integration workflow that runs in Azure Logic Apps using the Azure portal.
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
For Azure DevOps deployments, you can deploy your logic app by using the [Azure
displayName: 'Deploy logic app workflows'
inputs:
  azureSubscription: 'MyServiceConnection'
- appType: 'workflowapp'
+ appType: 'functionAppLinux' ## Default: functionApp
  appName: 'MyLogicAppName'
  package: 'MyBuildArtifact.zip'
  deploymentMethod: 'zipDeploy'
machine-learning Concept Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-differential-privacy.md
- Title: Differential privacy in machine learning (preview)-
-description: Learn what differential privacy is and how differentially private systems preserve data privacy.
-- Previously updated : 10/21/2021-----
-#Customer intent: As a data scientist, I want to know what differential privacy is and how I can implement a differentially private systems.
--
-# What is differential privacy in machine learning (preview)?
-
-Learn about differential privacy in machine learning and how it works.
-
-As the amount of data that an organization collects and uses for analyses increases, so do concerns of privacy and security. Analyses require data. Typically, the more data used to train machine learning models, the more accurate they are. When personal information is used for these analyses, it's especially important that the data remains private throughout its use.
-
-## How differential privacy works
-
-Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy may be required for regulatory compliance.
--
-In traditional scenarios, raw data is stored in files and databases. When users analyze data, they typically use the raw data. This is a concern because it might infringe on an individual's privacy. Differential privacy tries to deal with this problem by adding "noise" or randomness to the data so that users can't identify any individual data points. At the least, such a system provides plausible deniability. Therefore, the privacy of individuals is preserved with limited impact on the accuracy of the data.
-
-In differentially private systems, data is shared through requests called **queries**. When a user submits a query for data, operations known as **privacy mechanisms** add noise to the requested data. Privacy mechanisms return an *approximation of the data* instead of the raw data. This privacy-preserving result appears in a **report**. Reports consist of two parts, the actual data computed and a description of how the data was created.
-
-## Differential privacy metrics
-
-Differential privacy tries to protect against the possibility that a user can produce an indefinite number of reports to eventually reveal sensitive data. A value known as **epsilon** measures how noisy, or private, a report is. Epsilon has an inverse relationship to noise or privacy. The lower the epsilon, the more noisy (and private) the data is.
-
-Epsilon values are non-negative. Values below 1 provide full plausible deniability. Anything above 1 comes with a higher risk of exposure of the actual data. As you implement machine learning solutions with differential privacy, you want to use data with epsilon values between 0 and 1.
-
-Another value directly correlated to epsilon is **delta**. Delta is a measure of the probability that a report isn't fully private. The higher the delta, the higher the epsilon. Because these values are correlated, epsilon is used more often.
-
-## Limit queries with a privacy budget
-
-To ensure privacy in systems where multiple queries are allowed, differential privacy defines a rate limit. This limit is known as a **privacy budget**. Privacy budgets prevent data from being recreated through multiple queries. Privacy budgets are allocated an epsilon amount, typically between 1 and 3 to limit the risk of reidentification. As reports are generated, privacy budgets keep track of the epsilon value of individual reports as well as the aggregate for all reports. After a privacy budget is spent or depleted, users can no longer access data.
-
-## Reliability of data
-
-Although the preservation of privacy should be the goal, there's a tradeoff when it comes to usability and reliability of the data. In data analytics, accuracy can be thought of as a measure of uncertainty introduced by sampling errors. This uncertainty tends to fall within certain bounds. **Accuracy** from a differential privacy perspective instead measures the reliability of the data, which is affected by the uncertainty introduced by the privacy mechanisms. In short, a higher level of noise or privacy translates to data that has a lower epsilon, accuracy, and reliability.
-
-## Open-source differential privacy libraries
-
-SmartNoise is an open-source project that contains components for building machine learning solutions with differential privacy. SmartNoise is made up of the following top-level components:
--- SmartNoise Core library-- SmartNoise SDK library-
-### SmartNoise Core
-
-The core library includes the following privacy mechanisms for implementing a differentially private system:
-
-|Component |Description |
-|||
-|Analysis | A graph description of arbitrary computations. |
-|Validator | A Rust library that contains a set of tools for checking and deriving the necessary conditions for an analysis to be differentially private. |
-|Runtime | The medium to execute the analysis. The reference runtime is written in Rust but runtimes can be written using any computation framework such as SQL and Spark depending on your data needs. |
-|Bindings | Language bindings and helper libraries to build analyses. Currently SmartNoise provides Python bindings. |
-
-### SmartNoise SDK
-
-The system library provides the following tools and services for working with tabular and relational data:
-
-|Component |Description |
-|||
-|Data Access | Library that intercepts and processes SQL queries and produces reports. This library is implemented in Python and supports the following ODBC and DBAPI data sources:<ul><li>PostgreSQL</li><li>SQL Server</li><li>Spark</li><li>Presto</li><li>Pandas</li></ul>|
-|Service | Execution service that provides a REST endpoint to serve requests or queries against shared data sources. The service is designed to allow composition of differential privacy modules that operate on requests containing different delta and epsilon values, also known as heterogeneous requests. This reference implementation accounts for additional impact from queries on correlated data. |
-|Evaluator | Stochastic evaluator that checks for privacy violations, accuracy, and bias. The evaluator supports the following tests: <ul><li>Privacy Test - Determines whether a report adheres to the conditions of differential privacy.</li><li>Accuracy Test - Measures whether the reliability of reports falls within the upper and lower bounds given a 95% confidence level.</li><li>Utility Test - Determines whether the confidence bounds of a report are close enough to the data while still maximizing privacy.</li><li>Bias Test - Measures the distribution of reports for repeated queries to ensure they aren't unbalanced</li></ul> |
-
-## Next steps
-
-Learn more about differential privacy in machine learning:
--
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Previously updated : 04/15/2022 Last updated : 08/15/2022
The following table shows which operations are supported by each of the tools av
| Track and log metrics, parameters and models | **&check;** | | | | Retrieve metrics, parameters and models | **&check;**<sup>1</sup> | <sup>2</sup> | **&check;** | | Submit training jobs with MLflow projects | **&check;** | | |
-| Submit training jobs with inputs and outputs | | **&check;** | |
-| Submit training pipelines | | **&check;** | |
-| Manage experiments runs | **&check;**<sup>1</sup> | **&check;** | **&check;** |
+| Submit training jobs with inputs and outputs | | **&check;** | **&check;** |
+| Submit training jobs using ML pipelines | | **&check;** | |
+| Manage experiments and runs | **&check;**<sup>1</sup> | **&check;** | **&check;** |
| Manage MLflow models | **&check;**<sup>3</sup> | **&check;** | **&check;** | | Manage non-MLflow models | | **&check;** | **&check;** | | Deploy MLflow models to Azure Machine Learning | **&check;**<sup>4</sup> | **&check;** | **&check;** |
machine-learning How To Track Experiments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/how-to-track-experiments.md
Delete the Inference Compute you created in Step 1 so that you don't incur ongoi
## Next Steps
-* Learn more about [deploying models in AzureML](../how-to-deploy-and-where.md)
+* Learn more about [deploying models in AzureML](../v1/how-to-deploy-and-where.md)
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
Microsoft strives to ensure that Azure services are always available. However, u
* Design for high availability of your solution. * Initiate a failover to another region.
-> [!NOTE]
+> [!IMPORTANT]
> Azure Machine Learning itself does not provide automatic failover or disaster recovery. Backup and restore of workspace metadata such as run history is unavailable. In case you have accidentally deleted your workspace or corresponding components, this article also provides you with currently supported recovery options.
If you accidentally deleted your workspace it is currently not possible to recov
## Next steps
-To learn about repeatable infrastructure deployments with Azure Machine Learning, use an [Azure Resource Manager template](https://docs.microsoft.com/azure/machine-learning/tutorial-create-secure-workspace-template).
+To learn about repeatable infrastructure deployments with Azure Machine Learning, use an [Azure Resource Manager template](/azure/machine-learning/tutorial-create-secure-workspace-template).
machine-learning How To Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-differential-privacy.md
- Title: Differential privacy how-to - SmartNoise (preview)-
-description: Learn how to apply differential privacy best practices to Azure Machine Learning models by using the SmartNoise open-source libraries.
-------- Previously updated : 10/21/2021
-# Customer intent: As an experienced data scientist, I want to use differential privacy in Azure Machine Learning.
--
-# Use differential privacy in Azure Machine Learning (preview)
-
-Learn how to apply differential privacy best practices to Azure Machine Learning models by using the SmartNoise Python open-source libraries.
-
-Differential privacy is the gold-standard definition of privacy. Systems that adhere to this definition of privacy provide strong assurances against a wide range of data reconstruction and reidentification attacks, including attacks by adversaries who possess auxiliary information. Learn more about [how differential privacy works](../concept-differential-privacy.md).
--
-## Prerequisites
-- If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-- [Python 3](https://www.python.org/downloads/)
-
-## Install SmartNoise Python libraries
-
-### Standalone installation
-
-The libraries are designed to work from distributed Spark clusters, and can be installed just like any other package.
-
-The instructions below assume that your `python` and `pip` commands are mapped to `python3` and `pip3`.
-
-Use pip to install the [SmartNoise Python packages](https://pypi.org/project/opendp-smartnoise/).
-
-`pip install opendp-smartnoise`
-
-To verify that the packages are installed, launch a Python prompt and type:
-
-```python
-import opendp.smartnoise.core
-import opendp.smartnoise.sql
-```
-
-If the imports succeed, the libraries are installed, and ready to use.
-
-### Docker image installation
-
-You can also use SmartNoise packages with Docker.
-
-Pull the `opendp/smartnoise` image to use the libraries inside a Docker container that includes Spark, Jupyter, and sample code.
--
-```sh
-docker pull opendp/smartnoise:privacy
-```
-
-Once you've pulled the image, launch the Jupyter server:
-
-```sh
-docker run --rm -p 8989:8989 --name smartnoise-run opendp/smartnoise:privacy
-```
-
-This starts a Jupyter server at port `8989` on your `localhost`, with password `pass@word99`. Assuming you used the command line above to start the container with the name `smartnoise-run`, you can open a bash terminal in the Jupyter server by running:
-
-```sh
-docker exec -it smartnoise-run bash
-```
-
-The Docker instance clears all state on shutdown, so you'll lose any notebooks you create in the running instance. To remedy this, you can mount a local folder to the container when you launch it:
-
-```sh
-docker run --rm -p 8989:8989 --name smartnoise-run --mount type=bind,source=/Users/your_name/my-notebooks,target=/home/privacy/my-notebooks opendp/smartnoise:privacy
-```
-
-Any notebooks you create under the *my-notebooks* folder will be stored in your local filesystem.
-
-## Perform data analysis
-
-To prepare a differentially private release, you need to choose a data source, a statistic, and some privacy parameters, indicating the level of privacy protection.
-
-This sample references the California Public Use Microdata (PUMS), representing anonymized records of citizen demographics:
-
-```python
-import os
-import sys
-import numpy as np
-import opendp.smartnoise.core as sn
-
-data_path = os.path.join('.', 'data', 'PUMS_california_demographics_1000', 'data.csv')
-var_names = ["age", "sex", "educ", "race", "income", "married", "pid"]
-```
-
-In this example, we compute the mean and the variance of the age. We use a total `epsilon` of 1.0 (epsilon is our privacy parameter), spreading our privacy budget across the two quantities we want to compute. Learn more about [privacy metrics](../concept-differential-privacy.md#differential-privacy-metrics).
-
-```python
-with sn.Analysis() as analysis:
- # load data
- data = sn.Dataset(path = data_path, column_names = var_names)
-
- # get mean of age
- age_mean = sn.dp_mean(data = sn.cast(data['age'], type="FLOAT"),
- privacy_usage = {'epsilon': .65},
- data_lower = 0.,
- data_upper = 100.,
- data_n = 1000
- )
- # get variance of age
- age_var = sn.dp_variance(data = sn.cast(data['age'], type="FLOAT"),
- privacy_usage = {'epsilon': .35},
- data_lower = 0.,
- data_upper = 100.,
- data_n = 1000
- )
-analysis.release()
-
-print("DP mean of age: {0}".format(age_mean.value))
-print("DP variance of age: {0}".format(age_var.value))
-print("Privacy usage: {0}".format(analysis.privacy_usage))
-```
-
-The results look something like those below:
-
-```text
-DP mean of age: 44.55598845931517
-DP variance of age: 231.79044646429134
-Privacy usage: approximate {
- epsilon: 1.0
-}
-```
-
-There are some important things to note about this example. First, the `Analysis` object represents a data processing graph. In this example, the mean and variance are computed from the same source node. However, you can include more complex expressions that combine inputs with outputs in arbitrary ways.
-
-The analysis graph includes `data_upper` and `data_lower` metadata, specifying the lower and upper bounds for ages. These values are used to precisely calibrate the noise to ensure differential privacy. These values are also used in some handling of outliers or missing values.
-
-Finally, the analysis graph keeps track of the total privacy budget spent.
-
-You can use the library to compose more complex analysis graphs, with several mechanisms, statistics, and utility functions:
-
-| Statistics | Mechanisms | Utilities |
-| - |||
-| Count | Gaussian | Cast |
-| Histogram | Geometric | Clamping |
-| Mean | Laplace | Digitize |
-| Quantiles | | Filter |
-| Sum | | Imputation |
-| Variance/Covariance | | Transform |
-
-See the [data analysis notebook](https://github.com/opendifferentialprivacy/smartnoise-samples/blob/master/analysis/basic_data_analysis.ipynb) for more details.
-
-## Approximate utility of differentially private releases
-
-Because differential privacy operates by calibrating noise, the utility of releases may vary depending on the privacy risk. Generally, the noise needed to protect each individual becomes negligible as sample sizes grow large, but can overwhelm the result for releases that target a single individual. Analysts can review the accuracy information for a release to determine how useful the release is:
-
-```python
-with sn.Analysis() as analysis:
- # load data
- data = sn.Dataset(path = data_path, column_names = var_names)
-
- # get mean of age
- age_mean = sn.dp_mean(data = sn.cast(data['age'], type="FLOAT"),
- privacy_usage = {'epsilon': .65},
- data_lower = 0.,
- data_upper = 100.,
- data_n = 1000
- )
-analysis.release()
-
-print("Age accuracy is: {0}".format(age_mean.get_accuracy(0.05)))
-```
-
-The result of that operation should look similar to that below:
-
-```text
-Age accuracy is: 0.2995732273553991
-```
-
-This example computes the mean as above, and uses the `get_accuracy` function to request accuracy at an `alpha` of 0.05. An `alpha` of 0.05 represents a 95% interval, in that the released value will fall within the reported accuracy bounds about 95% of the time. In this example, the reported accuracy is 0.3, which means the released value will be within an interval of width 0.6, about 95% of the time. It isn't correct to think of this value as an error bar, since the released value will fall outside the reported accuracy range at the rate specified by `alpha`, and values outside the range may be outside in either direction.
-
-Analysts may query `get_accuracy` for different values of `alpha` to get narrower or wider confidence intervals, without incurring other privacy cost.
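For example, here's a short sketch that reuses the `age_mean` node from the released analysis above; the printed values will differ from run to run:

```python
# A smaller alpha (0.01, a 99% interval) reports a wider accuracy value,
# while a larger alpha (0.10, a 90% interval) reports a narrower one.
print("99% interval accuracy: {0}".format(age_mean.get_accuracy(0.01)))
print("90% interval accuracy: {0}".format(age_mean.get_accuracy(0.10)))
```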
-
-## Generate a histogram
-
-The built-in `dp_histogram` function creates differentially private histograms over any of the following data types:
-- A continuous variable, where the set of numbers has to be divided into bins
-- A boolean or dichotomous variable that can only take on two values
-- A categorical variable, where there are distinct categories enumerated as strings
-
-Here's an example of an `Analysis` specifying bins for a continuous variable histogram:
-
-```python
-income_edges = list(range(0, 100000, 10000))
-
-with sn.Analysis() as analysis:
- data = sn.Dataset(path = data_path, column_names = var_names)
-
- income_histogram = sn.dp_histogram(
- sn.cast(data['income'], type='int', lower=0, upper=100),
- edges = income_edges,
- upper = 1000,
- null_value = 150,
- privacy_usage = {'epsilon': 0.5}
- )
-```
-
-Because the individuals are disjointly partitioned among histogram bins, the privacy cost is incurred only once per histogram, even if the histogram includes many bins.
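Following the same release pattern as the earlier examples, here's a sketch of how the bin counts could be read back; the counts are noisy and will vary between runs:

```python
analysis.release()

# income_histogram.value holds one differentially private count per bin defined by income_edges.
print("DP income histogram: {0}".format(income_histogram.value))
```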
-
-For more on histograms, see the [histograms notebook](https://github.com/opendifferentialprivacy/smartnoise-samples/blob/master/analysis/histograms.ipynb).
-
-## Generate a covariance matrix
-
-SmartNoise offers three different functionalities with its `dp_covariance` function:
-- Covariance between two vectors
-- Covariance matrix of a matrix
-- Cross-covariance matrix of a pair of matrices
-
-Here's an example of computing a scalar covariance:
-
-```python
-with sn.Analysis() as analysis:
- wn_data = sn.Dataset(path = data_path, column_names = var_names)
-
- age_income_cov_scalar = sn.dp_covariance(
- left = sn.cast(wn_data['age'],
- type = "FLOAT"),
- right = sn.cast(wn_data['income'],
- type = "FLOAT"),
- privacy_usage = {'epsilon': 1.0},
- left_lower = 0.,
- left_upper = 100.,
- left_n = 1000,
- right_lower = 0.,
- right_upper = 500_000.,
- right_n = 1000)
-```
-
-For more information, see the [covariance notebook](https://github.com/opendifferentialprivacy/smartnoise-samples/blob/master/analysis/covariance.ipynb).
-
-## Next Steps
--- Explore [SmartNoise sample notebooks](https://github.com/opendifferentialprivacy/smartnoise-samples/tree/master/analysis).
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md
Azure AD Conditional Access can be used to further control or restrict access to
All the authentication workflows for your workspace rely on Azure Active Directory. If you want users to authenticate using individual accounts, they must have accounts in your Azure AD. If you want to use service principals, they must exist in your Azure AD. Managed identities are also a feature of Azure AD.
-For more on Azure AD, see [What is Azure Active Directory authentication](/azure/active-directory/authentication/overview-authentication.md).
+For more on Azure AD, see [What is Azure Active Directory authentication](/azure/active-directory/authentication/overview-authentication).
Once you've created the Azure AD accounts, see [Manage access to Azure Machine Learning workspace](../how-to-assign-roles.md) for information on granting them access to the workspace and other operations in Azure Machine Learning.
The easiest way to create an SP and grant access to your workspace is by using t
### Managed identity with a VM
-1. Enable a [system-assigned managed identity for Azure resources on the VM](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity).
+1. Enable a [system-assigned managed identity for Azure resources on the VM](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#system-assigned-managed-identity).
1. From the [Azure portal](https://portal.azure.com), select your workspace and then select __Access Control (IAM)__.
1. Select __Add__, __Add Role Assignment__ to open the __Add role assignment page__.
ws = Workspace(subscription_id="your-sub-id",
## Use Conditional Access
-As an administrator, you can enforce [Azure AD Conditional Access policies](/azure/active-directory/conditional-access/overview.md) for users signing in to the workspace. For example, you
-can require two-factor authentication, or allow sign in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to Machine Learning Cloud app.
+As an administrator, you can enforce [Azure AD Conditional Access policies](/azure/active-directory/conditional-access/overview) for users signing in to the workspace. For example, you
+can require two-factor authentication, or allow sign in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-cloud-apps) to Machine Learning Cloud app.
## Next steps
machine-learning Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md
Title: Machine Learning SDK & CLI (v1)
+ Title: SDK & CLI (v1)
-description: Learn about the machine learning extension for the Azure CLI (v1).
+description: Learn about Azure Machine Learning SDK & CLI (v1).
Last updated 05/10/2022
-# Azure Machine Learning SDK & CLI v1
+# Azure Machine Learning SDK & CLI (v1)
[!INCLUDE [dev v1](../../../includes/machine-learning-dev-v1.md)]
managed-grafana Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/encryption.md
+
+ Title: Encryption in Azure Managed Grafana
+description: Learn how data is encrypted in Azure Managed Grafana.
++++ Last updated : 07/22/2022+++
+# Encryption in Azure Managed Grafana
+
+This article provides a short description of encryption within Azure Managed Grafana.
+
+## Data storage
+
+Azure Managed Grafana stores data in the following locations:
+
+- Resource-provider related system metadata is stored in Azure Cosmos DB.
+- Grafana instance user data is stored in a per instance Azure Database for PostgreSQL.
+
+## Encryption in Cosmos DB and Azure Database for PostgreSQL
+
+Managed Grafana leverages encryption offered by Cosmos DB and Azure Database for PostgreSQL.
+
+Data stored in Cosmos DB and Azure Database for PostgreSQL is encrypted at rest on storage devices and in transport over the network.
+
+For more information, go to [Encryption at rest in Azure Cosmos DB](/azure/cosmos-db/database-encryption-at-rest) and [Security in Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/concepts-security).
+
+## Server-side encryption
+
+The encryption model used by Managed Grafana is the server-side encryption model with Service-Managed keys.
+
+In this model, all key management aspects such as key issuance, rotation, and backup are managed by Microsoft. The Azure resource providers create the keys, place them in secure storage, and retrieve them when needed. For more information, go to [Server-side encryption using Service-Managed key](/azure/security/fundamentals/encryption-models).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Monitor your Azure Managed Grafana instance](how-to-monitor-managed-grafana-workspace.md)
managed-grafana Grafana App Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/grafana-app-ui.md
A Grafana dashboard is a collection of [panels](#panels) arranged in rows and co
## Next steps > [!div class="nextstepaction"]
-> [How to share an Azure Managed Grafana Preview instance](./how-to-share-grafana-workspace.md)
+> [How to share an Azure Managed Grafana instance](./how-to-share-grafana-workspace.md)
managed-grafana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/high-availability.md
Title: High availability in Azure Managed Grafana Preview
-description: Learn about high availability options provided by Azure Managed Grafana Preview
+ Title: Azure Managed Grafana service reliability
+description: Learn about service reliability and availability options provided by Azure Managed Grafana
Previously updated : 6/18/2022 Last updated : 7/27/2022
-# High availability in Azure Managed Grafana Preview
+# Azure Managed Grafana service reliability
-An Azure Managed Grafana Preview instance in the Standard tier is hosted on a dedicated set of virtual machines (VMs). By default, two VMs are deployed to provide redundancy. Each VM runs a Grafana server. A network load balancer distributes browser requests amongst the Grafana servers. On the backend, the Grafana servers are connected to a shared database that stores the configuration and other persistent data for an entire Managed Grafana instance.
+An Azure Managed Grafana instance in the Standard tier is hosted on a dedicated set of virtual machines (VMs). By default, two VMs are deployed to provide redundancy. Each VM runs a Grafana server. A network load balancer distributes browser requests amongst the Grafana servers. On the backend, the Grafana servers are connected to a shared database that stores the configuration and other persistent data for an entire Managed Grafana instance.
The load balancer always keeps track of which Grafana servers are available. In a dual-server setup, if it detects that one server is down, the load balancer starts sending all requests to the remaining server. That server should be able to pick up the browser sessions previously served by the other one based on information saved in the shared database. In the meantime, the Managed Grafana service will work to repair the unhealthy server or bring up a new one.
+Microsoft does not provide or set up disaster recovery for this service. In case of a region-level outage, the service will experience downtime, and users can set up additional instances in other regions for disaster recovery purposes.
+ ## Zone redundancy
-Normally the network load balancer, VMs and database that underpin a Managed Grafana instance are located within one Azure datacenter. The Managed Grafana Standard tier supports *zone redundancy*, which provides protection against zonal outages. When the zone redundancy option is selected, the VMs are spread across [availability zones](../availability-zones/az-overview.md#availability-zones) and other resources with availability zone enabled.
+Normally the network load balancer, VMs, and database that underpin a Managed Grafana instance are located in a region based on system resource availability, and could end up in the same Azure datacenter.
-> [!NOTE]
-> Zone redundancy can only be enabled when creating the Managed Grafana instance, and can't be modified subsequently. There's also an additional charge for using the zone redundancy option. Go to [Azure Managed Grafana pricing](https://azure.microsoft.com/pricing/details/managed-grafana/) for details.
+### With zone redundancy enabled
+
+When the zone redundancy option is enabled, VMs are spread across [availability zones](../availability-zones/az-overview.md#availability-zones) and other resources with availability zone enabled.
In a zone-wide outage, no user action is required. An impacted Managed Grafana instance will rebalance itself to take advantage of the healthy zone automatically. The Managed Grafana service will attempt to heal the affected instances during zone recovery.
+> [!NOTE]
+> Zone redundancy can only be enabled when creating the Managed Grafana instance, and can't be modified subsequently. The zone redundancy option comes with an additional cost. Go to [Azure Managed Grafana pricing](https://azure.microsoft.com/pricing/details/managed-grafana/) for details.
+
+### With zone redundancy disabled
+
+Zone redundancy is disabled in the Managed Grafana Standard tier by default. In this scenario, virtual machines are created as regional resources and shouldn't be expected to survive zone-down scenarios, as they can go down at the same time.
+ ## Next steps > [!div class="nextstepaction"]
-> [Create an Azure Managed Grafana Preview instance](./quickstart-managed-grafana-portal.md)
+> [Create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md)
managed-grafana How To Api Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-api-calls.md
Title: 'Call Grafana APIs programmatically'-
-description: Learn how to call Grafana APIs programmatically with Azure Active Directory (Azure AD) and an Azure service principal
+ Title: 'Call Grafana APIs programmatically with Azure Managed Grafana'
+
+description: Learn how to call Grafana APIs programmatically with Azure Active Directory and an Azure service principal
- Previously updated : 4/18/2022 + Last updated : 08/11/2022
-# How to call Grafana APIs programmatically
+# Tutorial: Call Grafana APIs programmatically
-In this article, you'll learn how to call Grafana APIs within Azure Managed Grafana Preview using a service principal.
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Assign an Azure Managed Grafana role to the service principal of your application
+> * Retrieve application details
+> * Get an access token
+> * Call Grafana APIs
## Prerequisites

-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
-- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md).
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+* An Azure Managed Grafana workspace. [Create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md).
+* An Azure Active Directory (Azure AD) application with a service principal. [Create an Azure AD application and service principal](../active-directory/develop/howto-create-service-principal-portal.md). For simplicity, use an application located in the same Azure AD tenant as your Managed Grafana instance.
## Sign in to Azure

Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
-## Assign roles to the service principal of your application and of your Azure Managed Grafana Preview instance
+## Assign an Azure Managed Grafana role to the service principal of your application
+
+1. In the Azure portal, open your Managed Grafana instance.
+1. Select **Access control (IAM)** in the navigation menu.
+1. Select **Add**, then **Add role assignment**.
+1. Select the **Grafana Editor** role and then **Next**.
+1. Under **Assign access to**, select **User, group, or service principal**.
+1. Select **Select members**, select your service principal, and hit **Select**.
+1. Select **Review + assign**.
+
+ :::image type="content" source="media/tutorial-api/role-assignment.png" alt-text="Screenshot of Add role assignment in the Azure platform.":::
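If you prefer the Azure CLI, the portal steps above roughly correspond to a single role assignment command. This is only a sketch; replace the placeholders with your application (client) ID and the full resource ID of your Managed Grafana instance:

```azurecli
az role assignment create \
  --assignee <client-id> \
  --role "Grafana Editor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Dashboard/grafana/<managed-grafana-resource-name>
```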
+
+## Retrieve application details
+
+You now need to gather some information, which you'll use to get a Grafana API access token, and call Grafana APIs.
+
+1. Find your tenant ID:
+ 1. In the Azure portal, enter *Azure Active Directory* in the **Search resources, services, and docs (G+ /)** box.
+ 1. Select **Azure Active Directory**.
+ 1. Select **Properties** from the left menu.
+ 1. Locate the field **Tenant ID** and save its value.
+
+ :::image type="content" source="./media/tutorial-api/tenant-id.png" alt-text="Screenshot of the Azure portal, getting tenant ID.":::
+
+1. Find your client ID:
+ 1. In the Azure portal, in Azure Active Directory, select **App registrations** from the left menu.
+ 1. Select your app.
+ 1. In **Overview**, find the **Application (client) ID** field and save its value.
+
+ :::image type="content" source="./media/tutorial-api/client-id.png" alt-text="Screenshot of the Azure portal, getting client ID.":::
+
+1. Create an application secret:
+ 1. In the Azure portal, in Azure Active Directory, select **App registrations** from the left menu.
+ 1. Select your app.
+ 1. Select **Certificates & secrets** from the left menu.
+ 1. Select **New client secret**.
+ 1. Create a new client secret and save its value.
-1. Start by [Creating an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). This guide takes you through creating an application and assigning a role to its service principal. For simplicity, use an application located in the same Azure Active Directory (Azure AD) tenant as your Grafana instance.
-1. Assign the role of your choice to the service principal for your Grafana resource. Refer to [How to share a Managed Grafana instance](how-to-share-grafana-workspace.md) to learn how to grant access to a Grafana instance. Instead of selecting a user, select **Service principal**.
+ :::image type="content" source="./media/tutorial-api/create-new-secret.png" alt-text="Screenshot of the Azure portal, creating a secret.":::
+
+ > [!NOTE]
+ > You can only access a secret's value immediately after creating it. Copy the value before leaving the page to use it in the next step of this tutorial.
+
+1. Find your Grafana endpoint URL:
+
+ 1. In the Azure portal, enter *Azure Managed Grafana* in the **Search resources, services, and docs (G+ /)** bar.
+ 1. Select **Azure Managed Grafana** and open your Managed Grafana workspace.
+ 1. Select **Overview** from the left menu and save the **Endpoint** value.
+
+ :::image type="content" source="media/tutorial-api/endpoint-url.png" alt-text="Screenshot of the Azure platform. Endpoint displayed in the Overview page.":::
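If you prefer the Azure CLI, the same details can be collected from a terminal. This is a sketch; `<app-name>` and `<client-id>` are placeholders for your application's display name and application (client) ID:

```azurecli
# Tenant ID of the signed-in account
az account show --query tenantId --output tsv

# Application (client) ID, looked up by display name
az ad app list --display-name <app-name> --query "[0].appId" --output tsv

# Create an additional client secret for the application; copy the password from the output
az ad app credential reset --id <client-id> --append
```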
## Get an access token
-To access Grafana APIs, you first need to get an access token. Here's an example showing how you can call Azure AD to retrieve a token:
+To access Grafana APIs, you need to get an access token. Follow the example below to call Azure AD and retrieve a token. Replace `<tenant-id>`, `<client-id>`, and `<client-secret>` with the tenant ID, application (client) ID, and client secret collected in the previous step.
```bash
curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
--d 'grant_type=client_credentials&client_id=<client-id>&client_secret=<application-secret>&resource=ce34e7e5-485f-4d76-964f-b3d2b16d1e4f' \
+-d 'grant_type=client_credentials&client_id=<client-id>&client_secret=<client-secret>&resource=ce34e7e5-485f-4d76-964f-b3d2b16d1e4f' \
https://login.microsoftonline.com/<tenant-id>/oauth2/token
```
-Replace `<tenant-id>` with your own Azure AD tenant ID, replace `<client-id>` with your client ID and `<application-secret>` with the application secret of the application you want to share.
- Here's an example of response: ```bash
Here's an example of response:
} ```
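If you're scripting this step, the token can be captured directly from the JSON response. Here's a sketch assuming `jq` is installed, using the same placeholders as above:

```bash
ACCESS_TOKEN=$(curl -s -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=client_credentials&client_id=<client-id>&client_secret=<client-secret>&resource=ce34e7e5-485f-4d76-964f-b3d2b16d1e4f' \
  https://login.microsoftonline.com/<tenant-id>/oauth2/token | jq -r '.access_token')
```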
-## Call a Grafana API
+## Call Grafana APIs
-You can now call the Grafana API using the access token retrieved in the previous step as the Authorization header. For example:
+You can now call Grafana APIs using the access token retrieved in the previous step as the Authorization header. For example:
```bash curl -X GET \
curl -X GET \
https://<grafana-url>/api/user ```
-Replace `<access-token>` with the access token retrieved in the previous step and replace `<grafana-url>` with the URL of your Grafana instance. For example `https://grafanaworkspace-abcd.cuse.grafana.azure.com`. This URL is displayed in the Azure platform, in the **Overview** page of your Managed Grafana instance.
+Replace `<access-token>` and `<grafana-url>` with the access token retrieved in the previous step and the endpoint URL of your Grafana instance. For example `https://my-grafana-abcd.cuse.grafana.azure.com`.
+
+## Clean up resources
+
+If you're not going to continue to use these resources, delete them with the following steps:
+
+1. Delete Azure Managed Grafana:
+ 1. In the Azure portal, in Azure Managed Grafana, select **Overview** from the left menu.
+ 1. Select **Delete**.
+ 1. Enter the resource name to confirm deletion and select **Delete**.
+1. Delete the Azure AD application:
+ 1. In the Azure portal, in Azure Active Directory, select **App registrations** from the left menu.
+ 1. Select your app.
+ 1. In the **Overview** tab, select **Delete**.
+ 1. Select **Delete**.
## Next steps > [!div class="nextstepaction"]
-> [Grafana UI](./grafana-app-ui.md)
+> [How to configure data sources](./how-to-data-source-plugins-managed-identity.md)
managed-grafana How To Authentication Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-authentication-permissions.md
+
+ Title: How to set up authentication and permissions in Azure Managed Grafana
+description: Learn how to set up Azure Managed Grafana authentication permissions using a system-assigned Managed identity or a Service Principal
++++ Last updated : 08/22/2022
+
+
+# Set up Azure Managed Grafana authentication and permissions
+
+To process data, Azure Managed Grafana needs permission to access data sources. In this guide, learn how to set up authentication during the creation of the Azure Managed Grafana instance, so that Grafana can access data sources using a system-assigned managed identity or a service principal. This guide also introduces the option to add a Monitoring Reader role assignment on the target subscription.
+
+## Prerequisite
+
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+
+## Sign in to Azure
+
+Sign in to Azure with the Azure portal or with the Azure CLI.
+
+### [Portal](#tab/azure-portal)
+
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+
+### [Azure CLI](#tab/azure-cli)
+
+Open your CLI and run the `az login` command to sign in to Azure.
+
+```azurecli
+az login
+```
+
+This command will prompt your web browser to launch and load an Azure sign-in page. If the browser fails to open, use device code flow with `az login --use-device-code`. For more sign-in options, go to [sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+++
+## Set up authentication and permissions during the creation of the instance
+
+Create a workspace with the Azure portal or the CLI.
+
+### [Portal](#tab/azure-portal)
+
+#### Create a workspace: basic and advanced settings
+
+1. In the upper-left corner of the home page, select **Create a resource**. In the **Search resources, services, and docs (G+/)** box, enter *Azure Managed Grafana* and select **Azure Managed Grafana**.
+
+ :::image type="content" source="media/authentication/find-azure-portal-grafana.png" alt-text="Screenshot of the Azure platform. Find Azure Managed Grafana in the marketplace." :::
+
+1. Select **Create**.
+
+1. In the **Basics** pane, enter the following settings.
+
+ | Setting | Description | Example |
+ ||--||
+ | Subscription ID | Select the Azure subscription you want to use. | *my-subscription* |
+ | Resource group name | Create a resource group for your Azure Managed Grafana resources. | *my-resource-group* |
+ | Location | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. | *(US) East US* |
+ | Name | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. | *my-grafana* |
+ | Zone redundancy | Zone redundancy is disabled by default. Zone redundancy automatically provisions and manages a standby replica of the Managed Grafana instance in a different availability zone within one region. There's an [additional charge](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing) for this option. | *Disabled* |
+
+ :::image type="content" source="media/authentication/create-form-basics.png" alt-text="Screenshot of the Azure portal. Create workspace form. Basics.":::
+
+1. Select **Next : Advanced >** to access API key creation and static IP address options. **Enable API key creation** and **Deterministic outbound IP** options are set to **Disable** by default. Optionally enable API key creation and enable a static IP address.
+
+1. Select **Next : Permission >** to control access rights for your Grafana instance and data sources:
+
+#### Create a workspace: permission settings
+
+Review the different methods below for managing permissions to access data sources within Azure Managed Grafana.
+
+##### With managed identity enabled
+
+System-assigned managed identity is the default authentication method provided to all users who have the Owner or User Access Administrator role for the subscription.
+
+> [!NOTE]
+> In the permissions tab, if Azure displays the message "You must be a subscription 'Owner' or 'User Access Administrator' to use this feature.", go to the next section of this doc to learn about setting up Azure Managed Grafana with system-assigned managed identity disabled.
+
+1. The box **System assigned managed identity** is set to **On** by default.
+
+1. The box **Add role assignment to this identity with 'Monitoring Reader' role on target subscription** is checked by default. If you uncheck this box, you will need to manually add role assignments for Azure Managed Grafana later on, as sketched after this list. For reference, go to [Modify access permissions to Azure Monitor](how-to-permissions.md).
+
+1. Under **Grafana administrator role**, the box **Include myself** is checked by default. Optionally select **Add** to grant the Grafana administrator role to more members.
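If you unchecked the role assignment box above, here's a sketch of adding the Monitoring Reader assignment manually with the Azure CLI; the resource names and IDs are placeholders:

```azurecli
# Principal ID of the instance's system-assigned managed identity
az grafana show --name <managed-grafana-resource-name> --resource-group <resource-group-name> --query identity.principalId --output tsv

# Grant the Monitoring Reader role on the target subscription to that principal
az role assignment create --assignee <principal-id> --role "Monitoring Reader" --scope /subscriptions/<subscription-id>
```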
++
+##### With managed identity disabled
+
+1. Azure Managed Grafana can also access data sources with managed identity disabled. You can use a service principal for authentication, using a client ID and secret. To use this method, in the **Permissions** tab, set the box **System assigned managed identity** to **Off**.
+
+1. **Add role assignment to this identity with 'Monitoring Reader' role on target subscription** is disabled.
+
+1. Under **Grafana administrator role**, if you have the Owner or User Access Administrator role for the subscription, the box **Include myself** is checked by default. Optionally select **Add** to grant the Grafana administrator role to more members. If you don't have the necessary role, you won't be able to manage Grafana access rights yourself.
+
+> [!NOTE]
+> Turning off system-assigned managed identity disables the Azure Monitor data source plugin for your Azure Managed Grafana instance. In this scenario, use a service principal instead of Azure Monitor to access data sources.
+
+#### Create a workspace: tags and review + create
+
+1. Select **Next : Tags** and optionally add tags to categorize resources.
+
+1. Select **Next : Review + create >**. After validation runs, select **Create**. Your Azure Managed Grafana resource is deploying.
+
+ :::image type="content" source="media/authentication/create-form-validation.png" alt-text="Screenshot of the Azure portal. Create workspace form. Validation.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the [az group create](/cli/azure/group#az-group-create) command below to create a resource group to organize the Azure resources needed. Skip this step if you already have a resource group you want to use.
+
+| Parameter | Description | Example |
+||-|--|
+| --name | Choose a unique name for your new resource group. | *grafana-rg* |
+| --location | Choose an Azure region where Managed Grafana is available. For more info, go to [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-grafana). | *eastus* |
+
+```azurecli
+az group create --location <location> --name <resource-group-name>
+```
+
+> [!NOTE]
+> The CLI experience for Azure Managed Grafana is part of the amg extension for the Azure CLI (version 2.30.0 or higher). The extension will automatically install the first time you run an `az grafana` command.
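If you want to install the extension explicitly instead of relying on the automatic installation, here's a sketch:

```azurecli
az extension add --name amg
```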
+
+#### With managed identity enabled
+
+System-assigned managed identity is the default authentication method for Azure Managed Grafana. Run the [az grafana create](/cli/azure/grafana#az-grafana-create) command below to create an Azure Managed Grafana instance with system-assigned managed identity.
+
+1. If you have the owner or administrator role on this subscription:
+
+ | Parameter | Description | Example |
+ ||--|-|
+ | --name | Choose a unique name for your new Managed Grafana instance. | *grafana-test* |
+ | --resource-group | Choose a resource group for your Managed Grafana instance. | *my-resource-group* |
+
+ ```azurecli
+ az grafana create --name <managed-grafana-resource-name> --resource-group <resource-group-name>
+ ```
+
+1. If you don't have the owner or administrator role on this subscription:
+
+ | Parameter | Description | Example |
+ ||--|-|
+ | --name | Choose a unique name for your new Managed Grafana instance. | *grafana-test* |
+ | --resource-group | Choose a resource group for your Managed Grafana instance. | *my-resource-group* |
+ | --skip-role-assignment | Enter `true` to skip role assignment if you don't have an owner or administrator role on this subscription. Skipping role assignment lets you create an instance without the roles required to assign permissions. | *--skip-role-assignment true* |
+
+ ```azurecli
+ az grafana create --name <managed-grafana-resource-name> --resource-group <resource-group-name> --skip-role-assignment true
+ ```
+
+> [!NOTE]
+> You must have the owner or administrator role on your subscription to use the system-assigned managed identity authentication method. If you don't have the necessary role, go to the next section to see how to create an Azure Managed Grafana instance with system-assigned managed identity disabled.
+
+#### With managed identity disabled
+
+Azure Managed Grafana can also access data sources with managed identity disabled. You can use a service principal for authentication, using a client ID and secret instead of a managed identity. To use this method, run the command below:
+
+| Parameter | Description | Example |
+|||-|
+| --name | Choose a unique name for your new Managed Grafana instance. | *grafana-test* |
+| --resource-group | Choose a resource group for your Managed Grafana instance. | *my-resource-group* |
+| --skip-system-assigned-identity | Enter `true` to disable system assigned identity. System-assigned managed identity is the default authentication method for Azure Managed Grafana. Use this option if you don't want to use a system-assigned managed identity. | *--skip-system-assigned-identity true* |
+| --skip-role-assignment | Enter `true` to skip role assignment if you don't have an owner or administrator role on this subscription. Skipping role assignment lets you create an instance without the roles required to assign permissions. | *--skip-role-assignment true* |
+
+```azurecli
+az grafana create --name <managed-grafana-resource-name> --resource-group <resource-group-name> --skip-role-assignment true --skip-system-assigned-identity true
+```
+
+> [!NOTE]
+> Turning off system-assigned managed identity disables the Azure Monitor data source plugin for your Azure Managed Grafana instance. In this scenario, use a service principal instead of Azure Monitor to access data sources.
+
+Once the deployment is complete, you'll see a note in the output of the command line stating that the instance was successfully created, along with additional information about the deployment.
+++
+## Update authentication and permissions
+
+After your workspace has been created, you can still turn on or turn off system-assigned managed identity and update Azure role assignments for Azure Managed Grafana.
+
+1. In the Azure portal, from the left menu, under **Settings**, select **Identity**.
+1. Set the status for System assigned to **Off**, to deactivate the system assigned managed identity, or set it to **On** to activate this authentication method.
+1. Under permissions, select **Azure role assignments** to set Azure roles.
+1. When done, select **Save**
+
+ :::image type="content" source="media/authentication/update-identity.jpg" alt-text="Screenshot of the Azure portal. Updating the system-assigned managed identity. Basics.":::
+
+> [!NOTE]
+> Disabling a system-assigned managed identity is irreversible. If you re-enable the identity in the future, Azure will create a new identity.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to configure data sources](./how-to-data-source-plugins-managed-identity.md)
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
Title: How to configure data sources for Azure Managed Grafana Preview
+ Title: How to configure data sources for Azure Managed Grafana
description: In this how-to guide, discover how you can configure data sources for Azure Managed Grafana using Managed Identity.
Last updated 3/31/2022
-# How to configure data sources for Azure Managed Grafana Preview
+# How to configure data sources for Azure Managed Grafana
## Prerequisites

-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./how-to-permissions.md).
- A resource including monitoring data with Managed Grafana monitoring permissions. Read [how to configure permissions](how-to-permissions.md) for more information.
Other data sources include:
You can find all available Grafana data sources by going to your resource and selecting **Configuration** > **Data sources** from the left menu. Search for the data source you need from the available list and select **Add data source**.
- :::image type="content" source="media/managed-grafana-how-to-source-plugins.png" alt-text="Screenshot of the Add data source page.":::
+ :::image type="content" source="media/data-sources/add-data-source.png" alt-text="Screenshot of the Add data source page.":::
> [!NOTE]
> Installing Grafana plugins listed on the page **Configuration** > **Plugins** isn't currently supported. For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.
-## Default configuration for Azure Monitor
+## Configuration for Azure Monitor
The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow these steps in your Managed Grafana endpoint:

1. From the left menu, select **Configuration** > **Data sources**.
- :::image type="content" source="media/managed-grafana-how-to-source-configuration.png" alt-text="Screenshot of the Add data sources page.":::
+ :::image type="content" source="media/data-sources/configuration.png" alt-text="Screenshot of the Add data sources page.":::
1. Azure Monitor is listed as a built-in data source for your Managed Grafana instance. Select **Azure Monitor**.
-1. In **Settings**, authenticate through **Managed Identity** and select your subscription from the dropdown list or enter your **App Registration** details
+1. In the **Settings** tab, authenticate through **Managed Identity** and select your subscription from the dropdown list or enter your **App Registration** details
- :::image type="content" source="media/managed-grafana-how-to-source-configuration-Azure-Monitor-settings.png" alt-text="Screenshot of the Azure Monitor page in data sources.":::
+ :::image type="content" source="media/data-sources/configure-Azure-Monitor.png" alt-text="Screenshot of the Azure Monitor page in data sources.":::
-Authentication and authorization are subsequently made through the provided managed identity. With Managed Identity, you can assign permissions for your Managed Grafana instance to access Azure Monitor data without having to manually manage service principals in Azure Active Directory (Azure AD).
+Authentication and authorization are then made through the provided managed identity. With Managed Identity, you can assign permissions for your Managed Grafana instance to access Azure Monitor data without having to manually manage service principals in Azure Active Directory (Azure AD).
+
+## Configuration for Azure Data Explorer
+
+Azure Managed Grafana can also access data sources using a service principal set up in Azure Active Directory (Azure AD).
+
+1. From the left menu, select **Configuration** > **Data sources**.
+
+ :::image type="content" source="media/data-sources/configuration.png" alt-text="Screenshot of the Add data sources page.":::
+
+1. **Azure Data Explorer Datasource** is listed as a built-in data source for your Managed Grafana instance. Select this data source.
+1. In the **Settings** tab, fill out the form under **Connection Details**, and optionally also edit the **Query Optimizations**, **Database schema settings**, and **Tracking** sections.
+
+ :::image type="content" source="media/data-sources/data-explorer-connection-settings.jpg" alt-text="Screenshot of the Connection details section for Data Explorer in data sources.":::
+
+ To complete this process, you need to have an Azure AD service principal and connect Azure AD with an Azure Data Explorer User. For more information, go to [Configuring the datasource in Grafana](https://github.com/grafana/azure-data-explorer-datasource#configuring-the-datasource-in-grafana).
+
+1. Select **Save & test** to validate the connection. "Success" is displayed on screen and confirms that Azure Managed Grafana is able to fetch the data source through the provided connection details, using the service principal in Azure AD.
## Next steps > [!div class="nextstepaction"]
-> [Modify access permissions to Azure Monitor](./how-to-permissions.md)
> [Share an Azure Managed Grafana instance](./how-to-share-grafana-workspace.md)
managed-grafana How To Enable Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-enable-zone-redundancy.md
+
+ Title: How to enable zone redundancy in Azure Managed Grafana
+description: Learn how to create a zone-redundant Managed Grafana instance.
++++ Last updated : 03/08/2022+
+
+
+# Enable zone redundancy in Azure Managed Grafana
+
+Azure Managed Grafana offers a zone-redundant option to protect your instance against datacenter failure. Enabling zone redundancy for Managed Grafana lets you deploy your Managed Grafana resources across a minimum of three [Azure availability zones](/azure/availability-zones/az-overview#azure-regions-with-availability-zones) within the same Azure region.
+
+In this how-to guide, learn how to enable zone redundancy for Azure Managed Grafana during the creation of your Managed Grafana instance.
+
+> [!NOTE]
+> Zone redundancy for Azure Managed Grafana is a billable option. [See prices](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing). Zone redundancy can only be enabled when creating the Managed Grafana instance, and can't be modified subsequently.
+
+## Prerequisite
+
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+
+## Sign in to Azure
+
+Sign in to Azure with the Azure portal or with the Azure CLI.
+
+### [Portal](#tab/azure-portal)
+
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+
+### [Azure CLI](#tab/azure-cli)
+
+Open your CLI and run the `az login` command to sign in to Azure.
+
+```azurecli
+az login
+```
+
+This command will prompt your web browser to launch and load an Azure sign-in page. If the browser fails to open, use device code flow with `az login --use-device-code`. For more sign-in options, go to [sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+++
+## Create a Managed Grafana workspace
+
+Create a workspace and enable zone redundancy with the Azure portal or the CLI.
+
+### [Portal](#tab/azure-portal)
+
+1. In the upper-left corner of the home page, select **Create a resource**. In the **Search resources, services, and docs (G+/)** box, enter *Azure Managed Grafana* and select **Azure Managed Grafana**.
+
+ :::image type="content" source="media/quickstart-portal/find-azure-portal-grafana.png" alt-text="Screenshot of the Azure platform. Find Azure Managed Grafana in the marketplace." :::
+
+1. Select **Create**.
+
+1. In the **Basics** pane, enter the following settings.
+
+ | Setting | Description | Example |
+ ||-||
+ | Subscription ID | Select the Azure subscription you want to use. | *my-subscription* |
+ | Resource group name | Create a resource group for your Azure Managed Grafana resources. | *my-resource-group* |
+ | Location | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. | *(US) East US* |
+ | Name | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. | *my-grafana* |
+ | Zone Redundancy | Set **Enable Zone Redundancy** to **Enable**. Zone redundancy automatically provisions and manages a standby replica of the Managed Grafana instance in a different availability zone within one region. | *Enabled* |
+
+1. Set **Zone redundancy** to **Enable**. Zone redundancy automatically provisions and manages a standby replica of the Managed Grafana instance in a different availability zone within one region. There's an [additional charge](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing) for this option.
+
+ :::image type="content" source="media/quickstart-portal/create-form-basics-with-redundancy.png" alt-text="Screenshot of the Azure portal. Create workspace form. Basics.":::
+
+1. Select **Next : Advanced >** to access API key creation and static IP address options. **Enable API key creation** and **Deterministic outbound IP** options are set to **Disable** by default. Optionally enable API key creation and enable a static IP address.
+
+ :::image type="content" source="media/quickstart-portal/create-form-advanced.png" alt-text="Screenshot of the Azure portal. Create workspace form. Advanced.":::
+
+1. Select **Next : Permission >** to control access rights for your Grafana instance and data sources:
+ 1. **System assigned managed identity** is set to **On**.
+
+ 1. The box **Add role assignment to this identity with 'Monitoring Reader' role on target subscription** is checked.
+
+ 1. The box **Include myself** under **Grafana administrator role** is checked by default. This grants you the Grafana administrator role, and lets you manage access rights. You can give this right to more members by selecting **Add**. If this option is grayed out, ask someone with the Owner role on the subscription to assign you the Grafana Admin role.
+
+ :::image type="content" source="media/quickstart-portal/create-form-permission.png" alt-text="Screenshot of the Azure portal. Create workspace form. Permission.":::
+
+1. Optionally select **Next : Tags** and add tags to categorize resources.
+
+ :::image type="content" source="media/quickstart-portal/create-form-tags.png" alt-text="Screenshot of the Azure portal. Create workspace form. Tags.":::
+
+1. Select **Next : Review + create >**. After validation runs, select **Create**. Your Azure Managed Grafana resource is deploying.
+
+ :::image type="content" source="media/quickstart-portal/create-form-validation-with-redundancy.png" alt-text="Screenshot of the Azure portal. Create workspace form. Validation.":::
+
+ ### [Azure CLI](#tab/azure-cli)
+
+1. Run the code below to create a resource group to organize the Azure resources needed. Skip this step if you already have a resource group you want to use.
+
+ | Parameter | Description | Example |
+ ||-|--|
+ | --name | Choose a unique name for your new resource group. | *grafana-rg* |
+ | --location | Choose an Azure region where Managed Grafana is available. For more info, go to [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-grafana). | *eastus* |
+
+ ```azurecli
+ az group create --location <location> --name <resource-group-name>
+ ```
+
+1. Run the code below to create an Azure Managed Grafana workspace.
+
+ | Parameter | Description | Example |
+ |-||--|
+ | --name | Choose a unique name for your new Managed Grafana instance. | *grafana-test* |
+ | --resource-group | Choose a resource group for your Managed Grafana instance. | *my-resource-group* |
+ | --zone-redundancy | Enter `enabled` to enable zone redundancy for this new instance. | *--zone-redundancy enabled* |
+
+ ```azurecli
+ az grafana create --name <managed-grafana-resource-name> --resource-group <resource-group-name> --zone-redundancy enabled
+ ```
+
+Once the deployment is complete, you'll see a note in the output of the command line stating that the instance was successfully created, along with additional information about the deployment.
+
+> [!NOTE]
+> The CLI experience for Azure Managed Grafana is part of the amg extension for the Azure CLI (version 2.30.0 or higher). The extension will automatically install the first time you run an `az grafana` command.
+++
+## Check if zone redundancy is enabled
+
+In the Azure portal, under **Settings**, go to **Configuration** and check if **Zone redundancy** is listed as enabled or disabled.
+
+ :::image type="content" source="media/quickstart-portal/configuration.png" alt-text="Screenshot of the Azure portal. Check zone redundancy.":::
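You can also check this from the CLI. Here's a sketch; the `zoneRedundancy` property path is an assumption, so verify it against the full `az grafana show` output:

```azurecli
az grafana show --name <managed-grafana-resource-name> --resource-group <resource-group-name> --query properties.zoneRedundancy --output tsv
```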
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to configure data sources](./how-to-data-source-plugins-managed-identity.md)
managed-grafana How To Monitor Managed Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-monitor-managed-grafana-workspace.md
Title: 'How to monitor your Azure Managed Grafana Preview instance with logs'
-description: Learn how to monitor your Azure Managed Grafana Preview instance with logs.
+ Title: 'How to monitor your Azure Managed Grafana instance with logs'
+description: Learn how to monitor your Azure Managed Grafana instance with logs.
Last updated 3/31/2022
-# How to monitor your Azure Managed Grafana Preview instance with logs
+# How to monitor your Azure Managed Grafana instance with logs
-In this article, you'll learn how to monitor an Azure Managed Grafana Preview instance by configuring diagnostic settings and accessing event logs.
+In this article, you'll learn how to monitor an Azure Managed Grafana instance by configuring diagnostic settings and accessing event logs.
## Prerequisites

-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
- An Azure Managed Grafana instance with access to at least one data source. If you don't have a Managed Grafana instance yet, [create an Azure Managed Grafana instance](./how-to-permissions.md) and [add a data source](how-to-data-source-plugins-managed-identity.md).

## Sign in to Azure
You can create up to five different diagnostic settings to send different logs t
1. Open a Managed Grafana resource, and go to **Diagnostic settings**, under **Monitoring**
- :::image type="content" source="media/managed-grafana-monitoring-diagnostic-overview.png" alt-text="Screenshot of the Azure platform. Diagnostic settings.":::
+ :::image type="content" source="media/monitoring-logs/diagnostic-overview.png" alt-text="Screenshot of the Azure platform. Diagnostic settings.":::
1. Select **+ Add diagnostic setting**
- :::image type="content" source="media/managed-grafana-monitoring-add-settings.png" alt-text="Screenshot of the Azure platform. Add diagnostic settings.":::
+ :::image type="content" source="media/monitoring-logs/add-settings.png" alt-text="Screenshot of the Azure platform. Add diagnostic settings.":::
1. Enter a unique **diagnostic setting name** for your diagnostic
You can create up to five different diagnostic settings to send different logs t
| Event hub | Stream to an event hub | Select a **subscription** and an existing Azure Event Hub **namespace**. Optionally also choose an existing **event hub**. Lastly, choose an **event hub policy** from the list. Only event hubs in the same region as the Grafana instance are displayed in the dropdown menu. |
| Partner solution | Send to a partner solution | Select a **subscription** and a **destination**. For more information about available destinations, go to [partner destinations](../azure-monitor/partners.md). |
- :::image type="content" source="media/managed-grafana-monitoring-settings.png" alt-text="Screenshot of the Azure platform. Diagnostic settings configuration.":::
+ :::image type="content" source="media/monitoring-logs/monitoring-settings.png" alt-text="Screenshot of the Azure platform. Diagnostic settings configuration.":::
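The same diagnostic setting can be scripted with the Azure CLI. Below is a minimal, hedged sketch that streams all log categories to a Log Analytics workspace; the resource IDs are placeholders and the `allLogs` category group is an assumption, so confirm the categories offered for your instance in the portal.

```bash
# Create a diagnostic setting that sends Grafana logs to a Log Analytics workspace.
# Replace both resource IDs; adjust the log categories if allLogs is not accepted.
GRAFANA_ID=$(az grafana show --resource-group my-resource-group --name my-grafana --query id --output tsv)
WORKSPACE_ID="/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"

az monitor diagnostic-settings create \
  --name grafana-logs \
  --resource "$GRAFANA_ID" \
  --workspace "$WORKSPACE_ID" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
```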
## Access logs
Now that you've configured your diagnostic settings, Azure will stream all new e
1. In your Managed Grafana instance, select **Logs** from the left menu. The Azure platform displays a **Queries** page, with suggestions of queries to choose from.
- :::image type="content" source="media/managed-grafana-monitoring-logs-menu.png" alt-text="Screenshot of the Azure platform. Open Logs.":::
+ :::image type="content" source="media/monitoring-logs/menu.png" alt-text="Screenshot of the Azure platform. Open Logs.":::
1. Select a query from the suggestions displayed under the **Queries** page, or close the page to create your own query. 1. To use a suggested query, select a query and select **Run**, or select **Load to editor** to review the code. 1. To create your own query, enter your query in the code editor and select **Run**. You can also perform some actions, such as editing the scope and the range of the query, as well as saving and sharing the query. The result of the query is displayed in the lower part of the screen.
- :::image type="content" source="media/managed-grafana-monitoring-logs-query.png" alt-text="Screenshot of the Azure platform. Log query editing." lightbox="media/managed-grafana-monitoring-logs-query-expanded.png":::
+ :::image type="content" source="media/monitoring-logs/query.png" alt-text="Screenshot of the Azure platform. Log query editing." lightbox="media/monitoring-logs/query-expanded.png":::
1. Select **Schema and Filter** on the left side of the screen to access tables, queries and functions. You can also filter and group results, as well as find your favorites. 1. Select **Columns** on the right of **Results** to edit the columns of the results table, and manage the table like a pivot table.
- :::image type="content" source="media/managed-grafana-monitoring-logs-filters.png" alt-text="Screenshot of the Azure platform. Log query filters and columns." lightbox="media/managed-grafana-monitoring-logs-filters-expanded.png":::
+ :::image type="content" source="media/monitoring-logs/filters.png" alt-text="Screenshot of the Azure platform. Log query filters and columns." lightbox="media/monitoring-logs/filters-expanded.png":::
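You can also query the workspace from the command line instead of the portal. The sketch below uses a workspace GUID placeholder and the generic `AzureDiagnostics` table as an assumption; list the tables in your workspace to find the one actually populated by your diagnostic setting.

```bash
# Run a Kusto query against the Log Analytics workspace receiving the Grafana logs.
# The workspace GUID is a placeholder and the table name is an assumption.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AzureDiagnostics | where TimeGenerated > ago(1d) | take 20" \
  --output table
```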
## Next steps > [!div class="nextstepaction"] > [Grafana UI](./grafana-app-ui.md)
-> [How to share an Azure Managed Grafana instance](./how-to-share-grafana-workspace.md)
+
+> [!div class="nextstepaction"]
+> [Share an Azure Managed Grafana instance](./how-to-share-grafana-workspace.md)
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
Title: How to modify access permissions to Azure Monitor
-description: Learn how to manually set up permissions that allow your Azure Managed Grafana Preview instance to access a data source
+description: Learn how to manually set up permissions that allow your Azure Managed Grafana instance to access a data source
By default, when a Grafana instance is created, Azure Managed Grafana grants it
This means that the new Grafana instance can access and search all monitoring data in the subscription, including viewing the Azure Monitor metrics and logs from all resources, and any logs stored in Log Analytics workspaces in the subscription.
-In this article, you'll learn how to manually edit permissions for a specific resource.
+In this article, you'll learn how to manually grant permission for Azure Managed Grafana to access an Azure resource using a managed identity.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md). - An Azure resource with monitoring data and write permissions, such as [User Access Administrator](../../articles/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../articles/role-based-access-control/built-in-roles.md#owner)
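As a minimal CLI sketch of the manual grant described above (the names, IDs, and scope are placeholders, not a documented procedure), you can look up the instance's system-assigned identity and give it the Monitoring Reader role on a single resource instead of the whole subscription.

```bash
# Get the principal ID of the Grafana instance's system-assigned managed identity.
PRINCIPAL_ID=$(az grafana show \
  --resource-group my-resource-group \
  --name my-grafana \
  --query "identity.principalId" \
  --output tsv)

# Grant Monitoring Reader scoped to one resource rather than the subscription.
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Monitoring Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/<provider>/<resource-type>/<resource-name>"
```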
For more information about how to use Managed Grafana with Azure Monitor, go to
## Next steps > [!div class="nextstepaction"]
-> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
managed-grafana How To Share Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-grafana-workspace.md
Title: How to share an Azure Managed Grafana Preview instance
+ Title: How to share an Azure Managed Grafana instance
description: 'Azure Managed Grafana: learn how you can share access permissions and dashboards with your team and customers.'
Last updated 3/31/2022
-# How to share an Azure Managed Grafana Preview instance
+# How to share an Azure Managed Grafana instance
A DevOps team may build dashboards to monitor and diagnose an application or infrastructure that it manages. Likewise, a support team may use a Grafana monitoring solution for troubleshooting customer issues. In these scenarios, multiple users will be accessing one Grafana instance. Azure Managed Grafana enables such sharing by allowing you to set the custom permissions on an instance that you own. This article explains what permissions are supported and how to grant permissions to share dashboards with your internal teams or external customers. ## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
- An Azure Managed Grafana instance. If you don't have one yet, [create a Managed Grafana instance](./how-to-permissions.md). ## Supported Grafana roles
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
1. Select **Access control (IAM)** in the navigation menu. 1. Click **Add**, then **Add role assignment**
- :::image type="content" source="media/managed-grafana-how-to-share-IAM.png" alt-text="Screenshot of Add role assignment in the Azure platform.":::
+ :::image type="content" source="media/share/iam-page.png" alt-text="Screenshot of Add role assignment in the Azure platform.":::
1. Select one of the Grafana roles to assign to a user or security group. The available roles are:
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
- Grafana Editor - Grafana Viewer
- :::image type="content" source="media/managed-grafana-how-to-share-role-assignment.png" alt-text="Screenshot of the Grafana roles in the Azure platform.":::
+ :::image type="content" source="media/share/role-assignment.png" alt-text="Screenshot of the Grafana roles in the Azure platform.":::
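The same assignment can also be scripted. A minimal sketch, assuming you have permission to create role assignments on the instance and using placeholder names:

```bash
# Assign the built-in Grafana Viewer role to a user on the Managed Grafana resource.
GRAFANA_ID=$(az grafana show --resource-group my-resource-group --name my-grafana --query id --output tsv)

az role assignment create \
  --assignee "user@contoso.com" \
  --role "Grafana Viewer" \
  --scope "$GRAFANA_ID"
```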
> [!NOTE] > Dashboard and data source level sharing is done from within the Grafana application. For more details, refer to [Grafana permissions](https://grafana.com/docs/grafana/latest/permissions/).
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
## Next steps > [!div class="nextstepaction"]
-> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
-> [How to modify access permissions to Azure Monitor](./how-to-permissions.md)
-> [How to call Grafana APIs in your automation with Azure Managed Grafana](./how-to-api-calls.md)
+> [Configure data sources](./how-to-data-source-plugins-managed-identity.md)
+
+> [!div class="nextstepaction"]
+> [Modify access permissions to Azure Monitor](./how-to-permissions.md)
+
+> [!div class="nextstepaction"]
+> [Call Grafana APIs in your automation](./how-to-api-calls.md)
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
Title: What is Azure Managed Grafana Preview?
+ Title: What is Azure Managed Grafana?
description: Read an overview of Azure Managed Grafana. Understand why and how to use Managed Grafana.
Last updated 3/31/2022
-# What is Azure Managed Grafana Preview?
+# What is Azure Managed Grafana?
Azure Managed Grafana is a data visualization platform built on top of the Grafana software by Grafana Labs. It's built as a fully managed Azure service operated and supported by Microsoft. Grafana helps you bring together metrics, logs and traces into a single user interface. With its extensive support for data sources and graphing capabilities, you can view and analyze your application and infrastructure telemetry data in real-time.
-Azure Managed Grafana is optimized for the Azure environment. It works seamlessly with many Azure services. Specifically, for the current preview, it provides with the following integration features:
+Azure Managed Grafana is optimized for the Azure environment. It works seamlessly with many Azure services. It provides the following integration features:
* Built-in support for [Azure Monitor](../azure-monitor/index.yml) and [Azure Data Explorer](/azure/data-explorer/) * User authentication and access control using Azure Active Directory identities
-* Direct import of existing charts from Azure portal
+* Direct import of existing charts from the Azure portal
To learn more about how Grafana works, visit the [Getting Started documentation](https://grafana.com/docs/grafana/latest/getting-started/) on the Grafana Labs website.
-## Why use Azure Managed Grafana Preview?
+## Why use Azure Managed Grafana?
Managed Grafana lets you bring together all your telemetry data into one place. It can access a wide variety of supported data sources, including your data stores in Azure and elsewhere. By combining charts, logs and alerts into one view, you can get a holistic view of your application and infrastructure, and correlate information across multiple datasets.
You can create dashboards instantaneously by importing existing charts directly
## Next steps > [!div class="nextstepaction"]
-> [Create an Azure Managed Grafana Preview instance using the Azure portal](./quickstart-managed-grafana-portal.md)
-
+> [Create an Azure Managed Grafana instance using the Azure portal](./quickstart-managed-grafana-portal.md)
managed-grafana Quickstart Managed Grafana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-cli.md
Title: 'Quickstart: create an Azure Managed Grafana Preview instance using the Azure CLI'
+ Title: 'Quickstart: create an Azure Managed Grafana instance using the Azure CLI'
description: Learn how to create a Managed Grafana instance using the Azure CLI Previously updated : 07/25/2022 Last updated : 08/12/2022 ms.devlang: azurecli
-# Quickstart: Create an Azure Managed Grafana Preview instance using the Azure CLI
+# Quickstart: Create an Azure Managed Grafana instance using the Azure CLI
-Get started by creating an Azure Managed Grafana Preview workspace using the Azure CLI. Creating a workspace will generate a Managed Grafana instance.
+Get started by creating an Azure Managed Grafana workspace using the Azure CLI. Creating a workspace will generate a Managed Grafana instance.
> [!NOTE]
-> The CLI experience for Azure Managed Grafana Preview is part of the amg extension for the Azure CLI (version 2.30.0 or higher). The extension will automatically install the first time you run an `az grafana` command.
-
-> [!NOTE]
-> Azure Managed Grafana doesn't support personal [Microsoft accounts](https://account.microsoft.com) currently.
+> The CLI experience for Azure Managed Grafana is part of the amg extension for the Azure CLI (version 2.30.0 or higher). The extension will automatically install the first time you run an `az grafana` command.
## Prerequisite
-An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
## Sign in to Azure
Open your CLI and run the `az login` command:
az login ```
-This command will prompt your web browser to launch and load an Azure sign-in page. If the browser fails to open, use device code flow with `az login --use-device-code`. For more sign in options, go to [sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+This command will prompt your web browser to launch and load an Azure sign-in page. If the browser fails to open, use device code flow with `az login --use-device-code`. For more sign-in options, go to [sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
## Create a resource group
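A minimal sketch of the two resources this quickstart creates, using placeholder names and the East US region:

```bash
# Create a resource group, then a Managed Grafana instance inside it.
az group create --name my-resource-group --location eastus

az grafana create \
  --resource-group my-resource-group \
  --name my-grafana
```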
Now let's check if you can access your new Managed Grafana instance.
1. Take note of the **endpoint** URL ending in `eus.grafana.azure.com`, listed in the CLI output.
-1. Open a browser and enter the endpoint URL. You should now see your Azure Managed Grafana instance. From there, you can finish setting up your Grafana installation.
+1. Open a browser and enter the endpoint URL. Single sign-on via Azure Active Directory has been configured for you automatically. If prompted, enter your Azure account. You should now see your Azure Managed Grafana instance. From there, you can finish setting up your Grafana installation.
+ :::image type="content" source="media/quickstart-portal/grafana-ui.png" alt-text="Screenshot of a Managed Grafana instance.":::
-> [!NOTE]
-> If creating a Grafana instance fails the first time, please try again. The failure might be due to a limitation in our backend, and we are actively working to fix.
+ > [!NOTE]
+ > Azure Managed Grafana doesn't support connecting with personal Microsoft accounts currently.
+
+You can now start interacting with the Grafana application to configure data sources, create dashboards, reports and alerts. Suggested read: [Monitor Azure services and applications using Grafana](/azure/azure-monitor/visualize/grafana-plugin).
## Clean up resources
-If you're not going to continue to use this instance, delete the Azure resources you created.
+In the preceding steps, you created an Azure Managed Grafana workspace in a new resource group. If you don't expect to need these resources again in the future, delete the resource group.
`az group delete -n <resource-group-name> --yes`
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
Title: 'Quickstart: create an Azure Managed Grafana Preview instance using the Azure portal'
+ Title: 'Quickstart: create an Azure Managed Grafana instance using the Azure portal'
description: Learn how to create a Managed Grafana workspace to generate a new Managed Grafana instance in the Azure portal Previously updated : 06/10/2022 Last updated : 08/12/2022
-# Quickstart: Create an Azure Managed Grafana Preview instance using the Azure portal
+# Quickstart: Create an Azure Managed Grafana instance using the Azure portal
-Get started by creating an Azure Managed Grafana Preview workspace using the Azure portal. Creating a workspace will generate a Managed Grafana instance.
+Get started by creating an Azure Managed Grafana workspace using the Azure portal. Creating a workspace will generate a Managed Grafana instance.
## Prerequisite
-An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
-
-> [!NOTE]
-> Azure Managed Grafana doesn't support personal [Microsoft accounts](https://account.microsoft.com) currently.
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
## Create a Managed Grafana workspace 1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-1. In the upper-left corner of the home page, select **Create a resource**. In the **Search services and marketplace** box, enter *Managed Grafana* and select **Azure Managed Grafana**.
+1. In the upper-left corner of the home page, select **Create a resource**. In the **Search resources, services, and docs (G+/)** box, enter *Azure Managed Grafana* and select **Azure Managed Grafana**.
- :::image type="content" source="media/managed-grafana-quickstart-marketplace.png" alt-text="Screenshot of the Azure platform. Find Azure Managed Grafana in the marketplace." lightbox="media/managed-grafana-quickstart-marketplace-expanded.png":::
+ :::image type="content" source="media/quickstart-portal/find-azure-portal-grafana.png" alt-text="Screenshot of the Azure platform. Find Azure Managed Grafana in the marketplace." :::
1. Select **Create**.
-1. In the **Create Grafana Workspace** pane, enter the following settings.
+1. In the **Basics** pane, enter the following settings.
- :::image type="content" source="media/managed-grafana-quickstart-portal-form.png" alt-text="Screenshot of the Azure portal. Create workspace form.":::
+ | Setting | Sample value | Description |
+ ||||
+ | Subscription ID | *my-subscription* | Select the Azure subscription you want to use. |
+ | Resource group name | *my-resource-group* | Create a resource group for your Azure Managed Grafana resources. |
+ | Location | *(US) East US* | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. |
+ | Name | *my-grafana* | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. |
+ | Zone redundancy | *Disabled* | Zone redundancy is disabled by default. Zone redundancy automatically provisions and manages a standby replica of the Managed Grafana instance in a different availability zone within one region. There's an [additional charge](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing) for this option. |
- | Setting | Sample value | Description |
- ||||
- | Subscription ID | *mysubscription* | Select the Azure subscription you want to use. |
- | Resource group name | *myresourcegroup* | Select or create a resource group for your Azure Managed Grafana resources. |
- | Location | *East US* | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. |
- | Name | *mygrafanaworkspace* | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. |
+ :::image type="content" source="media/quickstart-portal/create-form-basics.png" alt-text="Screenshot of the Azure portal. Create workspace form. Basics.":::
-1. Select **Next : Permission >** to access rights for your Grafana instance and data sources:
- 1. Make sure the **System assigned identity** is set to **On**. The box **Add role assignment to this identity with 'Monitoring Reader' role on target subscription** should also be checked for this Managed Identity to get access to your current subscription.
+1. Select **Next : Advanced >** to access API key creation and static IP address options. The **Enable API key creation** and **Deterministic outbound IP** options are set to **Disable** by default. Optionally enable API key creation and enable a static IP address.
- 1. Make sure that you're listed as a Grafana administrator. You can also add more users as administrators at this point or later.
+ :::image type="content" source="media/quickstart-portal/create-form-advanced.png" alt-text="Screenshot of the Azure portal. Create workspace form. Advanced.":::
- If you uncheck this option (or if the option grays out for you), someone with the Owner role on the subscription can do the role assignment to give you the Grafana Admin permission.
+1. Select **Next : Permission >** to control access rights for your Grafana instance and data sources:
+ 1. **System assigned managed identity** is set to **On**.
- > [!NOTE]
- > If creating a Managed Grafana instance fails the first time, please try again. The failure might be due to a limitation in our backend, and we are actively working to fix.
+ 1. The box **Add role assignment to this identity with 'Monitoring Reader' role on target subscription** is checked.
+
+ 1. The box **Include myself** under **Grafana administrator role** is checked. This option grants you the Grafana administrator role, and lets you manage access rights. You can give this right to more members by selecting **Add**. If this option grays out for you, ask someone with the Owner role on the subscription to assign you the Grafana Admin role.
+
+ :::image type="content" source="media/quickstart-portal/create-form-permission.png" alt-text="Screenshot of the Azure portal. Create workspace form. Permission.":::
1. Optionally select **Next : Tags** and add tags to categorize resources.
-1. Select **Next : Review + create >** and then **Create**. Your Azure Managed Grafana resource is deploying.
+ :::image type="content" source="media/quickstart-portal/create-form-tags.png" alt-text="Screenshot of the Azure portal. Create workspace form. Tags.":::
+
+1. Select **Next : Review + create >**. After validation runs, select **Create**. Your Azure Managed Grafana resource is deploying.
+
+ :::image type="content" source="media/quickstart-portal/create-form-validation.png" alt-text="Screenshot of the Azure portal. Create workspace form. Validation.":::
## Access your Managed Grafana instance
-1. Once the deployment is complete, select **Go to resource** to open your resource.
+1. Once the deployment is complete, select **Go to resource** to open your resource.
+
+1. In the **Overview** tab's Essentials section, select the **Endpoint** URL. Single sign-on via Azure Active Directory has been configured for you automatically. If prompted, enter your Azure account.
+
+ :::image type="content" source="media/quickstart-portal/grafana-overview.png" alt-text="Screenshot of the Azure portal. Endpoint URL display.":::
- :::image type="content" source="media/managed-grafana-quickstart-portal-deployment-complete.png" alt-text="Screenshot of the Azure portal. Message: Your deployment is complete.":::
+ :::image type="content" source="media/quickstart-portal/grafana-ui.png" alt-text="Screenshot of a Managed Grafana instance.":::
+
+ > [!NOTE]
+ > Azure Managed Grafana doesn't support connecting with personal Microsoft accounts currently.
-1. In the **Overview** tab's Essentials section, select the **Endpoint** URL. Single sign-on via Azure Active Directory should have been configured for you automatically. If prompted, enter your Azure account.
+You can now start interacting with the Grafana application to configure data sources, create dashboards, reports and alerts. Suggested read: [Monitor Azure services and applications using Grafana](/azure/azure-monitor/visualize/grafana-plugin).
- :::image type="content" source="media/managed-grafana-quickstart-workspace-overview.png" alt-text="Screenshot of the Azure portal. Endpoint URL display.":::
+## Clean up resources
- :::image type="content" source="media/managed-grafana-quickstart-portal-grafana-workspace.png" alt-text="Screenshot of a Managed Grafana instance.":::
+In the preceding steps, you created an Azure Managed Grafana workspace in a new resource group. If you don't expect to need these resources again in the future, delete the resource group.
-You can now start interacting with the Grafana application to configure data sources, create dashboards, reporting and alerts.
+1. In the **Search resources, services, and docs (G+/)** box in the Azure portal, enter the name of your resource group and select it.
+1. In the **Overview** page, make sure that the listed resources are the ones you want to delete.
+1. Select **Delete**, type the name of your resource group in the text box, and then select **Delete**.
## Next steps > [!div class="nextstepaction"] > [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
-> [How to modify access permissions to Azure Monitor](./how-to-permissions.md)
migrate Common Questions Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md
However, deleting the resource group also deletes other registered appliances, t
## Can I use the appliance with a different subscription or project?
-To use the appliance with a different subscription or project, you would need to reconfigure the existing appliance by running the PowerShell installer script for the specific scenario (VMware/Hyper-V/Physical) on the appliance. The script will clean up the existing appliance components and settings to deploy a fresh appliance. Ensure to clear the browser cache before you start using the newly deployed appliance configuration manager.
+To use the appliance with a different subscription or project, you would need to reconfigure the existing appliance by running the [PowerShell installer script](deploy-appliance-script.md) for the specific scenario (VMware/Hyper-V/Physical) on the appliance. The script will clean up the existing appliance components and settings to deploy a fresh appliance. Make sure to clear the browser cache before you start using the newly deployed appliance configuration manager.
Also, you cannot reuse an existing project key on a reconfigured appliance. Make sure you generate a new key from the desired subscription/project to complete the appliance registration.
migrate Create Manage Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/create-manage-projects.md
Set up a new project in an Azure subscription.
5. In **Create project**, select the Azure subscription and resource group. Create a resource group if you don't have one. 6. In **Project Details**, specify the project name and the geography in which you want to create the project.
- - The geography is only used to store the metadata gathered from on-premises servers. You can select any target region for migration.
+ - The geography is only used to store the metadata gathered from on-premises servers. You can assess or migrate servers for any target region regardless of the selected geography.
- Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
Note that:
## Next steps
-Add [assessment](how-to-assess.md) or [migration](how-to-migrate.md) tools to projects.
+Add [assessment](how-to-assess.md) or [migration](how-to-migrate.md) tools to projects.
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md
download.microsoft.com/download | Allow downloads from Microsoft download center
**URL** | **Details** | | *.portal.azure.us | Navigate to the Azure portal.
-graph.windows.net | Sign in to your Azure subscription.
+graph.windows.net <br> graph.microsoftazure.us | Sign in to your Azure subscription.
login.microsoftonline.us | Used for access control and identity management by Azure Active Directory. management.usgovcloudapi.net | Used for resource deployments and management operations. *.services.visualstudio.com | Upload appliance logs used for internal monitoring.
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
You can create an Azure Database for MySQL Flexible Server in one of three diffe
| Resource / Tier | **Burstable** | **General Purpose** | **Business Critical** | |:|:-|:--|:|
-| VM series| B-series | Ddsv4-series | Edsv4/v5-series*|
+| VM series| [B-series](https://docs.microsoft.com/azure/virtual-machines/sizes-b-series-burstable) | [Ddsv4-series](https://docs.microsoft.com/azure/virtual-machines/ddv4-ddsv4-series#ddsv4-series) | [Edsv4](https://docs.microsoft.com/azure/virtual-machines/edv4-edsv4-series#edsv4-series)/[Edsv5-series](https://docs.microsoft.com/azure/virtual-machines/edv5-edsv5-series#edsv5-series)*|
| vCores | 1, 2, 4, 8, 12, 16, 20 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 32, 48, 64, 80, 96 | | Memory per vCore | Variable | 4 GiB | 8 GiB * | | Storage size | 20 GiB to 16 TiB | 20 GiB to 16 TiB | 20 GiB to 16 TiB |
You can create an Azure Database for MySQL Flexible Server in one of three diffe
\* With the exception of E64ds_v4 (Business Critical) SKU, which has 504 GB of memory
-\* Only few regions have Edsv5 compute availability.
+\* Ev5 compute provides the best performance among the supported VM series in terms of QPS and latency. Learn more about the performance and region availability of Ev5 compute [here](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/boost-azure-mysql-business-critical-flexible-server-performance/ba-p/3603698).
To choose a compute tier, use the following table as a starting point.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Last updated 08/16/2022
This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+> [!NOTE]
+> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ ## August 2022 - **Server logs for Azure Database for MySQL - Flexible Server**
This article summarizes new releases and features in Azure Database for MySQL -
The on-Demand backup feature allows customers to trigger On-Demand backups of their production workload, in addition to the automated backups taken by Azure Database for MySQL Flexible service, and store it in alignment with the server's backup retention policy. These backups can be used as the fastest restore point to perform a point-in-time restore for faster and more predictable restore times. [**Learn more**](how-to-trigger-on-demand-backup.md#trigger-on-demand-backup)
-**Known Issues**
+- **Business Critical tier now supports Ev5 compute series**
+
+ Business Critical tier for Azure Database for MySQL - Flexible Server now supports the Ev5 compute series in more regions.
+Learn more about [Boost Azure MySQL Business Critical flexible server performance by 30% with the Ev5 compute series!](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/boost-azure-mysql-business-critical-flexible-server-performance/ba-p/3603698)
+
+- **Server parameters that are now configurable**
+
+ The following dynamic server parameters are now configurable (a CLI sketch for setting one follows this list):
+ - [lc_time_names](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_lc_time_names)
+ - [replicate_wild_ignore_table](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table)
+ - [slave_pending_jobs_size_max](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#sysvar_slave_pending_jobs_size_max)
+ - [slave_parallel_workers](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#sysvar_slave_parallel_workers)
+ - [log_output](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_log_output)
+ - [performance_schema_max_digest_length](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-system-variables.html#sysvar_performance_schema_max_digest_length)
+ - [performance_schema_max_sql_text_length](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-system-variables.html#sysvar_performance_schema_max_sql_text_length)
+
+
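A minimal sketch of changing one of these parameters with the Azure CLI; the server and resource group names are placeholders.

```bash
# Set one of the newly configurable dynamic server parameters on a flexible server.
az mysql flexible-server parameter set \
  --resource-group my-resource-group \
  --server-name my-mysql-flexible-server \
  --name lc_time_names \
  --value en_US
```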
+- **Known Issues**
+
+ - When you try to connect to the server, you will receive error "ERROR 9107 (HY000): Only Azure Active Directory accounts are allowed to connect to server".
-Server parameter aad_auth_only was exposed in this month's deployment. Enabling server parameter aad_auth_only will block all non Azure Active Directory MySQL connections to your Azure Database for MySQL Flexible server. When you try to connect to the server, you will receive error "ERROR 9107 (HY000): Only Azure Active Directory accounts are allowed to connect to server".
-We are currently working on additional configurations required for AAD authentication to be fully functional, and the feature will be available in the upcoming deployments. Do not enable the aad_auth_only parameter until then.
+ Server parameter aad_auth_only was exposed in this month's deployment. Enabling the aad_auth_only server parameter blocks all non-Azure Active Directory MySQL connections to your Azure Database for MySQL flexible server. We are currently working on additional configurations required for Azure AD authentication to be fully functional, and the feature will be available in upcoming deployments. Do not enable the aad_auth_only parameter until then.
We are currently working on additional configurations required for AAD authentic
- **Known Issues**
-On a few servers where audit or slow logs are enabled, you may no longer see logs uploaded to data sinks configured under diagnostics settings. Verify whether your logs have the latest updated timestamp for the events based on the [data sink](./tutorial-query-performance-insights.md#set-up-diagnostics) you've configured. If your server is affected by this issue, open a [support ticket](https://azure.microsoft.com/support/create-ticket/) so that we can apply a quick fix on the server to resolve the issue.
+ On a few servers where audit or slow logs are enabled, you may no longer see logs uploaded to data sinks configured under diagnostics settings. Verify whether your logs have the latest updated timestamp for the events based on the [data sink](./tutorial-query-performance-insights.md#set-up-diagnostics) you've configured. If your server is affected by this issue, open a [support ticket](https://azure.microsoft.com/support/create-ticket/) so that we can apply a quick fix on the server to resolve the issue.
## May 2022
mysql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-reserved-pricing.md
Last updated 06/20/2022
Azure Database for MySQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MySQL reserved instances, you make an upfront commitment on MySQL server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br>
->[!NOTE]
->The Reserved instances (RI) feature in Azure Database for MySQL ΓÇô Flexible server is not working properly for the Business Critical service tier, after its rebranding > from the Memory Optimized service tier. Specifically, instance reservation has stopped working, and we are currently working to fix the issue.
- ## How does the instance reservation work? You do not need to assign the reservation to specific Azure Database for MySQL servers. An already running Azure Database for MySQL server, or one that is newly deployed, will automatically get the benefit of reserved pricing. By purchasing a reservation, you are pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation does not cover software, networking, or storage charges associated with the MySQL Database server. At the end of the reservation term, the billing benefit expires, and the Azure Database for MySQL server is billed at the pay-as-you-go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for MySQL reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/). </br>
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over maintenance windows. You can use the available migration tool to move your databases from Single Server to Flexible Server. To understand the differences between the two deployment options, see [this comparison chart](../flexible-server/concepts-compare-single-server-flexible-server.md).
+Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over maintenance windows. You can use the available migration tool to move your databases from Single Server to Flexible Server. To understand the differences between the two deployment options, see [this comparison chart](../flexible-server/concepts-compare-single-server-flexible-server.md).
The Single to Flexible Server migration tool is designed to help you migrate from Single Server to Flexible Server. The tool allows you to initiate migrations for multiple servers and databases in a repeatable way. It automates most of the migration steps to make the migration journey across Azure platforms as seamless as possible. The tool is offered **free of cost**. >[!NOTE]
-> The migration tool is in public preview. Feature, functionality, and user interfaces are subject to change.
+> The migration tool is in public preview. Features, functionality, and user interfaces are subject to change. Migration initiation from Single Server is enabled in preview in these regions: Central US, West US, South Central US, North Central US, East Asia, Switzerland North, Australia South East, UAE North, UK West and Canada East. However, you can use the migration wizard from the Flexible Server side as well, in all regions.
## Recommended migration path
The migration tool is agnostic of source and target PostgreSQL versions. Here ar
| Postgres 11 | Postgres 14 | Verify your application compatibility. | | Postgres 11 | Postgres 11 | You can choose to migrate to the same version in Flexible Server. You can then upgrade to a higher version in Flexible Server |
->[!NOTE]
-> Migration initiation from Single Server is enabled in preview in these regions: Central US, West US, South Central US, North Central US, East Asia, Switzerland North, Australia South East, UAE North, UK West and Canada East. However, you can use the migration wizard from the Flexible Server side in all regions.
-
->[!IMPORTANT]
-> We continue to add support for more regions with Flexible Server. If Flexible Server is not available in your preferred region, you can either choose an alternative region or you can wait until the Flexible server is enabled in that region.
- ## Overview The migration tool provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
You choose the source server and can select up to eight databases from it. This
The following diagram shows the process flow for migration from Single Server to Flexible Server via the migration tool. :::image type="content" source="./media/concepts-single-to-flexible/concepts-flow-diagram.png" alt-text="Diagram that shows the Migration from Single Server to Flexible Server." lightbox="./media/concepts-single-to-flexible/concepts-flow-diagram.png":::
-
+ The steps in the process are: 1. Create a Flexible Server target. 2. Invoke migration. 3. Provision the migration infrastructure by using Azure Database Migration Service. 4. Start the migration.
- 1. Initial dump/restore (online and offline)
+ 1. Initial dump/restore (online and offline)
1. Streaming the changes (online only) 5. Cut over to the target.
-
+ The migration tool is exposed through the Azure portal and through easy-to-use Azure CLI commands. It allows you to create migrations, list migrations, display migration details, modify the state of the migration, and delete migrations. ## Comparison of migration modes
The following table shows the approximate time for performing offline migrations
| 500 GB | 08:00 | | 1,000 GB | 09:30 |
-### Migration considerations for online mode
+### Migration considerations for online mode
The migration process for online mode entails a dump of the Single Server database(s), a restore of that dump in the Flexible Server target, and then replication of ongoing changes. You capture change data by using logical decoding. The time for completing an online migration depends on the incoming writes to the source server. The higher the write workload is on the source, the more time it takes for the data to be replicated to Flexible Server.
-## Migration steps
+To begin the migration in either online or offline mode, start with the prerequisites below.
+
+## Migration prerequisites
+
+>[!NOTE]
+> It is very important to complete the prerequisite steps in this section before you initiate a migration using this tool.
+
+#### Register your subscription for Azure Database Migration Service
+
+ 1. On the Azure portal, go to the subscription of your Target server.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png" alt-text="Screenshot of Azure portal subscription details." lightbox="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png":::
+
+ 2. On the left menu, select **Resource Providers**. Search for **Microsoft.DataMigration**, and then select **Register**.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png" alt-text="Screenshot of the Register button for Azure Data Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png":::
+
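If you prefer the CLI over the portal for this step, a minimal sketch follows; make sure you are signed in to the target server's subscription first.

```bash
# Register the resource provider required by Azure Database Migration Service,
# then confirm the registration state.
az provider register --namespace Microsoft.DataMigration
az provider show --namespace Microsoft.DataMigration --query registrationState --output tsv
```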
+#### Enable logical replication
+
+ [Enable logical replication](../single-server/concepts-logical.md) on the source server.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/logical-replication-support.png" alt-text="Screenshot of logical replication support in the Azure portal." lightbox="./media/concepts-single-to-flexible/logical-replication-support.png":::
+
+ >[!NOTE]
+ > Enabling logical replication will require a server restart for the change to take effect.
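A hedged CLI sketch of the same change on the Single Server source follows; the `azure.replication_support` parameter name is an assumption based on the Single Server logical decoding setup, so verify it against the linked article.

```bash
# Enable logical replication support on the Single Server source, then restart
# the server so the change takes effect. Names are placeholders.
az postgres server configuration set \
  --resource-group my-resource-group \
  --server-name my-single-server \
  --name azure.replication_support \
  --value logical

az postgres server restart \
  --resource-group my-resource-group \
  --name my-single-server
```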
-### Prerequisites
+#### Create an Azure Database for PostgreSQL Flexible server
-Before you start using the migration tool:
+ [Create an Azure Database for PostgreSQL Flexible Server](../flexible-server/quickstart-create-server-portal.md) which will be used as the target (if not already created).
-- [Create an Azure Database for PostgreSQL server](../flexible-server/quickstart-create-server-portal.md).
+#### Set up and configure an Azure Active Directory (Azure AD) app
-- [Enable logical replication](../single-server/concepts-logical.md) on the source server.
+ [Set up and configure an Azure Active Directory (Azure AD) app](./how-to-set-up-azure-ad-app-portal.md). An Azure AD app is a critical component of the migration tool. It helps with role-based access control as the migration tool accesses both the source and target servers.
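The linked article walks through the portal. As a rough CLI equivalent (an assumption, not the documented path), a service principal can be created in one step; note the appId, password, and tenant values in the output for later use.

```bash
# Create an Azure AD application and service principal for the migration tool.
# The display name is a placeholder.
az ad sp create-for-rbac --name "pg-single-to-flex-migration"
```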
- :::image type="content" source="./media/concepts-single-to-flexible/logical-replication-support.png" alt-text="Screenshot of logical replication support in the Azure portal." lightbox="./media/concepts-single-to-flexible/logical-replication-support.png":::
+#### Assign contributor roles to Azure resources
- >[!NOTE]
- > Enabling logical replication will require a server restart for the change to take effect.
+ Assign [contributor roles](./how-to-set-up-azure-ad-app-portal.md#add-contributor-privileges-to-an-azure-resource) to source server, target server and the migration resource group. In case of private access for source/target server, add Contributor privileges to the corresponding VNet as well.
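A short sketch of those assignments, reusing the hypothetical app created above; the resource IDs are placeholders and the loop simply repeats the same assignment once per scope.

```bash
# Assign Contributor to the migration app on the source server, the target server,
# and the migration resource group. Replace the placeholder IDs with real ones.
APP_ID="<appId-from-the-previous-step>"

for SCOPE in \
  "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/servers/<single-server>" \
  "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<flexible-server>" \
  "/subscriptions/<sub-id>/resourceGroups/<rg>"
do
  az role assignment create --assignee "$APP_ID" --role "Contributor" --scope "$SCOPE"
done
```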
-- [Set up an Azure Active Directory (Azure AD) app](./how-to-set-up-azure-ad-app-portal.md). An Azure AD app is a critical component of the migration tool. It helps with role-based access control as the migration tool accesses both the source and target servers.
+#### Allow-list required extensions
-- If you are using any PostgreSQL extensions on the Single Server, it has to allow-listed on the Flexible Server before initiating the migration using the steps below:
+ If you are using any PostgreSQL extensions on the Single Server, they have to be allow-listed on the Flexible Server before initiating the migration, using the steps below:
- 1. Use select command in the Single Server environment to list all the extensions in use.
+ 1. Use the select command in the Single Server environment to list all the extensions in use.
``` select * from pg_extension
Before you start using the migration tool:
The output of the above command gives the list of extensions currently active on the Single Server
- 2. Enable the list of extensions obtained from step 1 in the Flexible Server. Search for the 'azure.extensions' parameter by selecting the Server Parameters tab in the side pane. Select the extensions that are to be allow-listed and click Save.
+ 2. Enable the list of extensions obtained from step 1 in the Flexible Server. Search for the 'azure.extensions' parameter by selecting the Server Parameters tab in the side pane. Select the extensions that are to be allow-listed and click Save.
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-azure-extensions.png" alt-text="Screenshot of PG extension support in the Flexible Server Azure portal." lightbox="./media/concepts-single-to-flexible/single-to-flex-azure-extensions.png":::
After you finish the prerequisites, migrate the data and schemas by using one of
- Batch similar-sized databases in a migration task. - Perform large database migrations with one or two databases at a time to avoid source-side load and migration failures. - Perform test migrations before migrating for production:
- - Test migrations are an important for ensuring that you cover all aspects of the database migration, including application testing.
+ - Test migrations are important for ensuring that you cover all aspects of the database migration, including application testing.
The best practice is to begin by running a migration entirely for testing purposes. After a newly started migration enters the continuous replication (CDC) phase with minimal lag, make your Flexible Server target the primary database server. Use that target for testing the application to ensure expected performance and results. If you're migrating to a higher Postgres version, test for application compatibility.
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
This article shows you how to use the migration tool in the Azure CLI to migrate
>[!NOTE] > The migration tool is in public preview.
-## Prerequisites
+## Getting started
-1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
-2. Register your subscription for Azure Database Migration Service. (If you've already done it, you can skip this step.)
+1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
- 1. On the Azure portal, go to your subscription.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png" alt-text="Screenshot of Azure Database Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png":::
-
- 1. On the left menu, select **Resource Providers**. Search for **Microsoft.DataMigration**, and then select **Register**.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png" alt-text="Screenshot of the Register button for Azure Database Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png":::
-
-3. Install the latest Azure CLI for your operating system from the [Azure CLI installation page](/cli/azure/install-azure-cli).
+2. Install the latest Azure CLI for your operating system from the [Azure CLI installation page](/cli/azure/install-azure-cli).
If the Azure CLI is already installed, check the version by using the `az version` command. The version should be 2.28.0 or later to use the migration CLI commands. If not, [update your Azure CLI version](/cli/azure/update-azure-cli).
-4. Run the `az login` command:
+
+3. Run the `az login` command:
```bash az login ```
- A browser window opens with the Azure sign-in page. Provide your Azure credentials to do a successful authentication. For other ways to sign with the Azure CLI, see [this article](/cli/azure/authenticate-azure-cli).
-5. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#prerequisites). You need them to get started with the migration tool.
+ A browser window opens with the Azure sign-in page. Provide your Azure credentials to authenticate. For other ways to sign in with the Azure CLI, see [this article](/cli/azure/authenticate-azure-cli).
+
+4. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#migration-prerequisites). It is very important to complete the prerequisite steps before you initiate a migration using this tool.
## Migration CLI commands
-The migration tool comes with easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with `az postgres flexible-server migration`.
+The migration tool comes with easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with `az postgres flexible-server migration`.
For help with understanding the options associated with a command and with framing the right syntax, you can use the `help` parameter:
The structure of the JSON is:
} ```
+>[!NOTE]
+> Gentle reminder to complete the [prerequisites](./concepts-single-to-flexible.md#migration-prerequisites) before you execute **Create**, in case you haven't already. It is very important to complete the prerequisite steps before you initiate a migration using this tool.
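As a hedged illustration only (the parameter names are assumptions; confirm them with `az postgres flexible-server migration create --help`), a `create` call typically points at the target Flexible Server and a JSON properties file like the one shown above.

```bash
# Start a migration against the target flexible server using a properties file.
# All names are placeholders; verify the exact parameters with --help.
az postgres flexible-server migration create \
  --subscription "<subscription-id>" \
  --resource-group my-resource-group \
  --name my-flexible-server \
  --migration-name migration-1 \
  --properties "migration-body.json"
```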
Here are the `create` parameters:
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
This article shows you how to use the migration tool in the Azure portal to migr
>[!NOTE] > The migration tool is in public preview.
-## Prerequisites
+## Getting started
-1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
-2. Register your subscription for Azure Database Migration Service:
+1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
- 1. On the Azure portal, go to your subscription.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png" alt-text="Screenshot of Azure portal subscription details." lightbox="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png":::
-
- 1. On the left menu, select **Resource Providers**. Search for **Microsoft.DataMigration**, and then select **Register**.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png" alt-text="Screenshot of the Register button for Azure Data Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png":::
-
-3. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#prerequisites). You need them to get started with the migration tool.
+2. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#migration-prerequisites). It is very important to complete the prerequisite steps before you initiate a migration using this tool.
## Configure the migration task
Alternatively, you can initiate the migration process from the Azure Database fo
### Setup tab
-The first tab is **Setup**. It has basic information about the migration and the list of prerequisites for getting started with migrations. These prerequisites are the same as the ones listed in the [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md) article.
+The first tab is **Setup**. It has basic information about the migration and the list of prerequisites for getting started with migrations. These prerequisites are the same as the ones listed in the [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#migration-prerequisites) article.
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-setup.png" alt-text="Screenshot of the details belonging to Setup tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-setup.png":::
After you choose a subnet, select the **Next** button.
### Review + create tab
+>[!NOTE]
+> Gentle reminder to complete the [prerequisites](./concepts-single-to-flexible.md#migration-prerequisites) before you click **Create** in case it is not yet complete.
+ The **Review + create** tab summarizes all the details for creating the migration. Review the details and select the **Create** button to start the migration. :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-review.png" alt-text="Screenshot of details to review for the migration." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-review.png":::
You can use the refresh button to refresh the status of the migrations.
You can also select the migration name in the grid to see the details of that migration. As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 10 minutes for the migration workflow to move out of this substate. The reason is that it takes time to create and deploy Database Migration Service, add the IP address on the firewall list of source and target servers, and perform maintenance tasks.
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure Device Provisioning Service | Microsoft.Devices/provisioningServices | iotDps | | Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub | | Azure IoT Central | Microsoft.IoTCentral/IoTApps | IoTApps |
-| Azure Digital Twins | Microsoft.DigitalTwins/digitalTwinsInstances | digitaltwinsinstance |
+| Azure Digital Twins | Microsoft.DigitalTwins/digitalTwinsInstances | API |
| Azure Event Grid | Microsoft.EventGrid/domains | domain | | Azure Event Grid | Microsoft.EventGrid/topics | topic | | Azure Event Hub | Microsoft.EventHub/namespaces | namespace |
The following table shows an example of a dual port NSG rule:
## Next steps - For more information about private endpoints and Private Link, see [What is Azure Private Link?](private-link-overview.md).-- To get started with creating a private endpoint for a web app, see [Quickstart: Create a private endpoint by using the Azure portal](create-private-endpoint-portal.md).
+- To get started with creating a private endpoint for a web app, see [Quickstart: Create a private endpoint by using the Azure portal](create-private-endpoint-portal.md).
purview How To Enable Data Use Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-management.md
Title: Enabling Data Use Management on your Microsoft Purview sources
+ Title: Enabling Data use management on your Microsoft Purview sources
description: Step-by-step guide on how to enable data use access for your registered sources.
Last updated 8/10/2022
-# Enable Data Use Management on your Microsoft Purview sources
+# Enable Data use management on your Microsoft Purview sources
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-*Data Use Management* (DUM) is an option within the data source registration in Microsoft Purview. This option lets Microsoft Purview manage data access for your resources. The high level concept is that the data owner allows its data resource to be available for access policies by enabling *DUM*.
+*Data use management* (DUM) is an option within the data source registration in Microsoft Purview. This option lets Microsoft Purview manage data access for your resources. The high-level concept is that the data owner makes a data resource available for access policies by enabling *DUM*.
Currently, a data owner can enable DUM on a data resource for these types of access policies:
Currently, a data owner can enable DUM on a data resource for these types of acc
To be able to create any data policy on a resource, DUM must first be enabled on that resource. This article will explain how to enable DUM on your resources in Microsoft Purview. >[!IMPORTANT]
->Because Data Use Management directly affects access to your data, it directly affects your data security. Review [**additional considerations**](#additional-considerations-related-to-data-use-management) and [**best practices**](#data-use-management-best-practices) below before enabling DUM in your environment.
+>Because Data use management directly affects access to your data, it directly affects your data security. Review [**additional considerations**](#additional-considerations-related-to-data-use-management) and [**best practices**](#data-use-management-best-practices) below before enabling DUM in your environment.
## Prerequisites [!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
-## Enable Data Use Management
+## Enable Data use management
-To enable *Data Use Management* for a resource, the resource will first need to be registered in Microsoft Purview.
+To enable *Data use management* for a resource, the resource will first need to be registered in Microsoft Purview.
To register a resource, follow the **Prerequisites** and **Register** sections of the [source pages](azure-purview-connector-overview.md) for your resources.
-Once you have your resource registered, follow the rest of the steps to enable an individual resource for *Data Use Management*.
+Once you have your resource registered, follow the rest of the steps to enable an individual resource for *Data use management*.
1. Go to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
Once you have your resource registered, follow the rest of the steps to enable a
1. Select the **Sources** tab in the left menu.
-1. Select the source where you want to enable *Data Use Management*.
+1. Select the source where you want to enable *Data use management*.
1. At the top of the source page, select **Edit source**.
-1. Set the *Data Use Management* toggle to **Enabled**, as shown in the image below.
+1. Set the *Data use management* toggle to **Enabled**, as shown in the image below.
-## Disable Data Use Management
+## Disable Data use management
-To disable Data Use Management for a source, resource group, or subscription, a user needs to either be a resource IAM **Owner** or a Microsoft Purview **Data source admin**. Once you have those permissions follow these steps:
+To disable Data use management for a source, resource group, or subscription, a user needs to be either a resource IAM **Owner** or a Microsoft Purview **Data source admin**. Once you have those permissions, follow these steps:
1. Go to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
To disable Data Use Management for a source, resource group, or subscription, a
1. Select the **Sources** tab in the left menu.
-1. Select the source you want to disable Data Use Management for.
+1. Select the source you want to disable Data use management for.
1. At the top of the source page, select **Edit source**.
-1. Set the **Data Use Management** toggle to **Disabled**.
+1. Set the **Data use management** toggle to **Disabled**.
-## Additional considerations related to Data Use Management
+## Additional considerations related to Data use management
- Make sure you write down the **Name** you use when registering in Microsoft Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name.-- To disable a source for *Data Use Management*, remove it first from being bound (i.e. published) in any policy.-- While user needs to have both data source *Owner* and Microsoft Purview *Data source admin* to enable a source for *Data Use Management*, either of those roles can independently disable it.-- Disabling *Data Use Management* for a subscription will disable it also for all assets registered in that subscription.
+- To disable a source for *Data use management*, first remove it from any policy where it is bound (that is, published).
+- While a user needs both the data source *Owner* and Microsoft Purview *Data source admin* roles to enable a source for *Data use management*, either of those roles can independently disable it.
+- Disabling *Data use management* for a subscription will also disable it for all assets registered in that subscription.
> [!WARNING] > **Known issues** related to source registration > - Moving data sources to a different resource group or subscription is not supported. If you want to do that, de-register the data source in Microsoft Purview before moving it and then register it again after the move. Note that policies are bound to the data source ARM path. Changing the data source subscription or resource group makes policies ineffective.
-> - Once a subscription gets disabled for *Data Use Management* any underlying assets that are enabled for *Data Use Management* will be disabled, which is the right behavior. However, policy statements based on those assets will still be allowed after that.
+> - Once a subscription is disabled for *Data use management*, any underlying assets that are enabled for *Data use management* will be disabled, which is the expected behavior. However, policy statements based on those assets will still be allowed after that.
-## Data Use Management best practices
-- We highly encourage registering data sources for *Data Use Management* and managing all associated access policies in a single Microsoft Purview account.-- Should you have multiple Microsoft Purview accounts, be aware that **all** data sources belonging to a subscription must be registered for *Data Use Management* in a single Microsoft Purview account. That Microsoft Purview account can be in any subscription in the tenant. The *Data Use Management* toggle will become greyed out when there are invalid configurations. Some examples of valid and invalid configurations follow in the diagram below:
+## Data use management best practices
+- We highly encourage registering data sources for *Data use management* and managing all associated access policies in a single Microsoft Purview account.
+- Should you have multiple Microsoft Purview accounts, be aware that **all** data sources belonging to a subscription must be registered for *Data use management* in a single Microsoft Purview account. That Microsoft Purview account can be in any subscription in the tenant. The *Data use management* toggle will become greyed out when there are invalid configurations. Some examples of valid and invalid configurations follow in the diagram below:
- **Case 1** shows a valid configuration where a Storage account is registered in a Microsoft Purview account in the same subscription. - **Case 2** shows a valid configuration where a Storage account is registered in a Microsoft Purview account in a different subscription.
- - **Case 3** shows an invalid configuration arising because Storage accounts S3SA1 and S3SA2 both belong to Subscription 3, but are registered to different Microsoft Purview accounts. In that case, the *Data Use Management* toggle will only enable in the Microsoft Purview account that wins and registers a data source in that subscription first. The toggle will then be greyed out for the other data source.
-- If the *Data Use Management* toggle is greyed out and cannot be enabled, hover over it to know the name of the Microsoft Purview account that has registered the data resource first.
+ - **Case 3** shows an invalid configuration arising because Storage accounts S3SA1 and S3SA2 both belong to Subscription 3, but are registered to different Microsoft Purview accounts. In that case, the *Data use management* toggle will only enable in the Microsoft Purview account that registers a data source in that subscription first. The toggle will then be greyed out for the other data source.
+- If the *Data use management* toggle is greyed out and cannot be enabled, hover over it to see the name of the Microsoft Purview account that registered the data resource first.
-![Diagram shows valid and invalid configurations when using multiple Microsoft Purview accounts to manage policies.](./media/access-policies-common/valid-and-invalid-configurations.png)
+![Diagram shows valid and invalid configurations when using multiple Microsoft Purview accounts to manage policies.](./media/how-to-policies-data-owner-authoring-generic/valid-and-invalid-configurations.png)
## Next steps
purview How To Policies Data Owner Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-authoring-generic.md
Previously updated : 05/27/2022 Last updated : 08/22/2022 # Authoring and publishing data owner access policies (Preview)
Before authoring data policies in the Microsoft Purview governance portal, you'l
1. Follow any policy-specific prerequisites for your source. Check the [Microsoft Purview supported data sources table](microsoft-purview-connector-overview.md) and select the link in the **Access Policy** column for sources where access policies are available. Follow any steps listed in the Access policy or Prerequisites sections. 1. Register the data source in Microsoft Purview. Follow the **Prerequisites** and **Register** sections of the [source pages](microsoft-purview-connector-overview.md) for your resources.
-1. [Enable the Data Use Management toggle on the data source](how-to-enable-data-use-management.md#enable-data-use-management). Additional permissions for this step are described in the linked document.
+1. [Enable the Data use management toggle on the data source](how-to-enable-data-use-management.md#enable-data-use-management). Additional permissions for this step are described in the linked document.
## Create a new policy This section describes the steps to create a new policy in Microsoft Purview.
-Ensure you have the *Policy Author* permission as described [here](#permissions-for-policy-authoring-and-publishing)
+Ensure you have the *Policy Author* permission as described [here](#permissions-for-policy-authoring-and-publishing).
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
Ensure you have the *Policy Author* permission as described [here](#permissions-
1. Select the **New Policy** button in the policy page.
- :::image type="content" source="./media/access-policies-common/policy-onboard-guide-1.png" alt-text="Data owner can access the Policy functionality in Microsoft Purview when it wants to create policies.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/policy-onboard-guide-1.png" alt-text="Screenshot showing data owner can access the Policy functionality in Microsoft Purview when it wants to create policies.":::
1. The new policy page will appear. Enter the policy **Name** and **Description**. 1. To add policy statements to the new policy, select the **New policy statement** button. This will bring up the policy statement builder.
- :::image type="content" source="./media/access-policies-common/create-new-policy.png" alt-text="Data owner can create a new policy statement.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/create-new-policy.png" alt-text="Screenshot showing data owner can create a new policy statement.":::
1. Select the **Effect** button and choose *Allow* from the drop-down list.
Ensure you have the *Policy Author* permission as described [here](#permissions-
- To create a broad policy statement that covers an entire data source, resource group, or subscription that was previously registered, use the **Data sources** box and select its **Type**. - To create a fine-grained policy, use the **Assets** box instead. Enter the **Data Source Type** and the **Name** of a previously registered and scanned data source. See example in the image.
- :::image type="content" source="./media/access-policies-common/select-data-source-type.png" alt-text="Data owner can select a Data Resource when editing a policy statement.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-data-source-type.png" alt-text="Screenshot showing data owner can select a Data Resource when editing a policy statement.":::
1. Select the **Continue** button and traverse the hierarchy to select an underlying data-object (for example: folder, file, etc.). Select **Recursive** to apply the policy from that point in the hierarchy down to any child data-objects. Then select the **Add** button. This will take you back to the policy editor.
- :::image type="content" source="./media/access-policies-common/select-asset.png" alt-text="Data owner can select the asset when creating or editing a policy statement.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-asset.png" alt-text="Screenshot showing data owner can select the asset when creating or editing a policy statement.":::
1. Select the **Subjects** button and enter the subject identity as a principal, group, or MSI. Then select the **OK** button. This will take you back to the policy editor
- :::image type="content" source="./media/access-policies-common/select-subject.png" alt-text="Data owner can select the subject when creating or editing a policy statement.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-subject.png" alt-text="Screenshot showing data owner can select the subject when creating or editing a policy statement.":::
1. Repeat steps 5 through 11 to enter any more policy statements.
The steps to publish a policy are as follows:
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
- :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Data owner can access the Policy functionality in Microsoft Purview when it wants to update a policy by selecting 'Data policies'.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/policy-onboard-guide-2.png" alt-text="Screenshot showing data owner can access the Policy functionality in Microsoft Purview when it wants to update a policy by selecting Data policies.":::
1. The Policy portal will present the list of existing policies in Microsoft Purview. Locate the policy that needs to be published. Select the **Publish** button on the right top corner of the page.
- :::image type="content" source="./media/access-policies-common/publish-policy.png" alt-text="Data owner can publish a policy.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/publish-policy.png" alt-text="Screenshot showing data owner can publish a policy.":::
1. A list of data sources is displayed. You can enter a name to filter the list. Then, select each data source where this policy is to be published and then select the **Publish** button.
- :::image type="content" source="./media/access-policies-common/select-data-sources-publish-policy.png" alt-text="Data owner can select the data source where the policy will be published.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-data-sources-publish-policy.png" alt-text="Screenshot showing data owner can select the data source where the policy will be published.":::
>[!Note] > After making changes to a policy, there is no need to publish it again for it to take effect if the data source(s) continues to be the same.
Ensure you have the *Policy Author* permission as described [here](#permissions-
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
- :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Data owner can access the Policy functionality in Microsoft Purview when it wants to update a policy.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/policy-onboard-guide-2.png" alt-text="Screenshot showing data owner can access the Policy functionality in Microsoft Purview when it wants to update a policy.":::
1. The Policy portal will present the list of existing policies in Microsoft Purview. Select the policy that needs to be updated. 1. The policy details page will appear, including Edit and Delete options. Select the **Edit** button, which brings up the policy statement builder. Now, any parts of the statements in this policy can be updated. To delete the policy, use the **Delete** button.
- :::image type="content" source="./media/access-policies-common/edit-policy.png" alt-text="Data owner can edit or delete a policy statement.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/edit-policy.png" alt-text="Screenshot showing data owner can edit or delete a policy statement.":::
## Next steps
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
To register your resource and enable Data Use Management, follow these steps:
1. Select the **New Policy** button in the policy page.
- :::image type="content" source="./media/access-policies-common/policy-onboard-guide-1.png" alt-text="Data owner can access the Policy functionality in Microsoft Purview when it wants to create policies.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/policy-onboard-guide-1.png" alt-text="Data owner can access the Policy functionality in Microsoft Purview when it wants to create policies.":::
1. The new policy page will appear. Enter the policy **Name** and **Description**. 1. To add policy statements to the new policy, select the **New policy statement** button. This will bring up the policy statement builder.
- :::image type="content" source="./media/access-policies-common/create-new-policy.png" alt-text="Data owner can create a new policy statement.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/create-new-policy.png" alt-text="Data owner can create a new policy statement.":::
1. Select the **Effect** button and choose *Allow* from the drop-down list.
To register your resource and enable Data Use Management, follow these steps:
- To create a broad policy statement that covers an entire data source, resource group, or subscription that was previously registered, use the **Data sources** box and select its **Type**. - To create a fine-grained policy, use the **Assets** box instead. Enter the **Data Source Type** and the **Name** of a previously registered and scanned data source. See example in the image.
- :::image type="content" source="./media/access-policies-common/select-data-source-type.png" alt-text="Screenshot showing the policy editor, with Data Resources selected, and Data source Type highlighted in the data resources menu.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-data-source-type.png" alt-text="Screenshot showing the policy editor, with Data Resources selected, and Data source Type highlighted in the data resources menu.":::
1. Select the **Continue** button and traverse the hierarchy to select an underlying data-object (for example: folder, file, etc.). Select **Recursive** to apply the policy from that point in the hierarchy down to any child data-objects. Then select the **Add** button. This will take you back to the policy editor.
- :::image type="content" source="./media/access-policies-common/select-asset.png" alt-text="Screenshot showing the Select asset menu, and the Add button highlighted.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-asset.png" alt-text="Screenshot showing the Select asset menu, and the Add button highlighted.":::
1. Select the **Subjects** button and enter the subject identity as a principal, group, or MSI. Then select the **OK** button. This will take you back to the policy editor
- :::image type="content" source="./media/access-policies-common/select-subject.png" alt-text="Screenshot showing the Subject menu, with a subject select from the search and the OK button highlighted at the bottom.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-subject.png" alt-text="Screenshot showing the Subject menu, with a subject selected from the search and the OK button highlighted at the bottom.":::
1. Repeat steps 5 through 11 to enter any more policy statements.
To register your resource and enable Data Use Management, follow these steps:
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
- :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Screenshot showing the Microsoft Purview governance portal with the leftmost menu open, Policy Management highlighted, and Data Policies selected on the next page.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/policy-onboard-guide-2.png" alt-text="Screenshot showing the Microsoft Purview governance portal with the leftmost menu open, Policy Management highlighted, and Data Policies selected on the next page.":::
1. The Policy portal will present the list of existing policies in Microsoft Purview. Locate the policy that needs to be published. Select the **Publish** button on the right top corner of the page.
- :::image type="content" source="./media/access-policies-common/publish-policy.png" alt-text="Screenshot showing the policy editing menu with the Publish button highlighted in the top right of the page.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/publish-policy.png" alt-text="Screenshot showing the policy editing menu with the Publish button highlighted in the top right of the page.":::
1. A list of data sources is displayed. You can enter a name to filter the list. Then, select each data source where this policy is to be published and then select the **Publish** button.
- :::image type="content" source="./media/access-policies-common/select-data-sources-publish-policy.png" alt-text="Screenshot showing with Policy publish menu with a data resource selected and the publish button highlighted.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-data-sources-publish-policy.png" alt-text="Screenshot showing the Policy publish menu with a data resource selected and the Publish button highlighted.":::
>[!Important] > - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
To delete a policy in Microsoft Purview, follow these steps:
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
- :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Screenshot showing the leftmost menu open, Policy Management highlighted, and Data Policies selected on the next page.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/policy-onboard-guide-2.png" alt-text="Screenshot showing the leftmost menu open, Policy Management highlighted, and Data Policies selected on the next page.":::
1. The Policy portal will present the list of existing policies in Microsoft Purview. Select the policy that needs to be updated. 1. The policy details page will appear, including Edit and Delete options. Select the **Edit** button, which brings up the policy statement builder. Now, any parts of the statements in this policy can be updated. To delete the policy, use the **Delete** button.
- :::image type="content" source="./media/access-policies-common/edit-policy.png" alt-text="Screenshot showing an open policy with the Edit button highlighted in the top menu on the page.":::
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/edit-policy.png" alt-text="Screenshot showing an open policy with the Edit button highlighted in the top menu on the page.":::
## Next steps
search Cognitive Search Custom Skill Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-python.md
Title: 'Custom skill example (Python)'
-description: For Python developers, learn the tools and techniques for building a custom skill using Azure Functions and Visual Studio. Custom skills contain user-defined models or logic that you can add to an AI-enriched indexing pipeline in Azure Cognitive Search.
+description: For Python developers, learn the tools and techniques for building a custom skill using Azure Functions and Visual Studio Code. Custom skills contain user-defined models or logic that you can add to a skillset for AI-enriched indexing in Azure Cognitive Search.
--++ Previously updated : 01/15/2020- Last updated : 08/22/2022+ # Example: Create a custom skill using Python
-In this Azure Cognitive Search skillset example, you will learn how to create a web API custom skill using Python and Visual Studio Code. The example uses an [Azure Function](https://azure.microsoft.com/services/functions/) that implements the [custom skill interface](cognitive-search-custom-skill-interface.md).
+In this Azure Cognitive Search skillset example, you'll learn how to create a web API custom skill using Python and Visual Studio Code. The example uses an [Azure Function](https://azure.microsoft.com/services/functions/) that implements the [custom skill interface](cognitive-search-custom-skill-interface.md).
-The custom skill is simple by design (it concatenates two strings) so that you can focus on the tools and technologies used for custom skill development in Python. Once you succeed with a simple skill, you can branch out with more complex scenarios.
+The custom skill is simple by design (it concatenates two strings) so that you can focus on the pattern. Once you succeed with a simple skill, you can branch out with more complex scenarios.
## Prerequisites
-+ Review the [custom skill interface](cognitive-search-custom-skill-interface.md) for an introduction into the input/output interface that a custom skill should implement.
++ Review the [custom skill interface](cognitive-search-custom-skill-interface.md) for an introduction to the inputs and outputs that a custom skill should implement.
-+ Set up your environment. We followed [this tutorial end-to-end](/azure/python/tutorial-vs-code-serverless-python-01) to set up serverless Azure Function using Visual Studio Code and Python extensions. The tutorial leads you through installation of the following tools and components:
++ Set up your environment. We followed [Quickstart: Create a function in Azure with Python using Visual Studio Code](/azure/python/tutorial-vs-code-serverless-python-01) to set up a serverless Azure Function using Visual Studio Code and the Python extensions. The quickstart leads you through installation of the following tools and components:
- + [Python 3.75](https://www.python.org/downloads/release/python-375/)
+ + [Python 3.7.5 or later](https://www.python.org/downloads/release/python-375/)
+ [Visual Studio Code](https://code.visualstudio.com/) + [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python) + [Azure Functions Core Tools](../azure-functions/functions-run-local.md#v2)
The custom skill is simple by design (it concatenates two strings) so that you c
This example uses an Azure Function to demonstrate the concept of hosting a web API, but other approaches are possible. As long as you meet the [interface requirements for a cognitive skill](cognitive-search-custom-skill-interface.md), the approach you take is immaterial. Azure Functions, however, make it easy to create a custom skill.
-### Create a function app
+### Create a project for the function
-The Azure Functions project template in Visual Studio Code creates a project that can be published to a function app in Azure. A function app lets you group functions as a logical unit for management, deployment, and sharing of resources.
+The Azure Functions project template in Visual Studio Code creates a local project that can be published to a function app in Azure. A function app lets you group functions as a logical unit for management, deployment, and sharing of resources.
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select `Azure Functions: Create new project...`.-
-1. Choose a directory location for your project workspace and choose **Select**.
-
- > [!NOTE]
- > These steps were designed to be completed outside of a workspace. For this reason, do not select a project folder that is part of a workspace.
-
+1. Choose a directory location for your project workspace and choose **Select**. Don't use a project folder that is already part of another workspace.
1. Select a language for your function app project. For this tutorial, select **Python**.
-1. Select the Python version, (version 3.7.5 is supported by Azure Functions)
+1. Select the Python version (version 3.7.5 is supported by Azure Functions).
1. Select a template for your project's first function. Select **HTTP trigger** to create an HTTP triggered function in the new function app. 1. Provide a function name. In this case, let's use **Concatenator**
-1. Select **Function** as the Authorization level. This means that we will provide a [function key](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) to call the function's HTTP endpoint.
-1. Select how you would like to open your project. For this step, select **Add to workspace** to create the function app in the current workspace.
+1. Select **Function** as the Authorization level. You'll use a [function access key](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) to call the function's HTTP endpoint.
+1. Specify how you would like to open your project. For this step, select **Add to workspace** to create the function app in the current workspace.
Visual Studio Code creates the function app project in a new workspace. This project contains the [host.json](../azure-functions/functions-host-json.md) and [local.settings.json](../azure-functions/functions-develop-local.md#local-settings-file) configuration files, plus any language-specific project files.
def main(req: func.HttpRequest) -> func.HttpResponse:
```
-Now let's modify that code to follow the [custom skill interface](cognitive-search-custom-skill-interface.md)). Modify the code with the following content:
+Now let's modify that code to follow the [custom skill interface](cognitive-search-custom-skill-interface.md). Replace the default code with the following content:
```py import logging
def transform_value(value):
}) ```
-The **transform_value** method performs an operation on a single record. You may modify the method to meet your specific needs. Remember to do any necessary input validation and to return any errors and warnings produced if the operation could not be completed for the record.
+The **transform_value** method performs an operation on a single record. You can modify the method to meet your specific needs. Remember to do any necessary input validation and to return any errors and warnings if the operation can't be completed.
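For orientation, here's a condensed sketch of the per-record pattern that paragraph describes: validate the input, return an error entry when the record can't be processed, and otherwise return the enriched data keyed by `recordId`. It isn't a replacement for the full listing above; the field names (`text1`, `text2`, and the output `text`) are simply the ones assumed for the Concatenator example.

```py
def transform_value(value):
    # A record without a recordId can't be correlated with the response, so skip it.
    record_id = value.get('recordId')
    if record_id is None:
        return None

    # Input validation: both strings must be present for the concatenation to succeed.
    data = value.get('data')
    if not isinstance(data, dict) or 'text1' not in data or 'text2' not in data:
        return {
            "recordId": record_id,
            "errors": [{"message": "Error: text1 and text2 fields are required."}]
        }

    # The operation itself: concatenate the two strings and return them as the enrichment.
    return {
        "recordId": record_id,
        "data": {"text": data['text1'] + " " + data['text2']},
        "warnings": None
    }
```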
### Debug your code locally
You can set breakpoints in the code by pressing F9 on the line of interest.
Once you've started debugging, your function runs locally. You can use a tool like Postman or Fiddler to issue requests to localhost. Note the location of your local endpoint in the Terminal window.
-## Publish your function
+## Create a function app in Azure
+
+When you're satisfied with the function behavior, you can publish it. So far you've been working locally. In this section, you'll create a function app in Azure and then deploy the local project to the app you created.
+
+### Create the app from Visual Studio Code
+
+1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select **Create Function App in Azure**.
+
+1. If you have multiple active subscriptions, select the subscription for this app.
-When you're satisfied with the function behavior, you can publish it.
+1. Enter a globally unique name for the function app. Type a name that is valid for a URL.
-1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select **Deploy to Function App...**.
+1. Select a runtime stack and choose the language version on which you've been running locally.
-1. Select the Azure Subscription where you would like to deploy your application.
+1. Select a location for your app. If possible, choose the same region that also hosts your search service.
-1. Select **+ Create New Function App in Azure**
+It takes a few minutes to create the app. When it's ready, you'll see the new app under **Resources** and **Function App** of the active subscription.
-1. Enter a globally unique name for your function app.
+### Deploy to Azure
-1. Select Python version (Python 3.7.x works for this function).
+1. Still in Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select **Deploy to Function App...**.
-1. Select a location for the new resource (for example, West US 2).
+1. Select the function app you created.
-At this point, the necessary resources will be created in your Azure subscription to host the new Azure Function on Azure. Wait for the deployment to complete. The output window will show you the status of the deployment process.
+1. Confirm that you want to continue, and then select **Deploy**. You can monitor the deployment status in the output window.
-1. In the [Azure portal](https://portal.azure.com), navigate to **All Resources** and look for the function you published by its name. If you named it **Concatenator**, select the resource.
+1. Switch to the [Azure portal](https://portal.azure.com) and navigate to **All Resources**. Search for the function app you deployed, using the globally unique name you provided in a previous step.
-1. Click the **</> Get Function URL** button. This will allow you to copy the URL to call the function.
+ > [!TIP]
+ > You can also right-click the function app in Visual Studio Code and select **Open in Portal**.
+
+1. In the portal, on the left, select **Functions**, and then select the function you created.
+
+1. In the function's overview page, select **Get Function URL** in the command bar at the top. This allows you to copy the URL to call the function.
+
+ :::image type="content" source="media/cognitive-search-custom-skill-python/get-function-url.png" alt-text="Screenshot of the Get Function URL command in Azure portal." border="true":::
## Test the function in Azure
-Now that you have the default host key, test your function as follows:
+Using the default host key and URL that you copied, test your function from within the Azure portal.
-```http
-POST [Function URL you copied above]
-```
+1. On the left, under Developer, select **Code + Test**.
-### Request Body
-```json
-{
- "values": [
- {
- "recordId": "e1",
- "data":
+1. Select **Test/Run** in the command bar.
+
+1. For input, use **Post**, the default key, and then paste in the request body:
+
+ ```json
+ {
+ "values": [
+ {
+ "recordId": "e1",
+ "data":
+ {
+ "text1": "Hello",
+ "text2": "World"
+ }
+ },
{
- "text1": "Hello",
- "text2": "World"
+ "recordId": "e2",
+ "data": "This is an invalid input"
}
- },
- {
- "recordId": "e2",
- "data": "This is an invalid input"
- }
- ]
-}
-```
+ ]
+ }
+ ```
+
+1. Select **Run**.
+
+ :::image type="content" source="media/cognitive-search-custom-skill-python/test-run-function.png" alt-text="Screenshot of the input specification." border="true":::
This example should produce the same result you saw previously when running the function in the local environment.
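If the function behaves as expected, the response resembles the following sketch: the valid record comes back with its concatenated value, and the invalid record comes back with an error entry instead of data. The exact output field name and error text depend on your function code; `text` is assumed here.

```json
{
    "values": [
        {
            "recordId": "e1",
            "data": {
                "text": "Hello World"
            }
        },
        {
            "recordId": "e2",
            "errors": [
                {
                    "message": "Error processing the record."
                }
            ]
        }
    ]
}
```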
-## Connect to your pipeline
+## Add to a skillset
-Now that you have a new custom skill, you can add it to your skillset. The example below shows you how to call the skill to
-concatenate the Title and the Author of the document into a single field which we call merged_title_author. Replace `[your-function-url-here]` with the URL of your new Azure Function.
+Now that you have a new custom skill, you can add it to your skillset. The example below shows you how to call the skill to concatenate the Title and the Author of the document into a single field, which we call merged_title_author.
+
+Replace `[your-function-url-here]` with the URL of your new Azure Function.
```json { "skills": [
- "[... your existing skills remain here]",
+ "[... other existing skills in the skillset are here]",
{ "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill", "description": "Our new search custom skill",
concatenate the Title and the Author of the document into a single field which w
} ```
+Remember to add an "outputFieldMappings" entry in the indexer definition to send "merged_title_author" to a "fullname" field in the search index.
+
+```json
+"outputFieldMappings": [
+ {
+ "sourceFieldName": "/document/content/merged_title_author",
+ "targetFieldName": "fullname"
+ }
+]
+```
+ ## Next steps
-Congratulations! You've created your first custom skill. Now you can follow the same pattern to add your own custom functionality. Click the following links to learn more.
+
+Congratulations! You've created your first custom skill. Now you can follow the same pattern to add your own custom functionality. Select the following links to learn more.
+ [Power Skills: a repository of custom skills](https://github.com/Azure-Samples/azure-search-power-skills) + [Add a custom skill to an AI enrichment pipeline](cognitive-search-custom-skill-interface.md)
search Cognitive Search Custom Skill Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-web-api.md
Previously updated : 03/25/2022 Last updated : 08/20/2022 # Custom Web API skill in an Azure Cognitive Search enrichment pipeline
Parameters are case-sensitive.
| Parameter name | Description | |--|-|
-| `uri` | The URI of the Web API to which the JSON payload will be sent. Only **https** URI scheme is allowed |
+| `uri` | The URI of the Web API to which the JSON payload will be sent. Only the **https** URI scheme is allowed. |
| `authResourceId` | (Optional) A string that if set, indicates that this skill should use a managed identity on the connection to the function or app hosting the code. The value of this property is the application (client) ID of the function or app's registration in Azure Active Directory. This value will be used to scope the authentication token retrieved by the indexer, and will be sent along with the custom Web skill API request to the function or app. Setting this property requires that your search service is [configured for managed identity](search-howto-managed-identities-data-sources.md) and your Azure function app is [configured for an Azure AD login](../app-service/configure-authentication-provider-aad.md). | | `httpMethod` | The method to use while sending the payload. Allowed methods are `PUT` or `POST` | | `httpHeaders` | A collection of key-value pairs where the keys represent header names and values represent header values that will be sent to your Web API along with the payload. The following headers are prohibited from being in this collection: `Accept`, `Accept-Charset`, `Accept-Encoding`, `Content-Length`, `Content-Type`, `Cookie`, `Host`, `TE`, `Upgrade`, `Via`. | | `timeout` | (Optional) When specified, indicates the timeout for the http client making the API call. It must be formatted as an XSD "dayTimeDuration" value (a restricted subset of an [ISO 8601 duration](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration) value). For example, `PT60S` for 60 seconds. If not set, a default value of 30 seconds is chosen. The timeout can be set to a maximum of 230 seconds and a minimum of 1 second. | | `batchSize` | (Optional) Indicates how many "data records" (see JSON payload structure below) will be sent per API call. If not set, a default of 1000 is chosen. We recommend that you make use of this parameter to achieve a suitable tradeoff between indexing throughput and load on your API. |
-| `degreeOfParallelism` | (Optional) When specified, indicates the number of calls the indexer will make in parallel to the endpoint you have provided. You can decrease this value if your endpoint is failing under too high of a request load, or raise it if your endpoint is able to accept more requests and you would like an increase in the performance of the indexer. If not set, a default value of 5 is used. The `degreeOfParallelism` can be set to a maximum of 10 and a minimum of 1. |
+| `degreeOfParallelism` | (Optional) When specified, indicates the number of calls the indexer will make in parallel to the endpoint you have provided. You can decrease this value if your endpoint is failing under pressure, or raise it if your endpoint can handle the load. If not set, a default value of 5 is used. The `degreeOfParallelism` can be set to a maximum of 10 and a minimum of 1. |
## Skill inputs
-There are no predefined inputs for this skill. You can choose one or more fields that would be already available at the time of this skill's execution as inputs and the JSON payload sent to the Web API will have different fields.
+There are no predefined inputs for this skill. The inputs are any existing field, or any [node in the enrichment tree](cognitive-search-working-with-skillsets.md#enrichment-tree) that you want to pass to your custom skill.
## Skill outputs
-There are no predefined outputs for this skill. Depending on the response your Web API will return, add output fields so that they can be picked up from the JSON response.
+There are no predefined outputs for this skill. Be sure to [define an output field mapping](cognitive-search-output-field-mapping.md) in the indexer if the skill's output should be sent to a field in the search index.
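As a hedged illustration, an indexer-level mapping for a custom skill output might look like the following; the source path and target field name are placeholders and depend on your skillset's output targets and your index schema.

```json
"outputFieldMappings": [
    {
        "sourceFieldName": "/document/content/customEnrichment",
        "targetFieldName": "customEnrichment"
    }
]
```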
## Sample definition
This JSON structure represents the payload that will be sent to your Web API.
It will always follow these constraints: * The top-level entity is called `values` and will be an array of objects. The number of such objects will be at most the `batchSize`.+ * Each object in the `values` array will have:
- * A `recordId` property that is a **unique** string, used to identify that record.
- * A `data` property that is a JSON object. The fields of the `data` property will correspond to the "names" specified in the `inputs` section of the skill definition. The value of those fields will be from the `source` of those fields (which could be from a field in the document, or potentially from another skill).
+
+ * A `recordId` property that is a **unique** string, used to identify that record.
+
+ * A `data` property that is a JSON object. The fields of the `data` property will correspond to the "names" specified in the `inputs` section of the skill definition. The value of those fields will be from the `source` of those fields (which could be from a field in the document, or potentially from another skill).
```json {
It will always follow these constraints:
The "output" corresponds to the response returned from your Web API. The Web API should only return a JSON payload (verified by looking at the `Content-Type` response header) and should satisfy the following constraints: * There should be a top-level entity called `values` which should be an array of objects.+ * The number of objects in the array should be the same as the number of objects sent to the Web API.+ * Each object should have:
- * A `recordId` property
- * A `data` property, which is an object where the fields are enrichments matching the "names" in the `output` and whose value is considered the enrichment.
- * An `errors` property, an array listing any errors encountered that will be added to the indexer execution history. This property is required, but can have a `null` value.
- * A `warnings` property, an array listing any warnings encountered that will be added to the indexer execution history. This property is required, but can have a `null` value.
+
+ * A `recordId` property.
+
+ * A `data` property, which is an object where the fields are enrichments matching the "names" in the `output` and whose value is considered the enrichment.
+
+ * An `errors` property, an array listing any errors encountered that will be added to the indexer execution history. This property is required, but can have a `null` value.
+
+ * A `warnings` property, an array listing any warnings encountered that will be added to the indexer execution history. This property is required, but can have a `null` value.
* The ordering of objects in the `values` array in either the request or response isn't important. However, the `recordId` is used for correlation, so any record in the response containing a `recordId` that was not part of the original request to the Web API will be discarded. ```json
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-run-reset-indexers.md
Indexer limits vary by the workload. For each workload, the following job limits
| Workload | Maximum duration | Maximum jobs | Execution environment <sup>1</sup> | |-|||--|
-| Text-based indexing | 24 hours | One per search unit <sup>2</sup> | Typically runs on the search service. |
+| Text-based indexing <sup>3</sup> | 2 or 24 hours | One per search unit <sup>2</sup> | Typically runs on the search service. It may also run on an internally managed, multi-tenant content processing cluster. |
| Skills-based indexing | 2 hours | Indeterminate | Typically runs on an internally managed, multi-tenant content processing cluster, depending on how complex the skillset is. A simple skill might execute on your search service if the service has capacity. Otherwise, skills-based indexer jobs execute off-service. Because the content processing cluster is multi-tenant, nodes are added to meet demand. If you experience a delay in on-demand or scheduled execution, it's probably because the system is either adding nodes or waiting for one to become available.| <sup>1</sup> For optimum processing, a search service determines the internal execution environment for the indexer operation. The execution environment is either the search service or a multi-tenant environment that's managed and secured by Microsoft at no extra cost. You cannot control or configure which environment is used. Using an internally managed cluster for skillset processing leaves more service-specific resources available for routine operations like queries and text indexing. <sup>2</sup> Search units can be [flexible combinations](search-capacity-planning.md#partition-and-replica-combinations) of partitions and replicas, and maximum indexer jobs are not tied to one or the other. In other words, if you have four units, you can have four text-based indexer jobs running concurrently, no matter how the search units are deployed.
+<sup>3</sup> Indexer maximum run time for Basic tier or higher can be 2 or 24 hours, depending on system resources, product implementation and other factors.
+ > [!TIP] > If you are [indexing a large data set](search-howto-large-index.md), you can stretch processing out by putting the indexer [on a schedule](search-howto-schedule-indexers.md). For the full list of all indexer-related limits, see [indexer limits](search-limits-quotas-capacity.md#indexer-limits)
After you reset and rerun indexer jobs, you can monitor status from the search s
+ [Indexer operations (REST)](/rest/api/searchservice/indexer-operations) + [Monitor search indexer status](search-howto-monitor-indexers.md) + [Collect and analyze log data](monitor-azure-cognitive-search.md)
-+ [Schedule an indexer](search-howto-schedule-indexers.md)
++ [Schedule an indexer](search-howto-schedule-indexers.md)
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
The following recommended playbooks, and other similar playbooks are available t
- **Create, update, or close playbooks** can create, update, or close incidents in Microsoft Sentinel, Microsoft 365 security services, or other ticketing systems: - [Change an incident's severity](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Change-Incident-Severity)
- - [Create a ServiceNow incident](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Create-SNOW-record)
+ - [Create a ServiceNow incident](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Servicenow/Playbooks/Create-SNOW-record)
## Next steps
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus-reference.md
The following two types of errors are classified as **user errors**:
| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | | - | - | -- | | | | |Active Connections| No | Count | Total | The number of active connections on a namespace and on an entity in the namespace. Value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time may not be reflected in the metric. | |
-|Connections Opened | No | Count | Average | The number of open connections. | Entity name|
-|Connections Closed | No | Count | Average | The number of closed connections. | Entity name|
+|Connections Opened | No | Count | Average | The number of connections opened. Value for this metric is an aggregation, and includes all connections that were opened in the aggregation time window. | Entity name|
+|Connections Closed | No | Count | Average | The number of connections closed. Value for this metric is an aggregation, and includes all connections that were closed in the aggregation time window. | Entity name|
### Resource usage metrics
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
This section shows you how to create a .NET Core console application to send mes
// of the application, which is best practice when messages are being published or read // regularly. //
- // Create the clients that we'll use for sending and processing messages.
- client = new ServiceBusClient(connectionString);
+ // set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
+
+ var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
+ client = new ServiceBusClient(connectionString, clientOptions);
sender = client.CreateSender(queueName); // create a batch
This section shows you how to create a .NET Core console application to send mes
// of the application, which is best practice when messages are being published or read // regularly. //
- // Create the clients that we'll use for sending and processing messages.
- client = new ServiceBusClient(connectionString);
+ // set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
+
+ var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
+ client = new ServiceBusClient(connectionString, clientOptions);
sender = client.CreateSender(queueName); // create a batch
In this section, you'll add code to retrieve messages from the queue.
// of the application, which is best practice when messages are being published or read // regularly. //
+ // set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
- // Create the client object that will be used to create sender and receiver objects
- client = new ServiceBusClient(connectionString);
-
+ var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
+ client = new ServiceBusClient(connectionString, clientOptions);
+
// create a processor that we can use to process the messages processor = client.CreateProcessor(queueName, new ServiceBusProcessorOptions());
In this section, you'll add code to retrieve messages from the queue.
// of the application, which is best practice when messages are being published or read // regularly. //
+ // set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
- // Create the client object that will be used to create sender and receiver objects
- client = new ServiceBusClient(connectionString);
+ var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
+ client = new ServiceBusClient(connectionString, clientOptions);
// create a processor that we can use to process the messages processor = client.CreateProcessor(queueName, new ServiceBusProcessorOptions());
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-sas.md
The rights conferred by the policy rule can be a combination of:
The 'Manage' right includes the 'Send' and 'Receive' rights.
-A namespace or entity policy can hold up to 12 Shared Access Authorization rules, providing room for three sets of rules, each covering the basic rights and the combination of Send and Listen. This limit underlines that the SAS policy store isn't intended to be a user or service account store. If your application needs to grant access to Service Bus based on user or service identities, it should implement a security token service that issues SAS tokens after an authentication and access check.
+A namespace or entity policy can hold up to 12 Shared Access Authorization rules, providing room for three sets of rules, each covering the basic rights and the combination of Send and Listen. The limit applies per entity: the namespace and each entity in it can each hold up to 12 Shared Access Authorization rules. This limit underlines that the SAS policy store isn't intended to be a user or service account store. If your application needs to grant access to Service Bus based on user or service identities, it should implement a security token service that issues SAS tokens after an authentication and access check.
An authorization rule is assigned a *Primary Key* and a *Secondary Key*. These keys are cryptographically strong keys. Don't lose them or leak them - they'll always be available in the [Azure portal][Azure portal]. You can use either of the generated keys, and you can regenerate them at any time. If you regenerate or change a key in the policy, all previously issued tokens based on that key become instantly invalid. However, ongoing connections created based on such tokens will continue to work until the token expires.
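To make the token service idea concrete, here's a minimal sketch of how a SAS token can be generated from a policy name and one of its keys. The namespace, entity path, policy name, and key below are placeholders; the token signs the URL-encoded resource URI plus an expiry timestamp with HMAC-SHA256.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, policy_name, policy_key, ttl_seconds=3600):
    """Build a Service Bus SAS token scoped to the given resource URI."""
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    expiry = str(int(time.time() + ttl_seconds))
    # Sign the URL-encoded resource URI and the expiry with the policy key.
    string_to_sign = (encoded_uri + '\n' + expiry).encode('utf-8')
    signature = base64.b64encode(
        hmac.new(policy_key.encode('utf-8'), string_to_sign, hashlib.sha256).digest()
    ).decode('utf-8')
    return 'SharedAccessSignature sr={}&sig={}&se={}&skn={}'.format(
        encoded_uri, urllib.parse.quote(signature), expiry, policy_name)

# Example with placeholder values: a token scoped to a single queue.
token = generate_sas_token(
    'https://contoso.servicebus.windows.net/myqueue',
    'my-send-policy',
    '<policy-key>')
```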
spring-apps Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-key-vault.md
export SERVICE_IDENTITY=$(az spring app show --name "springapp" -s "myspringclou
### [User-assigned managed identity](#tab/user-assigned-managed-identity)
-First, create a user-assigned managed identity in advance with its resource ID set to `$USER_IDENTITY_RESOURCE_ID`.
+First, create a user-assigned managed identity in advance with its resource ID set to `$USER_IDENTITY_RESOURCE_ID`. Save the client ID for the property configuration below.
```azurecli export SERVICE_IDENTITY={principal ID of user-assigned managed identity} export USER_IDENTITY_RESOURCE_ID={resource ID of user-assigned managed identity}
-export USER_IDENTITY_CLIENT_ID={client ID of user-assigned managed identity}
``` The following example creates an app named `springapp` with a user-assigned managed identity, as requested by the `--user-assigned` parameter.
This app will have access to get secrets from Azure Key Vault. Use the Azure Key
1. Use the following command to generate a sample project from `start.spring.io` with Azure Key Vault Spring Starter. ```azurecli
- curl https://start.spring.io/starter.tgz -d dependencies=web,azure-keyvault-secrets -d baseDir=springapp -d bootVersion=2.3.1.RELEASE -d javaVersion=1.8 | tar -xzvf -
+ curl https://start.spring.io/starter.tgz -d dependencies=web,azure-keyvault -d baseDir=springapp -d bootVersion=2.7.2 -d javaVersion=1.8 | tar -xzvf -
``` 1. Specify your Key Vault in your app.
This app will have access to get secrets from Azure Key Vault. Use the Azure Key
### [System-assigned managed identity](#tab/system-assigned-managed-identity) ```properties
-azure.keyvault.enabled=true
-azure.keyvault.uri=https://<your-keyvault-name>.vault.azure.net
+spring.cloud.azure.keyvault.secret.property-sources[0].endpoint=https://<your-keyvault-name>.vault.azure.net
+spring.cloud.azure.keyvault.secret.property-sources[0].credential.managed-identity-enabled=true
``` ### [User-assigned managed identity](#tab/user-assigned-managed-identity) ```properties
-azure.keyvault.enabled=true
-azure.keyvault.uri=https://<your-keyvault-name>.vault.azure.net
-azure.keyvault.client-id={Client ID of user-assigned managed identity}
+spring.cloud.azure.keyvault.secret.property-sources[0].endpoint=https://<your-keyvault-name>.vault.azure.net
+spring.cloud.azure.keyvault.secret.property-sources[0].credential.managed-identity-enabled=true
+spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Client ID of user-assigned managed identity}
```
azure.keyvault.client-id={Client ID of user-assigned managed identity}
} ```
- If you open the *pom.xml* file, you'll see the dependency of `azure-keyvault-secrets-spring-boot-starter`. Add this dependency to your project in your *pom.xml* file.
+ If you open the *pom.xml* file, you'll see the dependency of `spring-cloud-azure-starter-keyvault`.
```xml <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-keyvault-secrets-spring-boot-starter</artifactId>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-keyvault</artifactId>
</dependency> ``` 1. Use the following command to package your sample app. ```azurecli
- mvn clean package
+ ./mvnw clean package -DskipTests
``` 1. Now you can deploy your app to Azure with the following command:
azure.keyvault.client-id={Client ID of user-assigned managed identity}
--resource-group <your-resource-group-name> \ --name "springapp" \ --service <your-Azure-Spring-Apps-instance-name> \
- --jar-path target/demo-0.0.1-SNAPSHOT.jar
+ --artifact-path target/demo-0.0.1-SNAPSHOT.jar
``` 1. To test your app, access the public endpoint or test endpoint by using the following command:
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
To learn how to rehydrate an archived blob to an online tier, see [Rehydrate an
When you rehydrate a blob, you can set the priority for the rehydration operation via the optional *x-ms-rehydrate-priority* header on a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) or [Copy Blob](/rest/api/storageservices/copy-blob) operation. Rehydration priority options include: -- **Standard priority**: The rehydration request will be processed in the order it was received and may take up to 15 hours.
+- **Standard priority**: The rehydration request will be processed in the order it was received and may take up to 15 hours for objects under 10 GB in size.
- **High priority**: The rehydration request will be prioritized over standard priority requests and may complete in less than one hour for objects under 10 GB in size. To check the rehydration priority while the rehydration operation is underway, call [Get Blob Properties](/rest/api/storageservices/get-blob-properties) to return the value of the `x-ms-rehydrate-priority` header. The rehydration priority property returns either *Standard* or *High*.
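In the Blob Storage SDKs, the priority surfaces as a parameter on the tier-change call. A minimal sketch with the Python `azure-storage-blob` package, assuming placeholder account, container, and blob names:

```python
from azure.storage.blob import BlobClient, StandardBlobTier

# Hypothetical connection details.
blob = BlobClient.from_connection_string(
    conn_str="<storage account connection string>",
    container_name="archive-data",
    blob_name="report-2021.parquet")

# Rehydrate the archived blob to the hot tier with high priority; this maps to
# the x-ms-rehydrate-priority header on the underlying Set Blob Tier operation.
blob.set_standard_blob_tier(StandardBlobTier.Hot, rehydrate_priority="High")

# While rehydration is pending, Get Blob Properties reports the archive status
# and the requested priority.
props = blob.get_blob_properties()
print(props.archive_status, props.rehydrate_priority)
```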
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
The unsupported client list above is not exhaustive and may change over time.
| Extensions | Unsupported extensions include but aren't limited to: fsync@openssh.com, limits@openssh.com, lsetstat@openssh.com, statvfs@openssh.com | | SSH Commands | SFTP is the only supported subsystem. Shell requests after the completion of key exchange will fail. | | Multi-protocol writes | Random writes and appends (`PutBlock`,`PutBlockList`, `GetBlockList`, `AppendBlock`, `AppendFile`) aren't allowed from other protocols (NFS, Blob REST, Data Lake Storage Gen2 REST) on blobs that are created by using SFTP. Full overwrites are allowed.|
+| Rename Operations | Renaming a file when the target file name already exists is a protocol violation. Attempting such an operation returns an error. See [Removing and Renaming Files](https://datatracker.ietf.org/doc/html/draft-ietf-secsh-filexfer-02#section-6.5) for more information. |
## Authentication and authorization
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
_File fidelity_ refers to the multitude of attributes, timestamps, and data that
* Corrupt files are skipped. The copy logs may list different errors for each item that is corrupt on the StorSimple disk: "The request failed due to a fatal device hardware error" or "The file or directory is corrupted or unreadable" or "The access control list (ACL) structure is invalid". * Individual files larger than 4 TiB are skipped. * File path lengths need to be equal to or fewer than 2048 characters. Files and folders with longer paths will be skipped.
+* Reparse points will be skipped. The migration engine can't resolve Microsoft Data Deduplication / SIS reparse points, or those of third parties, which prevents migration of the affected files and folders.
+
+The [troubleshooting section](#troubleshooting) at the end of this article has more details for item level and migration job level error codes and where possible, their mitigation options.
### StorSimple volume backups
Your migration is complete.
> Still have questions or encountered any issues?</br> > We're here to help: :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-migration-email.png" alt-text="Email address in one word: Azure Files migration at microsoft dot com":::
+## Troubleshooting
+
+When using the StorSimple Data Manager migration service, either an entire migration job or individual files may fail for various reasons. The file fidelity section has more details on supported / unsupported scenarios. The following tables list error codes, error details, and where possible, mitigation options.
+
+### Job level errors
+
+|Phase |Error |Details / Mitigation |
+|||-|
+|**Backup** |*Could not find a backup for the parameters specified* |The backup selected for the job run is not found at the time of "Estimation" or "Copy". Ensure that the backup is still present in the StorSimple backup catalog. Sometimes automatic backup retention policies delete backups between selecting them for migration and actually running the migration job for this backup. Consider disabling any backup retention schedules before starting a migration. |
+|**Estimation </br> Configure compute** |*Installation of encryption keys failed* |Your *Service Data Encryption Key* is incorrect. Review the [encryption key section in this article](#storsimple-service-data-encryption-key) for more details and help retrieving the correct key. |
+| |*Batch error* |It is possible that starting up all the internal infrastructure required to perform a migration runs into an issue. Multiple other services are involved in this process. These problems generally resolve themselves when you attempt to run the job again. |
| |*StorSimple Manager encountered an internal error. Wait for a few minutes and then try the operation again. If the issue persists, contact Microsoft Support. (Error code: 1074161829)* |This generic error has multiple causes, but one possibility encountered is that the StorSimple device manager reached the limit of 50 appliances. Check if the most recently run jobs in the device manager have suddenly started to fail with this error, which would suggest this is the problem. The mitigation for this particular issue is to remove any offline StorSimple 8001 appliances created and used by the Data Manager Service. You can file a support ticket or delete them manually in the portal. Make sure to only delete offline 8001 series appliances. |
+|**Estimating Files** |*Clone volume job failed* |This error most likely indicates that you specified a backup that was somehow corrupted. The migration service can't mount or read it. You can try out the backup manually or open a support ticket. |
| |*Cannot proceed as volume is in non-NTFS format* |Only NTFS volumes without deduplication enabled can be used by the migration service. If you have a differently formatted volume, like ReFS or a third-party format, the migration service won't be able to migrate this volume. See the [Known limitations](#known-limitations) section. |
| |*Contact support. No suitable partition found on the disk* |The StorSimple disk that is supposed to have the volume specified for migration doesn't appear to have a partition for said volume. That is unusual and can indicate corruption or management misalignment. Your only option to further investigate this issue is to file a support ticket. |
| |*Timed out* |The estimation phase failing with a timeout is typically an issue with either the StorSimple appliance, or the source volume backup being slow and sometimes even corrupt. If re-running the backup doesn't work, then filing a support ticket is your best course of action. |
| |*Could not find file &lt;path&gt; </br>Could not find a part of the path* |The job definition allows you to provide a source sub-path. This error is shown when that path does not exist. For instance: *\Share1 > \Share\Share1* </br> In this example you've specified \Share1 as a sub-path in the source, mapping to another sub-path in the target. However, the source path does not exist (was it misspelled?). Note: Windows is case preserving but not case sensitive, so specifying *\Share1* and *\share1* is equivalent. Also: target paths that don't exist will be automatically created. |
+| |*This request is not authorized to perform this operation* |This error shows when the source StorSimple storage account or the target storage account with the Azure file share has a firewall setting enabled. You must allow traffic over the public endpoint and not restrict it with further firewall rules. Otherwise the Data Transformation Service will be unable to access either storage account, even if you authorized it. Disable any firewall rules and re-run the job. |
|**Copying Files** |*The account being accessed does not support HTTP* |This is an Azure Files bug that is being fixed. The temporary mitigation is to disable internet routing on the target storage account or use the Microsoft routing endpoint. |
| |*The specified share is full* |If the target is a premium Azure file share, ensure you have provisioned sufficient capacity for the share. Temporary over-provisioning is a common practice. If the target is a standard file share, check that the target share has the "large file share" feature enabled. Standard storage grows as you use the share. However, if you use a legacy storage account as a target, you might encounter a 5 TiB share limit. You will have to manually enable the ["Large file share"](storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account) feature. Fix the limits on the target and re-run the job. |
+
+### Item level errors
+
+During the copy phase of a migration job run, individual namespace items (files and folders) can encounter errors. The following table lists the most common ones and suggests mitigation options when possible.
+
+|Phase |Error |Mitigation |
+|-|--||
+|**Copy** |*-2146233088 </br>The server is busy.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*-2146233088 </br>Operation could not be completed within the specified time.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*Upload timed out or copy not started* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*-2146233029 </br>The operation was cancelled.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*1920 </br>The file cannot be accessed by the system.* |This is a common error when the migration engine encounters a reparse point, link, or junction. They are not supported. These types of files can't be copied. Review the [Known limitations](#known-limitations) section and the [File fidelity](#file-fidelity) section in this article. |
+| |*-2147024891 </br>Access is denied* |This is an error for files that are encrypted in a way that they can't be accessed on the disk. Files that can be read from disk but simply have encrypted content are not affected and can be copied. Your only option is to copy them manually. You can find such items by mounting the affected volume and running the following command: `get-childitem <path> [-Recurse] -Force -ErrorAction SilentlyContinue | Where-Object {$_.Attributes -ge "Encrypted"} | format-list fullname, attributes` |
| |*Not a valid Win32 FileTime. Parameter name: fileTime* |In this case, the file can be accessed but can't be evaluated for copy because a timestamp the migration engine depends on is either corrupted or was written by an application in an incorrect format. There is not much you can do, because you can't change the timestamp in the backup. If retaining this file is important, you can manually copy the latest version of the file (from the last backup containing it), fix the timestamp, and then move it to the target Azure file share. This option doesn't scale very well but is workable for high-value files where you want to have at least one version retained in your target. |
+| |*-2146232798 </br>Safe handle has been closed* |Often a transient error. Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*-2147024413 </br>Fatal device hardware error* |This is a rare error and not actually reported for a physical device, but rather the 8001 series virtualized appliances used by the migration service. The appliance ran into an issue. Files with this error won't stop the migration from proceeding to the next backup. That makes it hard for you to perform a manual copy or retry the backup that contains files with this error. If the files left behind are very important or there is a large number of files, you may need to start the migration of all backups again. Open a support ticket for further investigation. |
+|**Delete </br>(Mirror purging)** |*The specified directory is not empty.* |This error occurs when the migration mode is set to *mirror* and the process that removes items from the Azure file share ran into an issue that prevented it from deleting items. Deletion happens only in the live share, not from previous snapshots. The deletion is necessary because the affected files are not in the current backup and thus must be removed from the live share before the next snapshot. There are two options: Option 1: mount the target Azure file share and delete the files with this error manually. Option 2: you can ignore these errors and continue processing the next backup with an expectation that the target is not identical to source and has some extra items that weren't in the original StorSimple backup. |
+| |*Bad request* |This error indicates that the source file has certain characteristics that could not be copied to the Azure file share. Most notably there could be invisible control characters in a file name or 1 byte of a double byte character in the file name or file path. You can use the copy logs to get path names, copy the files to a temporary location, rename the paths to remove the unsupported characters and then robocopy again to the Azure file share. You can then resume the migration by skipping to the next backup to be processed. |
+++ ## Next steps * Get more familiar with [Azure File Sync: aka.ms/AFS](../file-sync/file-sync-planning.md).
storsimple Storsimple Data Manager Change Default Blob Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-data-manager-change-default-blob-path.md
description: Learn how to set up an Azure function to rename a default blob file
Previously updated : 01/16/2018 Last updated : 08/22/2022 # Change a blob path from the default path + When the StorSimple Data Manager service transforms the data, by default it places the transformed blobs in a storage container as specified during the creation of the target repository. As the blobs arrive at this location, you may want to move these blobs to an alternate location. This article describes how to set up an Azure function to rename a default blob file path and hence move the blobs to a different location. ## Prerequisites
storsimple Storsimple Data Manager Dotnet Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-data-manager-dotnet-jobs.md
description: Learn how to use the .NET SDK within the StorSimple Data Manager se
Previously updated : 01/16/2018 Last updated : 08/22/2022 # Use the .NET SDK to initiate data transformation + ## Overview This article explains how you can use the data transformation feature within the StorSimple Data Manager service to transform StorSimple device data. The transformed data is then consumed by other Azure services in the cloud.
storsimple Storsimple Data Manager Job Using Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-data-manager-job-using-automation.md
description: Learn how to use Azure Automation for triggering StorSimple Data Ma
Previously updated : 01/16/2018 Last updated : 08/22/2022 # Use Azure Automation to trigger a job + This article explains how you can use the data transformation feature within the StorSimple Data Manager service to transform StorSimple device data. You can launch a data transformation job in two ways: - Use the .NET SDK
storsimple Storsimple Partner Csp Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-partner-csp-deploy.md
NA Previously updated : 02/08/2017 Last updated : 08/22/2022 # Deploy StorSimple Virtual Array for Cloud Solution Provider Program + ## Overview StorSimple Virtual Array can be deployed by the Cloud Solution Provider (CSP) partners for their customers. A CSP partner can create a StorSimple Device Manager service. This service can then be used to deploy and manage StorSimple Virtual Array and the associated shares, volumes, and backups.
storsimple Storsimple Partner Csp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-partner-csp-overview.md
NA Previously updated : 02/08/2017 Last updated : 08/22/2022 # What is StorSimple for Cloud Solutions Providers Program? ## Overview
storsimple Storsimple Update1 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-update1-release-notes.md
NA Previously updated : 11/03/2017 Last updated : 08/22/2022 # Update 1.2 release notes for your StorSimple 8000 series device + ## Overview The following release notes describe the new features and identify the critical open issues for StorSimple 8000 Series Update 1.2. They also contain a list of the StorSimple software, driver and disk firmware updates included in this release.
storsimple Storsimple Update2 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-update2-release-notes.md
Title: StorSimple 8000 Series Update 2 release notes | Microsoft Docs
-description: Describes the new features, issues, and workarounds for StorSimple 8000 Series Update 2.
+description: Describes the new features, issues, and work arounds for StorSimple 8000 Series Update 2.
documentationcenter: NA
NA Previously updated : 11/03/2017 Last updated : 08/23/2022 # StorSimple 8000 Series Update 2 release notes + ## Overview The following release notes describe the new features and identify the critical open issues for StorSimple 8000 Series Update 2. They also contain a list of the StorSimple software, driver, and disk firmware updates included in this release.
Update 2 introduces the following new features.
* **Proactive Support** – Update 2 enables Microsoft to pull additional diagnostic information from the device. When our operations team identifies devices that are having problems, we are better equipped to collect information from the device and diagnose issues. **By accepting Update 2, you allow us to provide this proactive support**. ## Issues fixed in Update 2
-The following tables provides a summary of issues that were fixed in Updates 2.
+The following table provides a summary of issues that were fixed in Update 2.
| No. | Feature | Issue | Applies to physical device | Applies to virtual device | | | | | | |
The following tables provides a summary of issues that were fixed in Updates 2.
## Known issues in Update 2 The following table provides a summary of known issues in this release.
-| No. | Feature | Issue | Comments / workaround | Applies to physical device | Applies to virtual device |
+| No. | Feature | Issue | Comments / work around | Applies to physical device | Applies to virtual device |
| | | | | | | | 1 |Disk quorum |In rare instances, if the majority of disks in the EBOD enclosure of an 8600 device are disconnected resulting in no disk quorum, then the storage pool will go offline. It will stay offline even if the disks are reconnected. |You will need to reboot the device. If the issue persists, please contact Microsoft Support for next steps. |Yes |No | | 2 |Incorrect controller ID |When a controller replacement is performed, controller 0 may show up as controller 1. During controller replacement, when the image is loaded from the peer node, the controller ID can show up initially as the peer controller's ID. In rare instances, this behavior may also be seen after a system reboot. |No user action is required. This situation will resolve itself after the controller replacement is complete. |Yes |No |
storsimple Storsimple Update21 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-update21-release-notes.md
Title: StorSimple 8000 Series Update 2.2 release notes | Microsoft Docs
-description: Describes the new features, issues, and workarounds for StorSimple 8000 Series Update 2.2.
+description: Describes the new features, issues, and work arounds for StorSimple 8000 Series Update 2.2.
documentationcenter: NA
NA Previously updated : 08/22/2022 Last updated : 08/23/2022
The following key improvements have been made in Update 2.2.
* **Update reliability improvements** – This release has bug fixes that result in improved update reliability. ## Issues fixed in Update 2.2
-The following tables provide a summary of issues that were fixed in Updates 2.2 and 2.1.
+
+The following table provides a summary of issues that were fixed in Updates 2.2 and 2.1.
| No | Feature | Issue | Applies to physical device | Applies to virtual device | | | | | | |
The following tables provide a summary of issues that were fixed in Updates 2.2
## Known issues in Update 2.2 The following table provides a summary of known issues in this release.
-| No. | Feature | Issue | Comments / workaround | Applies to physical device | Applies to virtual device |
+| No. | Feature | Issue | Comments / work around | Applies to physical device | Applies to virtual device |
| | | | | | | | 1 |Disk quorum |In rare instances, if most disks in the EBOD enclosure of an 8600 device are disconnected resulting in no disk quorum, then the storage pool will go offline. It will stay offline even if the disks are reconnected. |You'll need to reboot the device. If the issue persists, please contact Microsoft Support for next steps. |Yes |No | | 2 |Incorrect controller ID |When a controller replacement is performed, controller 0 may show up as controller 1. During controller replacement, when the image is loaded from the peer node, the controller ID can show up initially as the peer controller's ID. In rare instances, this behavior may also be seen after a system reboot. |No user action is required. This situation will resolve itself after the controller replacement is complete. |Yes |No |
synapse-analytics How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/catalog-and-governance/how-to-access-secured-purview-account.md
To create managed private endpoints for Microsoft Purview on Synapse Studio:
2. Select **Yes** for **Create managed private endpoints**. You need to have "**workspaces/managedPrivateEndpoint/write**" permission, e.g. Synapse Administrator or Synapse Linked Data Manager role.
+ >[!TIP]
+ > If you are not seeing any option to create managed private endpoints, you need to use or create an [Azure Synapse workspace that has the managed virtual network option enabled at creation](../security/synapse-workspace-managed-vnet.md).
+ 3. Click **+ Create all** button to batch create the needed Microsoft Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Microsoft Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need to have at least **Reader** role on your Microsoft Purview account for Synapse to retrieve the Microsoft Purview managed resources' information. :::image type="content" source="./media/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Microsoft Purview account.":::
You can monitor the created managed private endpoints for Microsoft Purview at t
- [Connect Synapse workspace to Microsoft Purview](quickstart-connect-azure-purview.md) - [Metadata and lineage from Azure Synapse Analytics](../../purview/how-to-lineage-azure-synapse-analytics.md)-- [Discover, connect and explore data in Synapse using Microsoft Purview](how-to-discover-connect-analyze-azure-purview.md)
+- [Discover, connect and explore data in Synapse using Microsoft Purview](how-to-discover-connect-analyze-azure-purview.md)
synapse-analytics Quickstart Connect Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md
For **Data Lineage - Synapse Pipeline**, you may see one of below status:
- Cannot reach the Microsoft Purview account from your current network because the account is protected by firewall. You can launch the Synapse Studio from a private network that has connectivity to your Microsoft Purview account instead. - You don't have permission to check role assignments on the Microsoft Purview account. You can contact the Microsoft Purview account admin to check the role assignments for you. Learn about the needed Microsoft Purview role from [Set up authentication](#set-up-authentication) section.
+>[!Note]
+>
+> A Disconnected status doesn't prevent you from using the catalog search feature within Azure Synapse; search continues to work if the data readers role is granted at the Microsoft Purview collection level.
+ ## Report lineage to Microsoft Purview Once you connect the Synapse workspace to a Microsoft Purview account, when you execute pipelines, Synapse reports lineage information to the Microsoft Purview account. For detailed supported capabilities and an end to end walkthrough, see [Metadata and lineage from Azure Synapse Analytics](../../purview/how-to-lineage-azure-synapse-analytics.md).
synapse-analytics How To Grant Workspace Managed Identity Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions.md
Select that same container or file system to grant the *Storage Blob Data Contri
| Setting | Value | | | |
- | Role | Storage Blob Contributor |
+ | Role | Storage Blob Data Contributor |
| Assign access to | MANAGEDIDENTITY | | Members | managed identity name |
synapse-analytics Apache Spark Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-autoscale.md
Apache Spark enables configuration of Dynamic Allocation of Executors through co
{ "conf" : { "spark.dynamicAllocation.maxExecutors" : "6",
- "spark.dynamicAllocation.enable": "true",
+ "spark.dynamicAllocation.enabled": "true",
"spark.dynamicAllocation.minExecutors": "2" } }
synapse-analytics Apache Spark Secure Credentials With Tokenlibrary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md
display(df.limit(10))
```python %%pyspark # Python code
-val source_full_storage_account_name = "teststorage.dfs.core.windows.net"
+source_full_storage_account_name = "teststorage.dfs.core.windows.net"
spark.conf.set(f"spark.storage.synapse.{source_full_storage_account_name}.linkedServiceName", "<lINKED SERVICE NAME>") spark.conf.set(f"fs.azure.account.auth.type.{source_full_storage_account_name}", "SAS") spark.conf.set(f"fs.azure.sas.token.provider.type.{source_full_storage_account_name}", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedSASProvider")
display(df.limit(10))
```python %%pyspark # Python code
-val source_full_storage_account_name = "teststorage.dfs.core.windows.net"
+source_full_storage_account_name = "teststorage.dfs.core.windows.net"
spark.conf.set(f"spark.storage.synapse.{source_full_storage_account_name}.linkedServiceName", "<LINKED SERVICE NAME>") spark.conf.set(f"fs.azure.account.oauth.provider.type.{source_full_storage_account_name}", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedTokenProvider")
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
Env.Help()
Get result: ```
-GetUserName(): returns user name
-GetUserId(): returns unique user id
-GetJobId(): returns job id
-GetWorkspaceName(): returns workspace name
-GetPoolName(): returns Spark pool name
-GetClusterId(): returns cluster id
+getUserName(): returns user name
+getUserId(): returns unique user id
+getJobId(): returns job id
+getWorkspaceName(): returns workspace name
+getPoolName(): returns Spark pool name
+getClusterId(): returns cluster id
``` ### Get user name
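In a PySpark notebook, these helpers are exposed through `mssparkutils.env`. A minimal sketch, assuming a Synapse Spark notebook session where `notebookutils` is available:

```python
from notebookutils import mssparkutils

# Query runtime details for the current Synapse Spark session.
print(mssparkutils.env.getUserName())      # signed-in user name
print(mssparkutils.env.getJobId())         # current job id
print(mssparkutils.env.getWorkspaceName()) # Synapse workspace name
print(mssparkutils.env.getPoolName())      # Spark pool name
print(mssparkutils.env.getClusterId())     # cluster id
```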
synapse-analytics Sql Data Warehouse Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-recommendations.md
You can [check your recommendations](https://aka.ms/Azureadvisor) today!
Data skew can cause additional data movement or resource bottlenecks when running your workload. The following documentation describes how to identify data skew and prevent it from happening by selecting an optimal distribution key. -- [Identify and remove skew](sql-data-warehouse-tables-distribute.md#how-to-tell-if-your-distribution-column-is-a-good-choice)
+- [Identify and remove skew](sql-data-warehouse-tables-distribute.md#how-to-tell-if-your-distribution-is-a-good-choice)
## No or outdated statistics
synapse-analytics Sql Data Warehouse Tables Distribute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md
Previously updated : 11/02/2021 Last updated : 08/09/2022
A hash-distributed table distributes table rows across the Compute nodes by usin
Since identical values always hash to the same distribution, SQL Analytics has built-in knowledge of the row locations. In dedicated SQL pool this knowledge is used to minimize data movement during queries, which improves query performance.
-Hash-distributed tables work well for large fact tables in a star schema. They can have very large numbers of rows and still achieve high performance. There are some design considerations that help you to get the performance the distributed system is designed to provide. Choosing a good distribution column is one such consideration that is described in this article.
+Hash-distributed tables work well for large fact tables in a star schema. They can have very large numbers of rows and still achieve high performance. There are some design considerations that help you to get the performance the distributed system is designed to provide. Choosing a good distribution column or columns is one such consideration that is described in this article.
Consider using a hash-distributed table when:
The tutorial [Load New York taxicab data](./load-data-from-azure-blob-storage-us
## Choose a distribution column
-A hash-distributed table has a distribution column that is the hash key. For example, the following code creates a hash-distributed table with ProductKey as the distribution column.
+A hash-distributed table has a distribution column or set of columns that is the hash key. For example, the following code creates a hash-distributed table with `ProductKey` as the distribution column.
```sql CREATE TABLE [dbo].[FactInternetSales]
WITH
); ```
-Data stored in the distribution column can be updated. Updates to data in the distribution column could result in data shuffle operation.
+Hash distribution can be applied on multiple columns for a more even distribution of the base table. Multi-column distribution will allow you to choose up to eight columns for distribution. This not only reduces the data skew over time but also improves query performance. For example:
-Choosing a distribution column is an important design decision since the values in this column determine how the rows are distributed. The best choice depends on several factors, and usually involves tradeoffs. Once a distribution column is chosen, you cannot change it.
+```sql
+CREATE TABLE [dbo].[FactInternetSales]
+( [ProductKey] int NOT NULL
+, [OrderDateKey] int NOT NULL
+, [CustomerKey] int NOT NULL
+, [PromotionKey] int NOT NULL
+, [SalesOrderNumber] nvarchar(20) NOT NULL
+, [OrderQuantity] smallint NOT NULL
+, [UnitPrice] money NOT NULL
+, [SalesAmount] money NOT NULL
+)
+WITH
+( CLUSTERED COLUMNSTORE INDEX
+, DISTRIBUTION = HASH([ProductKey], [OrderDateKey], [CustomerKey] , [PromotionKey])
+);
+```
+
+> [!NOTE]
+> Multi-column distribution is currently in preview for Azure Synapse Analytics. For more information on joining the preview, see multi-column distribution with [CREATE MATERIALIZED VIEW](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql), [CREATE TABLE](/sql/t-sql/statements/create-table-azure-sql-data-warehouse), or [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse).
-If you didn't choose the best column the first time, you can use [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to re-create the table with a different distribution column.
+<!-- Data stored in the distribution column(s) can be updated. Updates to data in distribution column(s) could result in data shuffle operation.-->
+
+Choosing distribution column(s) is an important design decision since the values in the hash column(s) determine how the rows are distributed. The best choice depends on several factors, and usually involves tradeoffs. Once a distribution column or column set is chosen, you cannot change it. If you didn't choose the best column(s) the first time, you can use [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to re-create the table with the desired distribution hash key.
### Choose a distribution column with data that distributes evenly
For best performance, all of the distributions should have approximately the sam
- Data skew means the data is not distributed evenly across the distributions - Processing skew means that some distributions take longer than others when running parallel queries. This can happen when the data is skewed.
-To balance the parallel processing, select a distribution column that:
+To balance the parallel processing, select a distribution column or set of columns that:
-- **Has many unique values.** The column can have duplicate values. All rows with the same value are assigned to the same distribution. Since there are 60 distributions, some distributions can have > 1 unique values while others may end with zero values. -- **Does not have NULLs, or has only a few NULLs.** For an extreme example, if all values in the column are NULL, all the rows are assigned to the same distribution. As a result, query processing is skewed to one distribution, and does not benefit from parallel processing.-- **Is not a date column**. All data for the same date lands in the same distribution. If several users are all filtering on the same date, then only 1 of the 60 distributions do all the processing work.
+- **Has many unique values.** The distribution column(s) can have duplicate values. All rows with the same value are assigned to the same distribution. Since there are 60 distributions, some distributions can have more than one unique value while others may end up with zero values.
+- **Does not have NULLs, or has only a few NULLs.** For an extreme example, if all values in the distribution column(s) are NULL, all the rows are assigned to the same distribution. As a result, query processing is skewed to one distribution, and does not benefit from parallel processing.
+- **Is not a date column**. All data for the same date lands in the same distribution, or will cluster records by date. If several users are all filtering on the same date (such as today's date), then only 1 of the 60 distributions do all the processing work.
### Choose a distribution column that minimizes data movement
-To get the correct query result queries might move data from one Compute node to another. Data movement commonly happens when queries have joins and aggregations on distributed tables. Choosing a distribution column that helps minimize data movement is one of the most important strategies for optimizing performance of your dedicated SQL pool.
+To get the correct query result queries might move data from one Compute node to another. Data movement commonly happens when queries have joins and aggregations on distributed tables. Choosing a distribution column or column set that helps minimize data movement is one of the most important strategies for optimizing performance of your dedicated SQL pool.
-To minimize data movement, select a distribution column that:
+To minimize data movement, select a distribution column or set of columns that:
-- Is used in `JOIN`, `GROUP BY`, `DISTINCT`, `OVER`, and `HAVING` clauses. When two large fact tables have frequent joins, query performance improves when you distribute both tables on one of the join columns. When a table is not used in joins, consider distributing the table on a column that is frequently in the `GROUP BY` clause.
+- Is used in `JOIN`, `GROUP BY`, `DISTINCT`, `OVER`, and `HAVING` clauses. When two large fact tables have frequent joins, query performance improves when you distribute both tables on one of the join columns. When a table is not used in joins, consider distributing the table on a column or column set that is frequently in the `GROUP BY` clause.
- Is *not* used in `WHERE` clauses. This could narrow the query to not run on all the distributions. - Is *not* a date column. `WHERE` clauses often filter by date. When this happens, all the processing could run on only a few distributions.
-### What to do when none of the columns are a good distribution column
-
-If none of your columns have enough distinct values for a distribution column, you can create a new column as a composite of one or more values. To avoid data movement during query execution, use the composite distribution column as a join column in queries.
-
-Once you design a hash-distributed table, the next step is to load data into the table. For loading guidance, see [Loading overview](design-elt-data-loading.md).
+Once you design a hash-distributed table, the next step is to load data into the table. For loading guidance, see [Loading overview](design-elt-data-loading.md).
-## How to tell if your distribution column is a good choice
+## How to tell if your distribution is a good choice
-After data is loaded into a hash-distributed table, check to see how evenly the rows are distributed across the 60 distributions. The rows per distribution can vary up to 10% without a noticeable impact on performance.
+After data is loaded into a hash-distributed table, check to see how evenly the rows are distributed across the 60 distributions. The rows per distribution can vary up to 10% without a noticeable impact on performance. Consider the following topics to evaluate your distribution column(s).
### Determine if the table has data skew
where two_part_name in
group by two_part_name having (max(row_count * 1.000) - min(row_count * 1.000))/max(row_count * 1.000) >= .10 )
-order by two_part_name, row_count
-;
+order by two_part_name, row_count;
``` ### Check query plans for data movement
-A good distribution column enables joins and aggregations to have minimal data movement. This affects the way joins should be written. To get minimal data movement for a join on two hash-distributed tables, one of the join columns needs to be the distribution column. When two hash-distributed tables join on a distribution column of the same data type, the join does not require data movement. Joins can use additional columns without incurring data movement.
+A good distribution column set enables joins and aggregations to have minimal data movement. This affects the way joins should be written. To get minimal data movement for a join on two hash-distributed tables, one of the join columns needs to be part of the distribution column or column set. When two hash-distributed tables join on a distribution column of the same data type, the join does not require data movement. Joins can use additional columns without incurring data movement.
To avoid data movement during a join:
It is not necessary to resolve all cases of data skew. Distributing data is a ma
To decide if you should resolve data skew in a table, you should understand as much as possible about the data volumes and queries in your workload. You can use the steps in the [Query monitoring](sql-data-warehouse-manage-monitor.md) article to monitor the impact of skew on query performance. Specifically, look for how long it takes large queries to complete on individual distributions.
-Since you cannot change the distribution column on an existing table, the typical way to resolve data skew is to re-create the table with a different distribution column.
+Since you cannot change the distribution column(s) on an existing table, the typical way to resolve data skew is to re-create the table with a different distribution column or column set.
+
+<a id="re-create-the-table-with-a-new-distribution-column"></a>
+### Re-create the table with a new distribution column set
-### Re-create the table with a new distribution column
+This example uses [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to re-create a table with a different hash distribution column or columns.
-This example uses [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to re-create a table with a different hash distribution column.
+First use `CREATE TABLE AS SELECT` (CTAS) to create the new table with the new distribution key. Then re-create the statistics and, finally, swap the tables by renaming them.
```sql CREATE TABLE [dbo].[FactInternetSales_CustomerKey]
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generation-2.md
Azure now offers generation 2 support for the following selected VM series:
|[NCv2-series](ncv2-series.md) | :heavy_check_mark: | :heavy_check_mark: | |[NCv3-series](ncv3-series.md) | :heavy_check_mark: | :heavy_check_mark: | |[NCasT4_v3-series](nct4-v3-series.md) | :heavy_check_mark: | :heavy_check_mark: |
+|[NC A100 v4-series](nc-a100-v4-series.md) | :x: | :heavy_check_mark: |
|[ND-series](nd-series.md) | :heavy_check_mark: | :heavy_check_mark: | |[ND A100 v4-series](nda100-v4-series.md) | :x: | :heavy_check_mark: | |[NDv2-series](ndv2-series.md) | :x: | :heavy_check_mark: |
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
Previously updated : 08/02/2022 Last updated : 08/23/2022
This article describes how to expand managed disks for a Linux virtual machine (
## Expand an Azure Managed Disk
-### Expand without downtime (preview)
+### Expand without downtime
You can now expand your managed disks without deallocating your VM.
-The preview for this has the following limitations:
+This feature has the following limitations:
[!INCLUDE [virtual-machines-disks-expand-without-downtime-restrictions](../../../includes/virtual-machines-disks-expand-without-downtime-restrictions.md)]
This article requires an existing VM in Azure with at least one data disk attach
In the following samples, replace example parameter names such as *myResourceGroup* and *myVM* with your own values. > [!IMPORTANT]
-> If you've enabled **LiveResize** and your disk meets the requirements in [Expand without downtime (preview)](#expand-without-downtime-preview), you can skip step 1 and 3.
+> If you've enabled **LiveResize** and your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
1. Operations on virtual hard disks can't be performed with the VM running. Deallocate your VM with [az vm deallocate](/cli/azure/vm#az-vm-deallocate). The following example deallocates the VM named *myVM* in the resource group named *myResourceGroup*:
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
This article covers how to share an Azure Compute Gallery with specific subscrip
> [!IMPORTANT] > Azure Compute Gallery ΓÇô direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). >
-> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). Creating VMs from a direct shared gallery is open to all Azure users.
+> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). No additional access is required to consume images; creating VMs from a direct shared gallery is open to all Azure users in the target subscription or tenant that the gallery is shared with.
> > During the preview, you need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery, the property can't currently be updated.
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
It's possible to use a different interpreter like Chocolatey or PowerShell, as l
## How updates are handled
-When you update an application version, the update command you provided during deployment will be used. If the updated version doesn't have an update command, then the current version will be removed and the new version will be installed.
+When you update an application version on a VM or VMSS, the update command you provided during deployment will be used. If the updated version doesn't have an update command, then the current version will be removed and the new version will be installed.
Update commands should be written with the expectation that it could be updating from any older version of the VM application.
virtual-machines Expand Os Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/expand-os-disk.md
Previously updated : 08/02/2022 Last updated : 08/23/2022
When you create a new virtual machine (VM) in a resource group by deploying an image from [Azure Marketplace](https://azure.microsoft.com/marketplace/), the default operating system (OS) disk is usually 127 GiB (some images have smaller OS disk sizes by default). You can add data disks to your VM (the amount depends on the VM SKU you selected) and we recommend installing applications and CPU-intensive workloads on data disks. You may need to expand the OS disk if you're supporting a legacy application that installs components on the OS disk or if you're migrating a physical PC or VM from on-premises that has a larger OS disk. This article covers expanding either OS disks or data disks. > [!IMPORTANT]
-> Unless you use [Expand without downtime (preview)](#expand-without-downtime-preview), expanding a data disk requires the VM to be deallocated.
+> Unless you use [Expand without downtime](#expand-without-downtime), expanding a data disk requires the VM to be deallocated.
> > Shrinking an existing disk isnΓÇÖt supported and may result in data loss. > > After expanding the disks, you need to [Expand the volume in the operating system](#expand-the-volume-in-the-operating-system) to take advantage of the larger disk.
-## Expand without downtime (preview)
+## Expand without downtime
You can now expand your data disks without deallocating your VM.
-The preview for this has the following limitations:
+This feature has the following limitations:
[!INCLUDE [virtual-machines-disks-expand-without-downtime-restrictions](../../../includes/virtual-machines-disks-expand-without-downtime-restrictions.md)]
Get-AzProviderFeature -FeatureName "LiveResize" -ProviderNamespace "Microsoft.Co
## Resize a managed disk in the Azure portal > [!IMPORTANT]
-> If you've enabled **LiveResize** and your disk meets the requirements in [Expand without downtime (preview)](#expand-without-downtime-preview), you can skip step 1. To expand a disk without downtime in the Azure portal, you must use the following link: [https://aka.ms/iaasexp/DiskLiveResize](https://aka.ms/iaasexp/DiskLiveResize)
+> If you've enabled **LiveResize** and your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1.
-1. In the [Azure portal](https://aka.ms/iaasexp/DiskLiveResize), go to the virtual machine in which you want to expand the disk. Select **Stop** to deallocate the VM.
+1. In the [Azure portal](https://portal.azure.com/), go to the virtual machine in which you want to expand the disk. Select **Stop** to deallocate the VM.
1. In the left menu under **Settings**, select **Disks**. :::image type="content" source="./media/expand-os-disk/select-disks.png" alt-text="Screenshot that shows the Disks option selected in the Settings section of the menu.":::
$vm = Get-AzVM -ResourceGroupName $rgName -Name $vmName
``` > [!IMPORTANT]
-> If you've enabled **LiveResize** and your disk meets the requirements in [expand without downtime (preview)](#expand-without-downtime-preview), you can skip step 4 and 6.
+> If you've enabled **LiveResize** and your disk meets the requirements in [expand without downtime](#expand-without-downtime), you can skip step 4 and 6.
Stop the VM before resizing the disk:
When you've expanded the disk for the VM, you need to go into the OS and expand
## Next steps
-You can also attach disks using the [Azure portal](attach-managed-disk-portal.md).
+You can also attach disks using the [Azure portal](attach-managed-disk-portal.md).
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
-# Azure Kubernetes Network Policies overview
+# Azure Kubernetes Network Policies
+## Overview
Network Policies provides micro-segmentation for pods just like Network Security Groups (NSGs) provide micro-segmentation for VMs. The Azure Network Policy Manager (also known as Azure NPM) implementation supports the standard Kubernetes Network Policy specification. You can use labels to select a group of pods and define a list of ingress and egress rules to filter traffic to and from these pods. Learn more about the Kubernetes network policies in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/). ![Kubernetes network policies overview](./media/kubernetes-network-policies/kubernetes-network-policies-overview.png)
-Azure NPM implementation works in conjunction with the Azure CNI that provides VNet integration for containers. NPM is supported only on Linux today. The implementation enforces traffic filtering by configuring allow and deny IP rules in Linux IPTables based on the defined policies. These rules are grouped together using Linux IPSets.
+Azure NPM implementation works in conjunction with the Azure CNI that provides VNet integration for containers. NPM is supported only on Linux and Windows Server 2022 today. The implementation enforces traffic filtering by configuring allow and deny IP rules in Linux IPTables or Windows HNS ACLPolicies based on the defined policies. These rules are grouped together using Linux IPSets or Windows HNS SetPolicies.
## Planning security for your Kubernetes cluster When implementing security for your cluster, use network security groups (NSGs) to filter traffic entering and leaving your cluster subnet (North-South traffic). Use Azure NPM for traffic between pods in your cluster (East-West traffic).
Azure NPM can be used in the following ways to provide micro-segmentation for po
### Azure Kubernetes Service (AKS) NPM is available natively in AKS and can be enabled at the time of cluster creation. Learn more about it in [Secure traffic between pods using network policies in Azure Kubernetes Service (AKS)](../aks/use-network-policies.md).
-### AKS-engine
-AKS-Engine is a tool that generates an Azure Resource Manager template for the deployment of a Kubernetes cluster in Azure. The cluster configuration is specified in a JSON file that is passed to the tool when generating the template. To learn more about the entire list of supported cluster settings and their descriptions, see Microsoft Azure Container Service Engine - Cluster Definition.
-
-To enable policies on clusters deployed using acs-engine, specify the value of the networkPolicy setting in the cluster definition file to be "azure".
-
-#### Example configuration
-
-The below JSON example configuration creates a new virtual network and subnet, and deploys a Kubernetes cluster in it with Azure CNI. We recommend that you use "Notepad" to edit the JSON file.
-```json
-{
- "apiVersion": "vlabs",
- "properties": {
- "orchestratorProfile": {
- "orchestratorType": "Kubernetes",
- "kubernetesConfig": {
- "networkPolicy": "azure"
- }
- },
- "masterProfile": {
- "count": 1,
- "dnsPrefix": "<specify a cluster name>",
- "vmSize": "Standard_D2s_v3"
- },
- "agentPoolProfiles": [
- {
- "name": "agentpool",
- "count": 2,
- "vmSize": "Standard_D2s_v3",
- "availabilityProfile": "AvailabilitySet"
- }
- ],
- "linuxProfile": {
- "adminUsername": "<specify admin username>",
- "ssh": {
- "publicKeys": [
- {
- "keyData": "<cut and paste your ssh key here>"
- }
- ]
- }
- },
- "servicePrincipalProfile": {
- "clientId": "<enter the client ID of your service principal here >",
- "secret": "<enter the password of your service principal here>"
- }
- }
-}
-
-```
### Do it yourself (DIY) Kubernetes clusters in Azure For DIY clusters, first install the CNI plug-in and enable it on every virtual machine in a cluster. For detailed instructions, see [Deploy the plug-in for a Kubernetes cluster that you deploy yourself](deploy-container-networking.md#deploy-plug-in-for-a-kubernetes-cluster). Once the cluster is deployed run the following `kubectl` command to download and apply the Azure NPM *daemon set* to the cluster.
+For Linux:
+
+ ```
+ kubectl apply -f https://github.com/Azure/azure-container-networking/blob/master/npm/azure-npm.yaml
```
- kubectl apply -f https://raw.githubusercontent.com/Azure/acs-engine/master/parts/k8s/addons/kubernetesmasteraddons-azure-npm-daemonset.yaml
+For Windows:
+
+ ```
+ kubectl apply -f https://github.com/Azure/azure-container-networking/blob/master/npm/examples/windows/azure-npm.yaml
```+ The solution is also open source and the code is available on the [Azure Container Networking repository](https://github.com/Azure/azure-container-networking/tree/master/npm). ## Monitor and Visualize Network Configurations with Azure NPM Azure NPM includes informative Prometheus metrics that allow you to monitor and better understand your configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. You can start collecting these metrics using either Azure Monitor or a Prometheus Server. ### Benefits of Azure NPM Metrics
-Users previously were only able to learn about their Network Configuration with the command `iptables -L` run inside a cluster node, which yields a verbose and difficult to understand output. NPM metrics provide the following benefits related to Network Policies, IPTables Rules, and IPSets.
-- Provides insight into the relationship between the three and a time dimension to debug a configuration.
-- Number of entries in all IPSets and each IPSet.
-- Time taken to apply a policy with IPTable/IPSet level granularity.
+Users previously were only able to learn about their network configuration with `iptables` and `ipset` commands run inside a cluster node, which yield verbose and difficult-to-understand output.
+
+Overall, the metrics provide:
+- counts of policies, ACL rules, ipsets, ipset entries, and entries in any given ipset
+- execution times for individual OS calls and for handling Kubernetes resource events (median, 90th percentile, and 99th percentile)
+- failure info for handling Kubernetes resource events (these fail when an OS call fails)
+
+#### Example Metrics Use Cases
+##### Alerts via a Prometheus AlertManager
+See a [configuration for these alerts](#set-up-alerts-for-alertmanager) below.
+1. Alert when NPM has a failure with an OS call or when translating a Network Policy.
+2. Alert when the median time to apply changes for a create event was more than 100 milliseconds.
+
+##### Visualizations and Debugging via our Grafana Dashboard or Azure Monitor Workbook
+1. See how many IPTables rules your policies create (a very large number of IPTables rules may slightly increase latency).
+2. Correlate cluster counts (e.g. ACLs) to execution times.
+3. Get the human-friendly name of an ipset in a given IPTables rule (e.g. "azure-npm-487392" represents "podlabel-role:database").
-### Supported Metrics
-Following is the list of supported metrics:
-
-|Metric Name |Description |Prometheus Metric Type |Labels |
-|||||
-|`npm_num_policies` |number of network policies |Gauge |- |
-|`npm_num_iptables_rules` | number of IPTables rules | Gauge |- |
-|`npm_num_ipsets` |number of IPSets |Gauge |- |
-|`npm_num_ipset_entries` |number of IP address entries in all IPSets |Gauge |- |
-|`npm_add_policy_exec_time` |runtime for adding a network policy |Summary |quantile (0.5, 0.9, or 0.99) |
-|`npm_add_iptables_rule_exec_time` |runtime for adding an IPTables rule |Summary |quantile (0.5, 0.9, or 0.99) |
-|`npm_add_ipset_exec_time` |runtime for adding an IPSet |Summary |quantile (0.5, 0.9, or 0.99) |
-|`npm_ipset_counts` (advanced) |number of entries within each individual IPSet |GaugeVec |set name & hash |
-
-The different quantile levels in "exec_time" metrics help you differentiate between the general and worst case scenarios.
-
-There's also an "exec_time_count" and "exec_time_sum" metric for each "exec_time" Summary metric.
-
-The metrics can be scraped through Container insights or through Prometheus.
-
-### Setup for Azure Monitor
-The first step is to enable Container insights for your Kubernetes cluster. Steps can be found in [Container insights Overview](../azure-monitor/containers/container-insights-overview.md). Once you have Container insights enabled, configure the [Container insights ConfigMap](https://aka.ms/container-azm-ms-agentconfig) to enable NPM integration and collection of Prometheus NPM metrics. Container insights ConfigMap has an ```integrations``` section with settings to collect NPM metrics. These settings are disabled by default in the ConfigMap. Enabling the basic setting ```collect_basic_metrics = true```, will collect basic NPM metrics. Enabling advanced setting ```collect_advanced_metrics = true``` will collect advanced metrics in addition to basic metrics.
+### All supported metrics
+The following is the list of supported metrics. Any `quantile` label has possible values `0.5`, `0.9`, and `0.99`. Any `had_error` label has possible values `false` and `true`, representing whether the operation succeeded or failed.
+
+| Metric Name | Description | Prometheus Metric Type | Labels |
+| -- | -- | -- | -- |
+| `npm_num_policies` | number of network policies | Gauge | - |
+| `npm_num_iptables_rules` | number of IPTables rules | Gauge | - |
+| `npm_num_ipsets` | number of IPSets | Gauge | - |
+| `npm_num_ipset_entries` | number of IP address entries in all IPSets | Gauge | - |
+| `npm_add_iptables_rule_exec_time` | runtime for adding an IPTables rule | Summary | `quantile` |
+| `npm_add_ipset_exec_time` | runtime for adding an IPSet | Summary | `quantile` |
+| `npm_ipset_counts` (advanced) | number of entries within each individual IPSet | GaugeVec | `set_name` & `set_hash` |
+| `npm_add_policy_exec_time` | runtime for adding a network policy | Summary | `quantile` & `had_error` |
+| `npm_controller_policy_exec_time` | runtime for updating/deleting a network policy | Summary | `quantile` & `had_error` & `operation` (with values `update` or `delete`) |
+| `npm_controller_namespace_exec_time` | runtime for creating/updating/deleting a namespace | Summary | `quantile` & `had_error` & `operation` (with values `create`, `update`, or `delete`) |
+| `npm_controller_pod_exec_time` | runtime for creating/updating/deleting a pod | Summary | `quantile` & `had_error` & `operation` (with values `create`, `update`, or `delete`) |
+
+There are also "exec_time_count" and "exec_time_sum" metrics for each "exec_time" Summary metric.
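For example, the "exec_time_sum" and "exec_time_count" series can be combined to track an average. The recording rule below is a sketch (the rule name is arbitrary) that computes the average time spent adding a network policy over the last 5 minutes:

```
groups:
- name: npm.records
  rules:
  # Average policy-add time over the last 5 minutes, derived from the
  # Summary metric's _sum and _count series.
  - record: npm:add_policy_exec_time:avg_5m
    expr: rate(npm_add_policy_exec_time_sum[5m]) / rate(npm_add_policy_exec_time_count[5m])
```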
+
+The metrics can be scraped through Azure Monitor for containers or through Prometheus.
+
+### Set up for Azure Monitor
+The first step is to enable Azure Monitor for containers for your Kubernetes cluster. Steps can be found in [Azure Monitor for containers Overview](../azure-monitor/containers/container-insights-overview.md). Once you have Azure Monitor for containers enabled, configure the [Azure Monitor for containers ConfigMap](https://aka.ms/container-azm-ms-agentconfig) to enable NPM integration and collection of Prometheus NPM metrics. The Azure Monitor for containers ConfigMap has an ```integrations``` section with settings to collect NPM metrics. These settings are disabled by default in the ConfigMap. Enabling the basic setting ```collect_basic_metrics = true``` collects basic NPM metrics. Enabling the advanced setting ```collect_advanced_metrics = true``` collects advanced metrics in addition to basic metrics.
After editing the ConfigMap, save it locally and apply the ConfigMap to your cluster as follows. `kubectl apply -f container-azm-ms-agentconfig.yaml`
-Below is a snippet from the [Container insights ConfigMap](https://aka.ms/container-azm-ms-agentconfig), which shows the NPM integration enabled with advanced metrics collection.
+Below is a snippet from the [Azure Monitor for containers ConfigMap](https://aka.ms/container-azm-ms-agentconfig), which shows the NPM integration enabled with advanced metrics collection.
``` integrations: |- [integrations.azure_network_policy_manager]
integrations: |-
``` Advanced metrics are optional, and turning them on will automatically turn on basic metrics collection. Advanced metrics currently include only `npm_ipset_counts`
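Putting the two settings together, the integrations section takes roughly the following shape (a minimal sketch; confirm the exact layout and indentation against the linked ConfigMap):

```
integrations: |-
    [integrations.azure_network_policy_manager]
        collect_basic_metrics = false
        collect_advanced_metrics = true
```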
-Learn more about [Container insights collection settings in config map](../azure-monitor/containers/container-insights-agent-config.md)
+Learn more about [Azure Monitor for containers collection settings in config map](../azure-monitor/containers/container-insights-agent-config.md)
### Visualization Options for Azure Monitor Once NPM metrics collection is enabled, you can view the metrics in the Azure portal using Container Insights or in Grafana.
Set up your Grafana Server and configure a Log Analytics Data Source as describe
The dashboard has visuals similar to the Azure Workbook. You can add panels to chart and visualize NPM metrics from the InsightsMetrics table.
-### Setup for Prometheus Server
-Some users may choose to collect metrics with a Prometheus Server instead of Container insights. You merely need to add two jobs to your scrape config to collect NPM metrics.
+### Set up for Prometheus Server
+Some users may choose to collect metrics with a Prometheus Server instead of Azure Monitor for containers. You merely need to add two jobs to your scrape config to collect NPM metrics.
To install a simple Prometheus Server, add this helm repo on your cluster ```
You can also replace the `azure-npm-node-metrics` job with the content below or
target_label: __address__ ```
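As a rough sketch of what such a job can look like, the configuration below discovers NPM pods and scrapes them directly. The namespace, pod label, metrics path, and port used here are assumptions for illustration; take the authoritative job definitions from the scrape config referenced above.

```
scrape_configs:
- job_name: azure-npm-node-metrics
  metrics_path: /node-metrics            # assumed path
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names: [kube-system]               # assumed namespace for the NPM daemon set
  relabel_configs:
  # Keep only NPM pods (the label value is an assumption).
  - source_labels: [__meta_kubernetes_pod_label_k8s_app]
    regex: azure-npm
    action: keep
  # Point the scrape target at the assumed NPM metrics port.
  - source_labels: [__meta_kubernetes_pod_ip]
    regex: (.+)
    target_label: __address__
    replacement: "${1}:10091"            # assumed port
```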
+#### Set up Alerts for AlertManager
+If you use a Prometheus Server, you can set up an AlertManager like so. Here is an example config for [the two alerting rules described above](#alerts-via-a-prometheus-alertmanager):
+```
+groups:
+- name: npm.rules
+ rules:
+ # fire when NPM has a new failure with an OS call or when translating a Network Policy (suppose there's a scraping interval of 5m)
+ - alert: AzureNPMFailureCreatePolicy
+ # this expression says to grab the current count minus the count 5 minutes ago, or grab the current count if there was no data 5 minutes ago
+ expr: (npm_add_policy_exec_time_count{had_error='true'} - (npm_add_policy_exec_time_count{had_error='true'} offset 5m)) or npm_add_policy_exec_time_count{had_error='true'}
+ labels:
+ severity: warning
+ addon: azure-npm
+ annotations:
+ summary: "Azure NPM failed to handle a policy create event"
+ description: "Current failure count since NPM started: {{ $value }}"
+ # fire when the median time to apply changes for a pod create event is more than 100 milliseconds.
+ - alert: AzureNPMHighControllerPodCreateTimeMedian
+ expr: topk(1, npm_controller_pod_exec_time{operation="create",quantile="0.5",had_error="false"}) > 100.0
+ labels:
+ severity: warning
+ addon: azure-npm
+ annotations:
+ summary: "Azure NPM controller pod create time median > 100.0 ms"
+ # could have a simpler description like the one for the alert above,
+ # but this description includes the number of pod creates that were handled in the past 10 minutes,
+ # which is the retention period for observations when calculating quantiles for a Prometheus Summary metric
+ description: "value: [{{ $value }}] and observation count: [{{ printf `(npm_controller_pod_exec_time_count{operation='create',pod='%s',had_error='false'} - (npm_controller_pod_exec_time_count{operation='create',pod='%s',had_error='false'} offset 10m)) or npm_controller_pod_exec_time_count{operation='create',pod='%s',had_error='false'}` $labels.pod $labels.pod $labels.pod | query | first | value }}] for pod: [{{ $labels.pod }}]"
+```
### Visualization Options for Prometheus When using a Prometheus Server, only the Grafana dashboard is supported.
Following are some sample dashboards for NPM metrics in Container Insights (CI) a
## Next steps - Learn about [Azure Kubernetes Service](../aks/intro-kubernetes.md). - Learn about [container networking](container-networking-overview.md).-- [Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers.
+- [Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers.