Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-domain-services | Ad Auth No Join Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/ad-auth-no-join-linux-vm.md | The final step is to check that the flow works properly. To check this, try logging in as a domain user: `[centosuser@centos8 ~]$ su - ADUser@contoso.com` `Last login: Wed Oct 12 15:13:39 UTC 2022 on pts/0` `[ADUser@Centos8 ~]$ exit` Now you are ready to use AD authentication on your Linux VM. <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization.md +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory.md [create-azure-ad-ds-instance]: tutorial-create-instance.md |
active-directory-domain-services | Administration Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/administration-concepts.md | To get started, [create a Domain Services managed domain][create-instance]. [password-policy]: password-policy.md [hybrid-phs]: tutorial-configure-password-hash-sync.md#enable-synchronization-of-password-hashes [secure-domain]: secure-your-domain.md-[azure-ad-password-sync]: ../active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md#password-hash-sync-process-for-azure-ad-domain-services +[azure-ad-password-sync]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization#password-hash-sync-process-for-azure-ad-domain-services [create-instance]: tutorial-create-instance.md [tutorial-create-instance-advanced]: tutorial-create-instance-advanced.md [concepts-forest]: ./concepts-forest-trust.md |
active-directory-domain-services | Alert Ldaps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-ldaps.md | Create a replacement secure LDAP certificate by following the steps to create a certificate for secure LDAP. If you still have issues, [open an Azure support request][azure-support] for more troubleshooting help. <!-- INTERNAL LINKS -->-[azure-support]: ../active-directory/fundamentals/how-to-get-support.md +[azure-support]: /azure/active-directory/fundamentals/how-to-get-support |
active-directory-domain-services | Alert Nsg | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-nsg.md | It takes a few moments for the security rule to be added and appear in the list. If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance. <!-- INTERNAL LINKS -->-[azure-support]: ../active-directory/fundamentals/how-to-get-support.md -[configure-ldaps]: tutorial-configure-ldaps.md +[azure-support]: /azure/active-directory/fundamentals/how-to-get-support +[configure-ldaps]: ./tutorial-configure-ldaps.md |
active-directory-domain-services | Alert Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md | -[Service principals](../active-directory/develop/app-objects-and-service-principals.md) are applications that the Azure platform uses to manage, update, and maintain a Microsoft Entra Domain Services managed domain. If a service principal is deleted, functionality in the managed domain is impacted. +[Service principals](/azure/active-directory/develop/app-objects-and-service-principals) are applications that the Azure platform uses to manage, update, and maintain a Microsoft Entra Domain Services managed domain. If a service principal is deleted, functionality in the managed domain is impacted. This article helps you troubleshoot and resolve service principal-related configuration alerts. After you delete both applications, the Azure platform automatically recreates t If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance. <!-- INTERNAL LINKS -->-[azure-support]: ../active-directory/fundamentals/how-to-get-support.md +[azure-support]: /azure/active-directory/fundamentals/how-to-get-support <!-- EXTERNAL LINKS --> [New-AzureAdServicePrincipal]: /powershell/module/azuread/new-azureadserviceprincipal |
active-directory-domain-services | Change Sku | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/change-sku.md | It can take a minute or two to change the SKU type. If you have a resource forest and want to create additional trusts after the SKU change, see [Create an outbound forest trust to an on-premises domain in Domain Services][create-trust]. <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [concepts-sku]: administration-concepts.md#azure-ad-ds-skus [create-trust]: tutorial-create-forest-trust.md |
active-directory-domain-services | Check Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/check-health.md | This article shows you how to view the Domain Services health status and understand the alerts it reports. The health status for a managed domain is viewed using the Microsoft Entra admin center. Information on the last backup time and synchronization with Microsoft Entra ID can be seen, along with any alerts that indicate a problem with the managed domain's health. To view the health status for a managed domain, complete the following steps: -1. Sign in to [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator). +1. Sign in to [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator). 1. Search for and select **Microsoft Entra Domain Services**. 1. Select your managed domain, such as *aaddscontoso.com*. 1. On the left-hand side of the Domain Services resource window, select **Health**. The following example screenshot shows a healthy managed domain and the status of the last backup and Azure AD synchronization: Health status alerts are categorized into the following levels of severity: For more information on alerts that are shown in the health status page, see [Resolve alerts on your managed domain][troubleshoot-alerts]. <!-- INTERNAL LINKS -->-[azure-support]: ../active-directory/fundamentals/how-to-get-support.md +[azure-support]: /azure/active-directory/fundamentals/how-to-get-support [troubleshoot-alerts]: troubleshoot-alerts.md |
active-directory-domain-services | Compare Identity Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/compare-identity-solutions.md | You can also learn more about [manage-gpos]: manage-group-policy.md [tutorial-ldaps]: tutorial-configure-ldaps.md [tutorial-create]: tutorial-create-instance.md-[whatis-azuread]: ../active-directory/fundamentals/whatis.md +[whatis-azuread]: /azure/active-directory/fundamentals/whatis [overview-adds]: /windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview [create-forest-trust]: tutorial-create-forest-trust.md [administration-concepts]: administration-concepts.md |
active-directory-domain-services | Concepts Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-custom-attributes.md | Select **+ Add** to choose which custom attributes to synchronize. The list shows the available extension properties. If you don't see the directory extension you are looking for, enter the extension’s associated application appId and click **Search** to load only that application’s defined extension properties. This search helps when multiple applications define many extensions in your tenant. >[!NOTE]->If you would like to see directory extensions synchronized by Microsoft Entra Connect, click **Enterprise App** and look for the Application ID of the **Tenant Schema Extension App**. For more information, see [Microsoft Entra Connect Sync: Directory extensions](../active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions.md#configuration-changes-in-azure-ad-made-by-the-wizard). +>If you would like to see directory extensions synchronized by Microsoft Entra Connect, click **Enterprise App** and look for the Application ID of the **Tenant Schema Extension App**. For more information, see [Microsoft Entra Connect Sync: Directory extensions](/azure/active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions#configuration-changes-in-azure-ad-made-by-the-wizard). Click **Select**, and then **Save** to confirm the change. To check the backfilling status, click **Domain Services Health** and verify the status. ## Next steps -To configure onPremisesExtensionAttributes or directory extensions for cloud-only users in Microsoft Entra ID, see [Custom data options in Microsoft Graph](/graph/extensibility-overview?tabs=http#custom-data-options-in-microsoft-graph). +To configure onPremisesExtensionAttributes or directory extensions for cloud-only users in Microsoft Entra ID, see [Custom data options in Microsoft Graph](/graph/extensibility-overview?tabs=http#custom-data-options-in-microsoft-graph). -To sync onPremisesExtensionAttributes or directory extensions from on-premises to Microsoft Entra ID, [configure Microsoft Entra Connect](../active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions.md). +To sync onPremisesExtensionAttributes or directory extensions from on-premises to Microsoft Entra ID, [configure Microsoft Entra Connect](/azure/active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions). |
active-directory-domain-services | Create Forest Trust Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md | To complete this article, you need the following resources and privileges: * Install and configure Azure AD PowerShell. * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Microsoft Entra ID](/powershell/azure/active-directory/install-adv2). * Make sure that you sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. -* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#contributor) Azure role to create the required Domain Services resources. +* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. +* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#contributor) Azure role to create the required Domain Services resources. ## Sign in to the Microsoft Entra admin center Before you start, make sure you understand the network considerations and recommendations. 1. Create the hybrid connectivity from your on-premises network to Azure using an Azure VPN or Azure ExpressRoute connection. The hybrid network configuration is beyond the scope of this documentation, and may already exist in your environment. For details on specific scenarios, see the following articles: - * [Azure Site-to-Site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). - * [Azure ExpressRoute Overview](../expressroute/expressroute-introduction.md). + * [Azure Site-to-Site VPN](/azure/vpn-gateway/vpn-gateway-about-vpngateways). + * [Azure ExpressRoute Overview](/azure/expressroute/expressroute-introduction). > [!IMPORTANT] > If you create the connection directly to your managed domain's virtual network, use a separate gateway subnet. Don't create the gateway in the managed domain's subnet. You should have a Windows Server virtual machine joined to the managed domain. 1. Connect to the Windows Server VM joined to the managed domain using Remote Desktop and your managed domain administrator credentials. If you get a Network Level Authentication (NLA) error, check that the user account you used is not a domain user account. > [!TIP]- > To securely connect to your VMs joined to Microsoft Entra Domain Services, you can use the [Azure Bastion Host Service](../bastion/bastion-overview.md) in supported Azure regions. + > To securely connect to your VMs joined to Microsoft Entra Domain Services, you can use the [Azure Bastion Host Service](/azure/bastion/bastion-overview) in supported Azure regions. 1. Open a command prompt and use the `whoami` command to show the distinguished name of the currently authenticated user: Using the Windows Server VM joined to the managed domain, you can test the scenario. 1. Connect to the Windows Server VM joined to the managed domain using Remote Desktop and your managed domain administrator credentials. If you get a Network Level Authentication (NLA) error, check that the user account you used is not a domain user account. > [!TIP]- > To securely connect to your VMs joined to Microsoft Entra Domain Services, you can use the [Azure Bastion Host Service](../bastion/bastion-overview.md) in supported Azure regions. + > To securely connect to your VMs joined to Microsoft Entra Domain Services, you can use the [Azure Bastion Host Service](/azure/bastion/bastion-overview) in supported Azure regions. 1. Open **Windows Settings**, then search for and select **Network and Sharing Center**. 1. Choose the option for **Change advanced sharing** settings. For more conceptual information about forest types in Domain Services, see [How trust relationships work for forests in Active Directory][concepts-trust]. <!-- INTERNAL LINKS --> [concepts-trust]: concepts-forest-trust.md-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance-advanced]: tutorial-create-instance-advanced.md [Connect-AzAccount]: /powershell/module/az.accounts/connect-azaccount [Connect-AzureAD]: /powershell/module/azuread/connect-azuread [New-AzResourceGroup]: /powershell/module/az.resources/new-azresourcegroup-[network-peering]: ../virtual-network/virtual-network-peering-overview.md -[New-AzureADServicePrincipal]: /powershell/module/AzureAD/New-AzureADServicePrincipal +[network-peering]: /azure/virtual-network/virtual-network-peering-overview +[New-AzureADServicePrincipal]: /powershell/module/azuread/new-azureadserviceprincipal [Get-AzureRMSubscription]: /powershell/module/azurerm.profile/get-azurermsubscription [Install-Script]: /powershell/module/powershellget/install-script |
active-directory-domain-services | Create Gmsa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-gmsa.md | Applications and services can now be configured to use the gMSA as needed. For more information about gMSAs, see [Getting started with group managed service accounts][gmsa-start]. <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [tutorial-create-management-vm]: tutorial-create-management-vm.md [create-custom-ou]: create-ou.md |
active-directory-domain-services | Create Ou | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-ou.md | For more information on using the administrative tools or creating and using service accounts, see the following articles: * [Service Accounts Step-by-Step Guide](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd548356(v=ws.10)) <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [tutorial-create-management-vm]: tutorial-create-management-vm.md [connect-windows-server-vm]: join-windows-vm.md#connect-to-the-windows-server-vm |
active-directory-domain-services | Delete Aadds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/delete-aadds.md | This article shows you how to use the Microsoft Entra admin center to delete a managed domain. To delete a managed domain, complete the following steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator). 1. Search for and select **Microsoft Entra Domain Services**. 1. Select the name of your managed domain, such as *aaddscontoso.com*. 1. On the **Overview** page, select **Delete**. To confirm the deletion, type the domain name of the managed domain again, then select **Delete**. |
active-directory-domain-services | Deploy Azure App Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-azure-app-proxy.md | -If you're new to the Microsoft Entra application proxy and want to learn more, see [How to provide secure remote access to internal applications](../active-directory/app-proxy/application-proxy.md). +If you're new to the Microsoft Entra application proxy and want to learn more, see [How to provide secure remote access to internal applications](/azure/active-directory/app-proxy/application-proxy). This article shows you how to create and configure a Microsoft Entra application proxy connector to provide secure access to applications in a managed domain. To create a VM for the Microsoft Entra application proxy connector, complete the following steps. Perform the following steps to download the Microsoft Entra application proxy connector. The setup file you download is copied to your App Proxy VM in the next section. -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator). 1. Search for and select **Enterprise applications**. 1. Select **Application proxy** from the menu on the left-hand side. To create your first connector and enable App Proxy, select the link to **download a connector**. 1. On the download page, accept the license terms and privacy agreement, then select **Accept terms & Download**. With a VM ready to be used as the Microsoft Entra application proxy connector, now install and register the connector. > For example, if the Microsoft Entra domain is *contoso.com*, the global administrator should be `admin@contoso.com` or another valid alias on that domain. * If Internet Explorer Enhanced Security Configuration is turned on for the VM where you install the connector, the registration screen might be blocked. To allow access, follow the instructions in the error message, or turn off Internet Explorer Enhanced Security during the install process.- * If connector registration fails, see [Troubleshoot Application Proxy](../active-directory/app-proxy/application-proxy-troubleshoot.md). + * If connector registration fails, see [Troubleshoot Application Proxy](/azure/active-directory/app-proxy/application-proxy-troubleshoot). 1. At the end of the setup, a note is shown for environments with an outbound proxy. To configure the Microsoft Entra application proxy connector to work through the outbound proxy, run the provided script, such as `C:\Program Files\Microsoft AAD App Proxy connector\ConfigureOutBoundProxy.ps1`. 1. On the Application proxy page in the Microsoft Entra admin center, the new connector is listed with a status of *Active*, as shown in the following example: If you deploy multiple Microsoft Entra application proxy connectors, you must configure outbound proxy settings on each connector. ## Next steps -With the Microsoft Entra application proxy integrated with Domain Services, publish applications for users to access. For more information, see [publish applications using Microsoft Entra application proxy](/azure/active-directory/app-proxy/application-proxy-add-on-premises-application). <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md-[azure-bastion]: ../bastion/tutorial-create-host-portal.md +[azure-bastion]: /azure/bastion/tutorial-create-host-portal [Get-ADComputer]: /powershell/module/activedirectory/get-adcomputer [Set-ADComputer]: /powershell/module/activedirectory/set-adcomputer |
active-directory-domain-services | Deploy Kcd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-kcd.md | In this scenario, let's assume you have a web app that runs as a service account To learn more about how delegation works in Active Directory Domain Services, see [Kerberos Constrained Delegation Overview][kcd-technet]. <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md [tutorial-create-management-vm]: tutorial-create-management-vm.md |
active-directory-domain-services | Deploy Sp Profile Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-sp-profile-sync.md | From your Domain Services management VM, complete the following steps: <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [tutorial-create-management-vm]: tutorial-create-management-vm.md |
active-directory-domain-services | Fleet Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/fleet-metrics.md | The following table describes the metrics that are available for Domain Services. ## Azure Monitor alert -You can configure metric alerts for Domain Services to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](../azure-monitor/alerts/alerts-overview.md). +You can configure metric alerts for Domain Services to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](/azure/azure-monitor/alerts/alerts-overview). -To view and manage Azure Monitor alerts, a user needs to be assigned [Azure Monitor roles](../azure-monitor/roles-permissions-security.md). +To view and manage Azure Monitor alerts, a user needs to be assigned [Azure Monitor roles](/azure/azure-monitor/roles-permissions-security). In Azure Monitor or Domain Services Metrics, click **New alert** and configure a Domain Services instance as the scope. Then choose the metrics you want to measure from the list of available signals: |
active-directory-domain-services | How To Data Retrieval | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/how-to-data-retrieval.md | You can create a user in the Microsoft Entra admin center or by using Graph PowerShell. You can create a new user using the Microsoft Entra admin center. To add a new user, follow these steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../active-directory/roles/permissions-reference.md#user-administrator). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](/azure/active-directory/roles/permissions-reference#user-administrator). 1. Browse to **Identity** > **Users**, and then select **New user**. |
active-directory-domain-services | Join Centos Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-centos-linux-vm.md | If you have an existing CentOS Linux VM in Azure, connect to it using SSH, then continue to the next step. If you need to create a CentOS Linux VM, or want to create a test VM for use with this article, you can use one of the following methods: -* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md) -* [Azure CLI](../virtual-machines/linux/quick-create-cli.md) -* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md) +* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal) +* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli) +* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell) When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain: Now that the required packages are installed on the VM, join the VM to the managed domain. * Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available. * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain. -1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md). +1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups). Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain: To verify that the VM has been successfully joined to the managed domain, start a new SSH connection using a domain user account. If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md |
active-directory-domain-services | Join Coreos Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-coreos-linux-vm.md | If you have an existing CoreOS Linux VM in Azure, connect to it using SSH, then continue to the next step. If you need to create a CoreOS Linux VM, or want to create a test VM for use with this article, you can use one of the following methods: -* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md) -* [Azure CLI](../virtual-machines/linux/quick-create-cli.md) -* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md) +* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal) +* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli) +* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell) When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain: With the SSSD configuration file updated, now join the virtual machine to the managed domain. * Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available. * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain. -1. Now join the VM to the managed domain using the `adcli join` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md). +1. Now join the VM to the managed domain using the `adcli join` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups). Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain. To verify that the VM has been successfully joined to the managed domain, start a new SSH connection using a domain user account. If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md |
active-directory-domain-services | Join Rhel Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-rhel-linux-vm.md | If you have an existing RHEL Linux VM in Azure, connect to it using SSH, then continue to the next step. If you need to create a RHEL Linux VM, or want to create a test VM for use with this article, you can use one of the following methods: -* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md) -* [Azure CLI](../virtual-machines/linux/quick-create-cli.md) -* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md) +* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal) +* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli) +* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell) When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain: Now that the required packages are installed on the VM, join the VM to the managed domain. * Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available. * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain. -1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md). +1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups). Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain: To verify that the VM has been successfully joined to the managed domain, start a new SSH connection using a domain user account. If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md |
active-directory-domain-services | Join Suse Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-suse-linux-vm.md | If you have an existing SLE Linux VM in Azure, connect to it using SSH, then continue to the next step. If you need to create a SLE Linux VM, or want to create a test VM for use with this article, you can use one of the following methods: -* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md) -* [Azure CLI](../virtual-machines/linux/quick-create-cli.md) -* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md) +* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal) +* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli) +* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell) When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain: To join the VM to the managed domain, complete the following steps: ![Example screenshot of the Active Directory enrollment window in YaST](./media/join-suse-linux-vm/enroll-window.png) -1. In the dialog, specify the *Username* and *Password* of a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md). +1. In the dialog, specify the *Username* and *Password* of a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups). To make sure that the current domain is enabled for Samba, activate *Overwrite Samba configuration to work with this AD*. To verify that the VM has been successfully joined to the managed domain, start a new SSH connection using a domain user account. If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md |
active-directory-domain-services | Join Ubuntu Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-ubuntu-linux-vm.md | If you have an existing Ubuntu Linux VM in Azure, connect to it using SSH, then continue to the next step. If you need to create an Ubuntu Linux VM, or want to create a test VM for use with this article, you can use one of the following methods: -* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md) -* [Azure CLI](../virtual-machines/linux/quick-create-cli.md) -* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md) +* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal) +* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli) +* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell) When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain: Now that the required packages are installed on the VM and NTP is configured, join the VM to the managed domain. * Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available. * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain. -1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md). +1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups). Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain: To verify that the VM has been successfully joined to the managed domain, start a new SSH connection using a domain user account. If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md |
active-directory-domain-services | Join Windows Vm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm-template.md | It takes a few moments for the deployment to complete successfully. When finished, the VM is joined to the managed domain. In this article, you used the Azure portal to configure and deploy resources using templates. You can also deploy resources with Resource Manager templates using [Azure PowerShell][deploy-powershell] or the [Azure CLI][deploy-cli]. <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md-[template-overview]: ../azure-resource-manager/templates/overview.md -[deploy-powershell]: ../azure-resource-manager/templates/deploy-powershell.md -[deploy-cli]: ../azure-resource-manager/templates/deploy-cli.md +[template-overview]: /azure/azure-resource-manager/templates/overview +[deploy-powershell]: /azure/azure-resource-manager/templates/deploy-powershell +[deploy-cli]: /azure/azure-resource-manager/templates/deploy-cli |
active-directory-domain-services | Join Windows Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm.md | To administer your managed domain, configure a management VM using the Active Directory Administrative Center. > [Install administration tools on a management VM](tutorial-create-management-vm.md) <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md-[vnet-peering]: ../virtual-network/virtual-network-peering-overview.md +[vnet-peering]: /azure/virtual-network/virtual-network-peering-overview [password-sync]: ./tutorial-create-instance.md [add-computer]: /powershell/module/microsoft.powershell.management/add-computer-[azure-bastion]: ../bastion/tutorial-create-host-portal.md +[azure-bastion]: /azure/bastion/tutorial-create-host-portal [set-azvmaddomainextension]: /powershell/module/az.compute/set-azvmaddomainextension |
active-directory-domain-services | Manage Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-dns.md | Name resolution of the resources in other namespaces from VMs connected to the managed domain should also work correctly. For more information about managing DNS, see the [DNS tools article on TechNet](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753579(v=ws.11)). <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md-[expressroute]: ../expressroute/expressroute-introduction.md -[vpn-gateway]: ../vpn-gateway/vpn-gateway-about-vpngateways.md +[expressroute]: /azure/expressroute/expressroute-introduction +[vpn-gateway]: /azure/vpn-gateway/vpn-gateway-about-vpngateways [create-join-windows-vm]: join-windows-vm.md [tutorial-create-management-vm]: tutorial-create-management-vm.md [connect-windows-server-vm]: join-windows-vm.md#connect-to-the-windows-server-vm |
active-directory-domain-services | Manage Group Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-group-policy.md | In a hybrid environment, group policies configured in an on-premises AD DS environment aren't synchronized to Domain Services. This article shows you how to install the Group Policy Management tools, then edit the built-in GPOs and create custom GPOs. If you are interested in server management strategy, including machines in Azure and-[hybrid connected](../azure-arc/servers/overview.md), +[hybrid connected](/azure/azure-arc/servers/overview), consider reading about the-[guest configuration](../governance/machine-configuration/overview.md) +[guest configuration](/azure/governance/machine-configuration/overview) feature of-[Azure Policy](../governance/policy/overview.md). +[Azure Policy](/azure/governance/policy/overview). ## Before you begin To group similar policy settings, you often create additional GPOs instead of applying all settings in one large GPO. For more information on the available Group Policy settings that you can configure using the Group Policy Management Console, see [Work with Group Policy preference items][group-policy-console]. <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md [tutorial-create-management-vm]: tutorial-create-management-vm.md |
active-directory-domain-services | Mismatched Tenant Error | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/mismatched-tenant-error.md | The managed domain and the virtual network belong to two different Microsoft Entra directories. The following two options resolve the mismatched directory error: * First, [delete the managed domain](delete-aadds.md) from your existing Microsoft Entra directory. Then, [create a replacement managed domain](tutorial-create-instance.md) in the same Microsoft Entra directory as the virtual network you wish to use. When ready, join all machines previously joined to the deleted domain to the recreated managed domain.-* [Move the Azure subscription](../cost-management-billing/manage/billing-subscription-transfer.md) containing the virtual network to the same Microsoft Entra directory as the managed domain. +* [Move the Azure subscription](/azure/cost-management-billing/manage/billing-subscription-transfer) containing the virtual network to the same Microsoft Entra directory as the managed domain. ## Next steps |
active-directory-domain-services | Network Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md | Virtual network peering is a mechanism that connects two virtual networks in the same region through the Azure backbone network. ![Virtual network connectivity using peering](./media/active-directory-domain-services-design-guide/vnet-peering.png) -For more information, see [Azure virtual network peering overview](../virtual-network/virtual-network-peering-overview.md). +For more information, see [Azure virtual network peering overview](/azure/virtual-network/virtual-network-peering-overview). ### Virtual Private Networking (VPN) You can connect a virtual network to another virtual network (VNet-to-VNet) in the same way that you can connect a virtual network to an on-premises network. ![Virtual network connectivity using a VPN Gateway](./media/active-directory-domain-services-design-guide/vnet-connection-vpn-gateway.jpg) -For more information on using virtual private networking, read [Configure a VNet-to-VNet VPN gateway connection by using the Microsoft Entra admin center](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md). +For more information on using virtual private networking, read [Configure a VNet-to-VNet VPN gateway connection by using the Microsoft Entra admin center](/azure/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal). ## Name resolution when connecting virtual networks Don't lock the networking resources used by Domain Services. If networking resources are locked, the Azure platform can't update or delete them when needed. \| Azure resource \| Description \| \|:-\|:\| \| Network interface card \| Domain Services hosts the managed domain on two domain controllers (DCs) that run on Windows Server as Azure VMs. Each VM has a virtual network interface that connects to your virtual network subnet. \|-\| Dynamic standard public IP address \| Domain Services communicates with the synchronization and management service using a Standard SKU public IP address. For more information about public IP addresses, see [IP address types and allocation methods in Azure](../virtual-network/ip-services/public-ip-addresses.md). \| -\| Azure standard load balancer \| Domain Services uses a Standard SKU load balancer for network address translation (NAT) and load balancing (when used with secure LDAP). For more information about Azure load balancers, see [What is Azure Load Balancer?](../load-balancer/load-balancer-overview.md) \| +\| Dynamic standard public IP address \| Domain Services communicates with the synchronization and management service using a Standard SKU public IP address. For more information about public IP addresses, see [IP address types and allocation methods in Azure](/azure/virtual-network/ip-services/public-ip-addresses). \| +\| Azure standard load balancer \| Domain Services uses a Standard SKU load balancer for network address translation (NAT) and load balancing (when used with secure LDAP). For more information about Azure load balancers, see [What is Azure Load Balancer?](/azure/load-balancer/load-balancer-overview) \| \| Network address translation (NAT) rules \| Domain Services creates and uses two Inbound NAT rules on the load balancer for secure PowerShell remoting. If a Standard SKU load balancer is used, it will have an Outbound NAT Rule too. For the Basic SKU load balancer, no Outbound NAT rule is required. \| \| Load balancer rules \| When a managed domain is configured for secure LDAP on TCP port 636, three rules are created and used on a load balancer to distribute the traffic. Don't lock the networking resources used by Domain Services. If networking resources are locked, the Azure platform can't update or delete them when needed. ## Network security groups and required ports -A [network security group (NSG)](../virtual-network/network-security-groups-overview.md) contains a list of rules that allow or deny network traffic in an Azure virtual network. When you deploy a managed domain, a network security group is created with a set of rules that let the service provide authentication and management functions. This default network security group is associated with the virtual network subnet your managed domain is deployed into. +A [network security group (NSG)](/azure/virtual-network/network-security-groups-overview) contains a list of rules that allow or deny network traffic in an Azure virtual network. When you deploy a managed domain, a network security group is created with a set of rules that let the service provide authentication and management functions. This default network security group is associated with the virtual network subnet your managed domain is deployed into. The following sections cover network security groups and Inbound and Outbound port requirements. You must also route inbound traffic from the IP addresses included in the respective Azure service tags. For more information about some of the network resources and connection options used by Domain Services, see the following articles: -* [Azure virtual network peering](../virtual-network/virtual-network-peering-overview.md) -* [Azure VPN gateways](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md) -* [Azure network security groups](../virtual-network/network-security-groups-overview.md) +* [Azure virtual network peering](/azure/virtual-network/virtual-network-peering-overview) +* [Azure VPN gateways](/azure/vpn-gateway/vpn-gateway-about-vpn-gateway-settings) +* [Azure network security groups](/azure/virtual-network/network-security-groups-overview) |
active-directory-domain-services | Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/notifications.md | You can also choose to have all *Global Administrators* of the Microsoft Entra directory receive these email notifications. To review the existing email notification recipients, or add recipients, complete the following steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#authentication-policy-administrator). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](/azure/active-directory/roles/permissions-reference#authentication-policy-administrator). 1. Search for and select **Microsoft Entra Domain Services**. 1. Select your managed domain, such as *aaddscontoso.com*. 1. On the left-hand side of the Domain Services resource window, select **Notification settings**. The existing recipients for email notifications are shown. |
active-directory-domain-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/overview.md | To get started, [create a managed domain using the Microsoft Entra admin center] [compare]: compare-identity-solutions.md [synchronization]: synchronization.md [tutorial-create]: tutorial-create-instance.md-[azure-ad-connect]: ../active-directory/hybrid/whatis-azure-ad-connect.md -[password-hash-sync]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md -[availability-zones]: ../reliability/availability-zones-overview.md -[forest-trusts]: concepts-resource-forest.md +[azure-ad-connect]: /azure/active-directory/hybrid/connect/whatis-azure-ad-connect +[password-hash-sync]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization +[availability-zones]: /azure/reliability/availability-zones-overview +[forest-trusts]: ./concepts-forest-trust.md [administration-concepts]: administration-concepts.md [synchronization]: synchronization.md [concepts-replica-sets]: concepts-replica-sets.md |
active-directory-domain-services | Password Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/password-policy.md | For more information about password policies and using the Active Directory Administrative Center, see the following articles: * [Configure fine-grained password policies using AD Administration Center](/windows-server/identity/ad-ds/get-started/adac/introduction-to-active-directory-administrative-center-enhancements--level-100-#fine_grained_pswd_policy_mgmt) <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [tutorial-create-management-vm]: tutorial-create-management-vm.md |
active-directory-domain-services | Powershell Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md | To see the managed domain in action, you can [domain-join a Windows VM][windows-join]. [windows-join]: join-windows-vm.md [tutorial-ldaps]: tutorial-configure-ldaps.md [tutorial-phs]: tutorial-configure-password-hash-sync.md-[nsg-overview]: ../virtual-network/network-security-groups-overview.md +[nsg-overview]: /azure/virtual-network/network-security-groups-overview [network-ports]: network-considerations.md#network-security-groups-and-required-ports <!-- EXTERNAL LINKS -->-[Connect-AzAccount]: /powershell/module/Az.Accounts/Connect-AzAccount -[Connect-AzureAD]: /powershell/module/AzureAD/Connect-AzureAD +[Connect-AzAccount]: /powershell/module/az.accounts/connect-azaccount +[Connect-AzureAD]: /powershell/module/azuread/connect-azuread [New-AzureADServicePrincipal]: /powershell/module/AzureAD/New-AzureADServicePrincipal-[New-AzureADGroup]: /powershell/module/AzureAD/New-AzureADGroup -[Add-AzureADGroupMember]: /powershell/module/AzureAD/Add-AzureADGroupMember -[Get-AzureADGroup]: /powershell/module/AzureAD/Get-AzureADGroup -[Get-AzureADUser]: /powershell/module/AzureAD/Get-AzureADUser -[Register-AzResourceProvider]: /powershell/module/Az.Resources/Register-AzResourceProvider -[New-AzResourceGroup]: /powershell/module/Az.Resources/New-AzResourceGroup -[New-AzVirtualNetworkSubnetConfig]: /powershell/module/Az.Network/New-AzVirtualNetworkSubnetConfig -[New-AzVirtualNetwork]: /powershell/module/Az.Network/New-AzVirtualNetwork -[Get-AzSubscription]: /powershell/module/Az.Accounts/Get-AzSubscription -[cloud-shell]: ../cloud-shell/cloud-shell-windows-users.md -[availability-zones]: ../reliability/availability-zones-overview.md +[New-AzureADGroup]: /powershell/module/azuread/new-azureadgroup +[Add-AzureADGroupMember]: /powershell/module/azuread/add-azureadgroupmember +[Get-AzureADGroup]: /powershell/module/azuread/get-azureadgroup +[Get-AzureADUser]: /powershell/module/azuread/get-azureaduser +[Register-AzResourceProvider]: /powershell/module/az.resources/register-azresourceprovider +[New-AzResourceGroup]: /powershell/module/az.resources/new-azresourcegroup +[New-AzVirtualNetworkSubnetConfig]: /powershell/module/az.network/new-azvirtualnetworksubnetconfig +[New-AzVirtualNetwork]: /powershell/module/az.network/new-azvirtualnetwork +[Get-AzSubscription]: /powershell/module/az.accounts/get-azsubscription +[cloud-shell]: /azure/active-directory/develop/configure-app-multi-instancing +[availability-zones]: /azure/reliability/availability-zones-overview [New-AzNetworkSecurityRuleConfig]: /powershell/module/az.network/new-aznetworksecurityruleconfig [New-AzNetworkSecurityGroup]: /powershell/module/az.network/new-aznetworksecuritygroup [Set-AzVirtualNetworkSubnetConfig]: /powershell/module/az.network/set-azvirtualnetworksubnetconfig |
active-directory-domain-services | Powershell Scoped Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md | To complete this article, you need the following resources and privileges: * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant]. * A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant. * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][tutorial-create-instance].-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope. +* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope. ## Scoped synchronization overview To learn more about the synchronization process, see [Understand synchronization [scoped-sync]: scoped-synchronization.md [concepts-sync]: synchronization.md [tutorial-create-instance]: tutorial-create-instance.md-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory <!-- EXTERNAL LINKS --> [Connect-AzureAD]: /powershell/module/azuread/connect-azuread |
active-directory-domain-services | Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scenarios.md | For more information about this deployment scenario, see [how to configure domai To get started, [Create and configure a Microsoft Entra Domain Services managed domain][tutorial-create-instance]. <!-- INTERNAL LINKS -->-[hdinsight]: ../hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.md +[hdinsight]: /azure/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds [tutorial-create-instance]: tutorial-create-instance.md [custom-ou]: create-ou.md [create-gpo]: manage-group-policy.md-[sspr]: ../active-directory/authentication/overview-authentication.md#self-service-password-reset +[sspr]: /azure/active-directory/authentication/overview-authentication#self-service-password-reset [compare]: compare-identity-solutions.md-[azure-ad-connect]: ../active-directory/hybrid/whatis-azure-ad-connect.md +[azure-ad-connect]: /azure/active-directory/hybrid/connect/whatis-azure-ad-connect <!-- EXTERNAL LINKS --> [windows-rds]: /windows-server/remote/remote-desktop-services/rds-azure-adds |
active-directory-domain-services | Scoped Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scoped-synchronization.md | To complete this article, you need the following resources and privileges: * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant]. * A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant. * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][tutorial-create-instance].-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope. +* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope. ## Scoped synchronization overview To learn more about the synchronization process, see [Understand synchronization [scoped-sync-powershell]: powershell-scoped-synchronization.md [concepts-sync]: synchronization.md [tutorial-create-instance]: tutorial-create-instance.md-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory |
active-directory-domain-services | Secure Remote Vm Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-remote-vm-access.md | For more information on improving resiliency of your deployment, see [Remote Des For more information about securing user sign-in, see [How it works: Microsoft Entra multifactor authentication][concepts-mfa]. <!-- INTERNAL LINKS -->-[bastion-overview]: ../bastion/bastion-overview.md -[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[bastion-overview]: /azure/bastion/bastion-overview +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [configure-azureadds-vnet]: tutorial-configure-networking.md [tutorial-create-join-vm]: join-windows-vm.md-[user-mfa-registration]: ../active-directory/authentication/howto-mfa-nps-extension.md#register-users-for-mfa -[nps-extension]: ../active-directory/authentication/howto-mfa-nps-extension.md -[azure-mfa-nps-integration]: ../active-directory/authentication/howto-mfa-nps-extension-rdg.md -[register-nps-ad]:../active-directory/authentication/howto-mfa-nps-extension-rdg.md#register-server-in-active-directory -[create-nps-policy]: ../active-directory/authentication/howto-mfa-nps-extension-rdg.md#configure-network-policy -[concepts-mfa]: ../active-directory/authentication/concept-mfa-howitworks.md +[user-mfa-registration]: /azure/active-directory/authentication/howto-mfa-nps-extension#register-users-for-mfa +[nps-extension]: /azure/active-directory/authentication/howto-mfa-nps-extension +[azure-mfa-nps-integration]: /azure/active-directory/authentication/howto-mfa-nps-extension-rdg +[register-nps-ad]:/azure/active-directory/authentication/howto-mfa-nps-extension-rdg#register-server-in-active-directory +[create-nps-policy]: /azure/active-directory/authentication/howto-mfa-nps-extension-rdg#configure-network-policy +[concepts-mfa]: /azure/active-directory/authentication/concept-mfa-howitworks <!-- EXTERNAL LINKS --> [deploy-remote-desktop]: /windows-server/remote/remote-desktop-services/rds-deploy-infrastructure |
active-directory-domain-services | Secure Your Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md | It takes a few moments for the security settings to be applied to the managed do To learn more about the synchronization process, see [How objects and credentials are synchronized in a managed domain][synchronization]. <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md-[global-admin]: ../role-based-access-control/elevate-access-global-admin.md +[global-admin]: /azure/role-based-access-control/elevate-access-global-admin [synchronization]: synchronization.md <!-- EXTERNAL LINKS -->-[Get-AzResource]: /powershell/module/az.resources/Get-AzResource -[Set-AzResource]: /powershell/module/Az.Resources/Set-AzResource +[Get-AzResource]: /powershell/module/az.resources/get-azresource +[Set-AzResource]: /powershell/module/az.resources/set-azresource [Connect-AzAccount]: /powershell/module/Az.Accounts/Connect-AzAccount [Connect-AzureAD]: /powershell/module/AzureAD/Connect-AzureAD |
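The cmdlet references above ([Get-AzResource], [Set-AzResource]) reflect the pattern the Secure Your Domain article uses to harden a managed domain from PowerShell. Below is a minimal sketch of that pattern; the `DomainSecuritySettings` property names and the API version are assumptions for illustration, not values confirmed by this excerpt.

```azurepowershell
# Get the managed domain resource. This assumes a single
# Microsoft.AAD/DomainServices resource in the current subscription.
$DomainServicesResource = Get-AzResource -ResourceType "Microsoft.AAD/DomainServices"

# Build the hardened settings. The property names (NtlmV1, TlsV1) are
# assumptions about the resource schema, used here only for illustration.
$securitySettings = @{
    DomainSecuritySettings = @{
        NtlmV1 = "Disabled"
        TlsV1  = "Disabled"
    }
}

# Apply the settings to the managed domain; the API version is an assumption.
Set-AzResource -Id $DomainServicesResource.ResourceId -Properties $securitySettings `
    -ApiVersion "2021-03-01" -Verbose -Force
```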
active-directory-domain-services | Security Audit Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/security-audit-events.md | The following table outlines scenarios for each destination resource type. | Target Resource | Scenario | |:|:|-|Azure Storage| This target should be used when your primary need is to store security audit events for archival purposes. Other targets can be used for archival purposes, however those targets provide capabilities beyond the primary need of archiving. <br /><br />Before you enable Domain Services security audit events, first [Create an Azure Storage account](../storage/common/storage-account-create.md).| -|Azure Event Hubs| This target should be used when your primary need is to share security audit events with additional software such as data analysis software or security information & event management (SIEM) software.<br /><br />Before you enable Domain Services security audit events, [Create an event hub using Microsoft Entra admin center](../event-hubs/event-hubs-create.md)| -|Azure Log Analytics Workspace| This target should be used when your primary need is to analyze and review secure audits from the Microsoft Entra admin center directly.<br /><br />Before you enable Domain Services security audit events, [Create a Log Analytics workspace in the Microsoft Entra admin center.](../azure-monitor/logs/quick-create-workspace.md)| +|Azure Storage| This target should be used when your primary need is to store security audit events for archival purposes. Other targets can be used for archival purposes, however those targets provide capabilities beyond the primary need of archiving. <br /><br />Before you enable Domain Services security audit events, first [Create an Azure Storage account](/azure/storage/common/storage-account-create).| +|Azure Event Hubs| This target should be used when your primary need is to share security audit events with additional software such as data analysis software or security information & event management (SIEM) software.<br /><br />Before you enable Domain Services security audit events, [Create an event hub using Microsoft Entra admin center](/azure/event-hubs/event-hubs-create)| +|Azure Log Analytics Workspace| This target should be used when your primary need is to analyze and review secure audits from the Microsoft Entra admin center directly.<br /><br />Before you enable Domain Services security audit events, [Create a Log Analytics workspace in the Microsoft Entra admin center.](/azure/azure-monitor/logs/quick-create-workspace)| ## Enable security audit events using the Microsoft Entra admin center To enable Domain Services security and DNS audit events using Azure PowerShell, 1. Create the target resource for the audit events. - * **Azure Log Analytic workspaces** - [Create a Log Analytics workspace with Azure PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md). - * **Azure storage** - [Create a storage account using Azure PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell) - * **Azure event hubs** - [Create an event hub using Azure PowerShell](../event-hubs/event-hubs-quickstart-powershell.md). You may also need to use the [New-AzEventHubAuthorizationRule](/powershell/module/az.eventhub/new-azeventhubauthorizationrule) cmdlet to create an authorization rule that grants Domain Services permissions to the event hub *namespace*. The authorization rule must include the **Manage**, **Listen**, and **Send** rights. 
+ * **Azure Log Analytics workspaces** - [Create a Log Analytics workspace with Azure PowerShell](/azure/azure-monitor/logs/powershell-workspace-configuration). + * **Azure storage** - [Create a storage account using Azure PowerShell](/azure/storage/common/storage-account-create?tabs=azure-powershell) + * **Azure event hubs** - [Create an event hub using Azure PowerShell](/azure/event-hubs/event-hubs-quickstart-powershell). You may also need to use the [New-AzEventHubAuthorizationRule](/powershell/module/az.eventhub/new-azeventhubauthorizationrule) cmdlet to create an authorization rule that grants Domain Services permissions to the event hub *namespace*. The authorization rule must include the **Manage**, **Listen**, and **Send** rights. > [!IMPORTANT] > Ensure you set the authorization rule on the event hub namespace and not the event hub itself. -1. Get the resource ID for your Domain Services managed domain using the [Get-AzResource](/powershell/module/Az.Resources/Get-AzResource) cmdlet. Create a variable named *$aadds.ResourceId* to hold the value: +1. Get the resource ID for your Domain Services managed domain using the [Get-AzResource](/powershell/module/az.resources/get-azresource) cmdlet. Create a variable named *$aadds* to hold the value: ```azurepowershell $aadds = Get-AzResource -name aaddsDomainName ``` -1. Configure the Azure Diagnostic settings using the [Set-AzDiagnosticSetting](/powershell/module/Az.Monitor/Set-AzDiagnosticSetting) cmdlet to use the target resource for Microsoft Entra Domain Services audit events. In the following examples, the variable *$aadds.ResourceId* is used from the previous step. +1. Configure the Azure Diagnostic settings using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet to use the target resource for Microsoft Entra Domain Services audit events. In the following examples, the *$aadds.ResourceId* value from the previous step is used. * **Azure storage** - Replace *storageAccountId* with your storage account name: To enable Domain Services security and DNS audit events using Azure PowerShell, Log Analytics workspaces let you view and analyze the security and DNS audit events using Azure Monitor and the Kusto query language. This query language is designed for read-only use and offers powerful analytic capabilities with an easy-to-read syntax. For more information about getting started with the Kusto query language, see the following articles: -* [Azure Monitor documentation](../azure-monitor/index.yml) -* [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md) -* [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md) -* [Create and share dashboards of Log Analytics data](../azure-monitor/visualize/tutorial-logs-dashboards.md) +* [Azure Monitor documentation](/azure/azure-monitor/) +* [Get started with Log Analytics in Azure Monitor](/azure/azure-monitor/logs/log-analytics-tutorial) +* [Get started with log queries in Azure Monitor](/azure/azure-monitor/logs/get-started-queries) +* [Create and share dashboards of Log Analytics data](/azure/azure-monitor/visualize/tutorial-logs-dashboards) The following sample queries can be used to start analyzing audit events from Domain Services. The following audit event categories are available: For specific information on Kusto, see the following articles: -* [Overview](/azure/kusto/query/) of the Kusto query language. 
-* [Kusto tutorial](/azure/kusto/query/tutorial) to familiarize you with query basics. -* [Sample queries](/azure/kusto/query/samples) that help you learn new ways to see your data. -* Kusto [best practices](/azure/kusto/query/best-practices) to optimize your queries for success. +* [Overview](/azure/data-explorer/kusto/query/) of the Kusto query language. +* [Kusto tutorial](/azure/data-explorer/kusto/query/tutorials/learn-common-operators) to familiarize you with query basics. +* [Sample queries](/azure/data-explorer/kusto/query/tutorials/learn-common-operators) that help you learn new ways to see your data. +* Kusto [best practices](/azure/data-explorer/kusto/query/best-practices) to optimize your queries for success. |
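To make the diagnostic-settings flow above concrete, here's a minimal sketch that routes the managed domain's audit events to a Log Analytics workspace. The resource group and workspace names are illustrative placeholders, and enabling the setting wholesale with `-Enabled $true` is an assumption; the article's own examples may scope specific categories.

```azurepowershell
# Resource ID of the managed domain, as in the article's example.
$aadds = Get-AzResource -Name aaddsDomainName

# Look up an existing Log Analytics workspace; both names are placeholders.
$workspace = Get-AzOperationalInsightsWorkspace `
    -ResourceGroupName "myResourceGroup" -Name "aadds-workspace"

# Send the managed domain's security and DNS audit events to the workspace.
Set-AzDiagnosticSetting -ResourceId $aadds.ResourceId `
    -WorkspaceId $workspace.ResourceId -Enabled $true
```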
active-directory-domain-services | Suspension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/suspension.md | To keep your managed domain healthy and minimize the risk of it becoming suspend <!-- INTERNAL LINKS --> [alert-nsg]: alert-nsg.md-[azure-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md +[azure-support]: /azure/active-directory/fundamentals/how-to-get-support [resolve-alerts]: troubleshoot-alerts.md |
active-directory-domain-services | Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md | For hybrid user accounts synced from on-premises AD DS environment using Microso ## Next steps -For more information on the specifics of password synchronization, see [How password hash synchronization works with Microsoft Entra Connect](../active-directory/hybrid/how-to-connect-password-hash-synchronization.md?context=/azure/active-directory-domain-services/context/azure-ad-ds-context). +For more information on the specifics of password synchronization, see [How password hash synchronization works with Microsoft Entra Connect](/azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization?context=/azure/active-directory-domain-services/context/azure-ad-ds-context). To get started with Domain Services, [create a managed domain](tutorial-create-instance.md). |
active-directory-domain-services | Template Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md | To complete this article, you need the following resources: * Install and configure Azure AD PowerShell. * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Microsoft Entra ID](/powershell/azure/active-directory/install-adv2). * Make sure that you sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. +* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. * You need Domain Services Contributor Azure role to create the required Domain Services resources. ## DNS naming requirements To see the managed domain in action, you can [domain-join a Windows VM][windows- [windows-join]: join-windows-vm.md [tutorial-ldaps]: tutorial-configure-ldaps.md [tutorial-phs]: tutorial-configure-password-hash-sync.md-[availability-zones]: ../reliability/availability-zones-overview.md -[portal-deploy]: ../azure-resource-manager/templates/deploy-portal.md -[powershell-deploy]: ../azure-resource-manager/templates/deploy-powershell.md +[availability-zones]: /azure/reliability/availability-zones-overview +[portal-deploy]: /azure/azure-resource-manager/templates/deploy-portal +[powershell-deploy]: /azure/azure-resource-manager/templates/deploy-powershell [scoped-sync]: scoped-synchronization.md-[resource-forests]: concepts-resource-forest.md +[resource-forests]: ./concepts-forest-trust.md <!-- EXTERNAL LINKS --> [Connect-AzAccount]: /powershell/module/Az.Accounts/Connect-AzAccount To see the managed domain in action, you can [domain-join a Windows VM][windows- [Register-AzResourceProvider]: /powershell/module/Az.Resources/Register-AzResourceProvider [New-AzResourceGroup]: /powershell/module/Az.Resources/New-AzResourceGroup [Get-AzSubscription]: /powershell/module/Az.Accounts/Get-AzSubscription-[cloud-shell]: ../cloud-shell/cloud-shell-windows-users.md +[cloud-shell]: /azure/active-directory/develop/configure-app-multi-instancing [naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain-[New-AzResourceGroupDeployment]: /powershell/module/Az.Resources/New-AzResourceGroupDeployment +[New-AzResourceGroupDeployment]: /powershell/module/az.resources/new-azresourcegroupdeployment |
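As a minimal sketch of the sign-in prerequisites above (the tenant ID is a placeholder):

```azurepowershell
# Install the legacy AzureAD module if it isn't already present, then sign in.
Install-Module -Name AzureAD -Scope CurrentUser
Connect-AzureAD -TenantId "00000000-0000-0000-0000-000000000000"

# The template deployment itself is driven by Az cmdlets (Connect-AzAccount is
# referenced above), so an Az session is needed too.
Connect-AzAccount
```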
active-directory-domain-services | Troubleshoot Account Lockout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-account-lockout.md | If you still have problems joining your VM to the managed domain, [find help and <!-- INTERNAL LINKS --> [configure-fgpp]: password-policy.md [security-audit-events]: security-audit-events.md-[azure-ad-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md +[azure-ad-support]: /azure/active-directory/fundamentals/how-to-get-support |
active-directory-domain-services | Troubleshoot Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-alerts.md | The managed domain's health automatically updates itself within two hours and re Domain Services requires an active subscription, and can't be moved to a different subscription. If the Azure subscription that the managed domain was associated with is deleted, you must recreate an Azure subscription and managed domain. -1. [Create an Azure subscription](../cost-management-billing/manage/create-subscription.md). +1. [Create an Azure subscription](/azure/cost-management-billing/manage/create-subscription). 1. [Delete the managed domain](delete-aadds.md) from your existing Microsoft Entra directory. 1. [Create a replacement managed domain](tutorial-create-instance.md). Domain Services requires an active subscription. If the Azure subscription that the managed domain was associated with isn't active, you must renew it to reactivate the subscription. -1. [Renew your Azure subscription](../cost-management-billing/manage/subscription-disabled.md). +1. [Renew your Azure subscription](/azure/cost-management-billing/manage/subscription-disabled). 2. Once the subscription is renewed, a Domain Services notification lets you re-enable the managed domain. When the managed domain is enabled again, the managed domain's health automatically updates itself within two hours and removes the alert. This error is unrecoverable. To resolve the alert, [delete your existing managed Some automatically generated service principals are used to manage and create resources for a managed domain. If the access permissions for one of these service principals are changed, the domain is unable to correctly manage resources. The following steps show you how to understand and then grant access permissions to a service principal: -1. Read about [Azure role-based access control and how to grant access to applications in the Microsoft Entra admin center](../role-based-access-control/role-assignments-portal.md). +1. Read about [Azure role-based access control and how to grant access to applications in the Microsoft Entra admin center](/azure/role-based-access-control/role-assignments-portal). 2. Review the access that the service principal with the ID *abba844e-bc0e-44b0-947a-dc74e5d09022* has and grant the access that was denied at an earlier date. ## AADDS112: Not enough IP addresses in the managed domain The following common reasons cause synchronization to stop in a managed domain: Domain Services requires an active subscription. If the Azure subscription that the managed domain was associated with isn't active, you must renew it to reactivate the subscription. -1. [Renew your Azure subscription](../cost-management-billing/manage/subscription-disabled.md). +1. [Renew your Azure subscription](/azure/cost-management-billing/manage/subscription-disabled). 2. Once the subscription is renewed, a Domain Services notification lets you re-enable the managed domain. When the managed domain is enabled again, the managed domain's health automatically updates itself within two hours and removes the alert. When the managed domain is enabled again, the managed domain's health automatica If you still have issues, [open an Azure support request][azure-support] for more troubleshooting help. 
<!-- INTERNAL LINKS -->-[azure-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md +[azure-support]: /azure/active-directory/fundamentals/how-to-get-support |
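For the service principal alert above, a scripted version of "review and re-grant access" might look like the following sketch. Treating the documented ID as an application ID, and the role and scope shown, are assumptions for illustration; grant whatever access was actually removed in your environment.

```azurepowershell
# Look up the service principal called out by the alert (ID from the article;
# treating it as an application ID is an assumption).
$sp = Get-AzADServicePrincipal -ApplicationId "abba844e-bc0e-44b0-947a-dc74e5d09022"

# Review its current role assignments.
Get-AzRoleAssignment -ObjectId $sp.Id

# Restore the missing assignment; role and scope are illustrative placeholders.
New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>"
```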
active-directory-domain-services | Troubleshoot Domain Join | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-domain-join.md | If you still have problems joining your VM to the managed domain, [find help and <!-- INTERNAL LINKS --> [enable-password-sync]: tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds [network-ports]: network-considerations.md#network-security-groups-and-required-ports-[azure-ad-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md +[azure-ad-support]: /azure/active-directory/fundamentals/how-to-get-support [configure-dns]: tutorial-create-instance.md#update-dns-settings-for-the-azure-virtual-network <!-- EXTERNAL LINKS --> |
active-directory-domain-services | Troubleshoot Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-sign-in.md | If you still have problems joining your VM to the managed domain, [find help and [troubleshoot-account-lockout]: troubleshoot-account-lockout.md [azure-ad-connect-phs]: ./tutorial-configure-password-hash-sync.md [enable-user-accounts]: tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds-[phs-process]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md#password-hash-sync-process-for-azure-ad-domain-services -[azure-ad-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md +[phs-process]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization#password-hash-sync-process-for-azure-ad-domain-services +[azure-ad-support]: /azure/active-directory/fundamentals/how-to-get-support |
active-directory-domain-services | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md | Check if you've disabled an application with the identifier *00000002-0000-0000-c000- To check the status of this application and enable it if needed, complete the following steps: -1. In the [Microsoft Entra admin center](https://entra.microsoft.com), seearch for and select **Enterprise applications**. +1. In the [Microsoft Entra admin center](https://entra.microsoft.com), search for and select **Enterprise applications**. 1. Choose *All applications* from the **Application Type** drop-down menu, then select **Apply**. 1. In the search box, enter *00000002-0000-0000-c000-000000000000*. Select the application, then choose **Properties**. 1. If **Enabled for users to sign-in** is set to *No*, set the value to *Yes*, then select **Save**. If you continue to have issues, [open an Azure support request][azure-support] f [password-policy]: password-policy.md [check-health]: check-health.md [troubleshoot-alerts]: troubleshoot-alerts.md-[Remove-MsolUser]: /powershell/module/MSOnline/Remove-MsolUser -[azure-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md +[Remove-MsolUser]: /powershell/module/msonline/remove-msoluser +[azure-support]: /azure/active-directory/fundamentals/how-to-get-support |
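If you'd rather script the check above, here's a minimal sketch with the legacy AzureAD module; using Set-AzureADServicePrincipal this way is assumed to be equivalent to the portal's **Enabled for users to sign-in** toggle rather than confirmed by the article.

```azurepowershell
# Find the service principal for the application ID from the article.
$sp = Get-AzureADServicePrincipal -Filter "AppId eq '00000002-0000-0000-c000-000000000000'"

# Inspect the current state, then re-enable sign-in. The AzureAD module types
# this parameter as a string.
$sp.AccountEnabled
Set-AzureADServicePrincipal -ObjectId $sp.ObjectId -AccountEnabled "true"
```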
active-directory-domain-services | Tshoot Ldaps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tshoot-ldaps.md | If you have trouble connecting to a Microsoft Entra DS managed domain using secu If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance. <!-- INTERNAL LINKS -->-[azure-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md +[azure-support]: /azure/active-directory/fundamentals/how-to-get-support [configure-ldaps]: tutorial-configure-ldaps.md [certs-prereqs]: tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap [client-cert]: tutorial-configure-ldaps.md#export-a-certificate-for-client-computers |
active-directory-domain-services | Tutorial Configure Ldaps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-ldaps.md | To complete this tutorial, you need the following resources and privileges: * If needed, [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance]. * The *LDP.exe* tool installed on your computer. * If needed, [install the Remote Server Administration Tools (RSAT)][rsat] for *Active Directory Domain Services and LDAP*.-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable secure LDAP. +* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable secure LDAP. ## Sign in to the Microsoft Entra admin center In this tutorial, you learned how to: > [Configure password hash synchronization for a hybrid Microsoft Entra environment](tutorial-configure-password-hash-sync.md) <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [secure-domain]: secure-your-domain.md <!-- EXTERNAL LINKS --> [rsat]: /windows-server/remote/remote-server-administration-tools-[ldap-query-basics]: /windows/desktop/ad/creating-a-query-filter +[ldap-query-basics]: /windows/win32/ad/creating-a-query-filter [New-SelfSignedCertificate]: /powershell/module/pki/new-selfsignedcertificate |
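The [New-SelfSignedCertificate] reference above is how the tutorial creates a certificate for secure LDAP when a trusted CA isn't available. A minimal sketch for a test environment follows; the domain name is a placeholder, and production domains should use a certificate from a trusted certificate authority.

```azurepowershell
# Create a self-signed wildcard certificate for secure LDAP testing.
# Replace aaddscontoso.com with your managed domain name.
$lifetime = Get-Date
New-SelfSignedCertificate -Subject "*.aaddscontoso.com" `
    -NotAfter $lifetime.AddDays(365) `
    -KeyUsage DigitalSignature, KeyEncipherment `
    -Type SSLServerAuthentication `
    -DnsName "*.aaddscontoso.com", "aaddscontoso.com"
```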
active-directory-domain-services | Tutorial Configure Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-networking.md | To complete this tutorial, you need the following resources and privileges: * If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A Microsoft Entra tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. +* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. * You need Domain Services Contributor Azure role to create the required Domain Services resources. * A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant. * If needed, the first tutorial [creates and configures a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance]. To see this managed domain in action, create and join a virtual machine to the d > [Join a Windows Server virtual machine to your managed domain](join-windows-vm.md) <!-- INTERNAL LINKS --> -[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md-[peering-overview]: ../virtual-network/virtual-network-peering-overview.md +[peering-overview]: /azure/virtual-network/virtual-network-peering-overview [network-considerations]: network-considerations.md |
active-directory-domain-services | Tutorial Configure Password Hash Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-password-hash-sync.md | In this tutorial, you learned: > [Learn how synchronization works in a Microsoft Entra Domain Services managed domain](synchronization.md) <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md-[enable-azure-ad-connect]: ../active-directory/hybrid/how-to-connect-install-express.md +[enable-azure-ad-connect]: /azure/active-directory/hybrid/connect/how-to-connect-install-express <!-- EXTERNAL LINKS --> [azure-ad-connect-download]: https://www.microsoft.com/download/details.aspx?id=47594 |
active-directory-domain-services | Tutorial Create Forest Trust | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-forest-trust.md | To complete this tutorial, you need the following resources and privileges: ## Sign in to the Microsoft Entra admin center -In this tutorial, you create and configure the outbound forest trust from Domain Services using the Microsoft Entra admin center. To get started, first sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to modify a Domain Services instance. +In this tutorial, you create and configure the outbound forest trust from Domain Services using the Microsoft Entra admin center. To get started, first sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to modify a Domain Services instance. ## Networking considerations The following common scenarios let you validate that forest trust correctly auth You should have a Windows Server virtual machine joined to the managed domain. Use this virtual machine to test that your on-premises user can authenticate on a virtual machine. If needed, [create a Windows VM and join it to the managed domain][join-windows-vm]. -1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](../bastion/bastion-overview.md) and your Domain Services administrator credentials. +1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](/azure/bastion/bastion-overview) and your Domain Services administrator credentials. 1. Open a command prompt and use the `whoami` command to show the distinguished name of the currently authenticated user: ```console Using the Windows Server VM joined to the Domain Services forest, you can test t #### Enable file and printer sharing -1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](../bastion/bastion-overview.md) and your Domain Services administrator credentials. +1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](/azure/bastion/bastion-overview) and your Domain Services administrator credentials. 1. Open **Windows Settings**, then search for and select **Network and Sharing Center**. 1. Choose the option for **Change advanced sharing** settings. 
For more conceptual information about forest in Domain Services, see [How do for <!-- INTERNAL LINKS --> [concepts-trust]: concepts-forest-trust.md-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance-advanced]: tutorial-create-instance-advanced.md [howto-change-sku]: change-sku.md-[vpn-gateway]: ../vpn-gateway/vpn-gateway-about-vpngateways.md -[expressroute]: ../expressroute/expressroute-introduction.md +[vpn-gateway]: /azure/vpn-gateway/vpn-gateway-about-vpngateways +[expressroute]: /azure/expressroute/expressroute-introduction [join-windows-vm]: join-windows-vm.md |
active-directory-domain-services | Tutorial Create Instance Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md | To complete this tutorial, you need the following resources and privileges: * If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A Microsoft Entra tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. -* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Domain Services resources. +* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. +* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) Azure role to create the required Domain Services resources. Although not required for Domain Services, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Microsoft Entra tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it. 
To see this managed domain in action, create and join a virtual machine to the d <!-- INTERNAL LINKS --> [tutorial-create-instance]: tutorial-create-instance.md-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [network-considerations]: network-considerations.md-[create-dedicated-subnet]: ../virtual-network/virtual-network-manage-subnet.md#add-a-subnet +[create-dedicated-subnet]: /azure/virtual-network/virtual-network-manage-subnet#add-a-subnet [scoped-sync]: scoped-synchronization.md [on-prem-sync]: tutorial-configure-password-hash-sync.md-[configure-sspr]: ../active-directory/authentication/tutorial-enable-sspr.md -[password-hash-sync-process]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md#password-hash-sync-process-for-azure-ad-domain-services -[resource-forests]: concepts-resource-forest.md -[availability-zones]: ../reliability/availability-zones-overview.md +[configure-sspr]: /azure/active-directory/authentication/tutorial-enable-sspr +[password-hash-sync-process]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization#password-hash-sync-process-for-azure-ad-domain-services +[resource-forests]: ./concepts-forest-trust.md +[availability-zones]: /azure/reliability/availability-zones-overview [concepts-sku]: administration-concepts.md#azure-ad-ds-skus <!-- EXTERNAL LINKS --> |
active-directory-domain-services | Tutorial Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md | To complete this tutorial, you need the following resources and privileges: * If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A Microsoft Entra tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. -* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Domain Services resources. +* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services. +* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) Azure role to create the required Domain Services resources. * A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers that can't perform general internet queries might block the ability to create a managed domain. Although not required for Domain Services, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Microsoft Entra tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it. To authenticate users on the managed domain, Domain Services needs password hash > > Synchronized credential information in Microsoft Entra ID can't be re-used if you later create a managed domain - you must reconfigure the password hash synchronization to store the password hashes again. Previously domain-joined VMs or users won't be able to immediately authenticate - Microsoft Entra ID needs to generate and store the password hashes in the new managed domain. >-> [Microsoft Entra Connect Cloud Sync is not supported with Domain Services](../active-directory/cloud-sync/what-is-cloud-sync.md#comparison-between-azure-ad-connect-and-cloud-sync). On-premises users need to be synced using Microsoft Entra Connect in order to be able to access domain-joined VMs. For more information, see [Password hash sync process for Domain Services and Microsoft Entra Connect][password-hash-sync-process]. +> [Microsoft Entra Connect Cloud Sync is not supported with Domain Services](/azure/active-directory/hybrid/cloud-sync/what-is-cloud-sync#comparison-between-azure-ad-connect-and-cloud-sync). On-premises users need to be synced using Microsoft Entra Connect in order to be able to access domain-joined VMs. For more information, see [Password hash sync process for Domain Services and Microsoft Entra Connect][password-hash-sync-process]. 
The steps to generate and store these password hashes are different for cloud-only user accounts created in Microsoft Entra ID versus user accounts that are synchronized from your on-premises directory using Microsoft Entra Connect. Before you domain-join VMs and deploy applications that use the managed domain, <!-- INTERNAL LINKS --> [tutorial-create-instance-advanced]: tutorial-create-instance-advanced.md-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [network-considerations]: network-considerations.md-[create-dedicated-subnet]: ../virtual-network/virtual-network-manage-subnet.md#add-a-subnet +[create-dedicated-subnet]: /azure/virtual-network/virtual-network-manage-subnet#add-a-subnet [scoped-sync]: scoped-synchronization.md [on-prem-sync]: tutorial-configure-password-hash-sync.md-[configure-sspr]: ../active-directory/authentication/tutorial-enable-sspr.md -[password-hash-sync-process]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md#password-hash-sync-process-for-azure-ad-domain-services +[configure-sspr]: /azure/active-directory/authentication/tutorial-enable-sspr +[password-hash-sync-process]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization#password-hash-sync-process-for-azure-ad-domain-services [tutorial-create-instance-advanced]: tutorial-create-instance-advanced.md [skus]: overview.md-[resource-forests]: concepts-resource-forest.md -[availability-zones]: ../reliability/availability-zones-overview.md +[resource-forests]: ./concepts-forest-trust.md +[availability-zones]: /azure/reliability/availability-zones-overview [concepts-sku]: administration-concepts.md#azure-ad-ds-skus <!-- EXTERNAL LINKS --> |
active-directory-domain-services | Tutorial Create Management Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-management-vm.md | To safely interact with your managed domain from other applications, enable secu > [Configure secure LDAP for your managed domain](tutorial-configure-ldaps.md) <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md-[azure-bastion]: ../bastion/tutorial-create-host-portal.md +[azure-bastion]: /azure/bastion/tutorial-create-host-portal |
active-directory-domain-services | Tutorial Create Replica Set | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-replica-set.md | For more conceptual information, learn how replica sets work in Domain Services. <!-- INTERNAL LINKS --> [replica-sets]: concepts-replica-sets.md [tutorial-create-instance]: tutorial-create-instance-advanced.md-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [howto-change-sku]: change-sku.md [concepts-replica-sets]: concepts-replica-sets.md |
active-directory-domain-services | Tutorial Perform Disaster Recovery Drill | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-perform-disaster-recovery-drill.md | For more conceptual information, learn how replica sets work in Domain Services. <!-- INTERNAL LINKS --> [replica-sets]: concepts-replica-sets.md [tutorial-create-instance]: tutorial-create-instance-advanced.md-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [howto-change-sku]: change-sku.md [concepts-replica-sets]: concepts-replica-sets.md |
active-directory-domain-services | Use Azure Monitor Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/use-azure-monitor-workbooks.md | Domain Services includes the following two workbook templates: * Security overview report * Account activity report -For more information about how to edit and manage workbooks, see [Azure Monitor Workbooks overview](../azure-monitor/visualize/workbooks-overview.md). +For more information about how to edit and manage workbooks, see [Azure Monitor Workbooks overview](/azure/azure-monitor/visualize/workbooks-overview). ## Use the security overview report workbook If you need to adjust password and lockout policies, see [Password and account l For problems with users, learn how to troubleshoot [account sign-in problems][troubleshoot-sign-in] or [account lockout problems][troubleshoot-account-lockout]. <!-- INTERNAL LINKS -->-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md -[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md +[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization +[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory [create-azure-ad-ds-instance]: tutorial-create-instance.md [enable-security-audits]: security-audit-events.md [password-policy]: password-policy.md [troubleshoot-sign-in]: troubleshoot-sign-in.md [troubleshoot-account-lockout]: troubleshoot-account-lockout.md [azure-monitor-queries]: /azure/data-explorer/kusto/query/-[kusto-queries]: /azure/kusto/query/tutorial?pivots=azuredataexplorer +[kusto-queries]: /azure/data-explorer/kusto/query/tutorials/learn-common-operators?pivots=azuredataexplorer |
active-directory | Concept Authentication Strengths | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md | An authentication strength Conditional Access policy works together with [MFA tr - **Authentication methods that aren't currently supported by authentication strength** - The **Email one-time pass (Guest)** authentication method isn't included in the available combinations. -- **Windows Hello for Business** – If the user signed in with Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. But if the user signed in with another method like password as their primary authentication method, and the authentication strength requires Windows Hello for Business, they get prompted to sign in with Windows Hello for Business. +- **Windows Hello for Business** – If the user signed in with Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. However, if the user signed in with another method like password as their primary authentication method, and the authentication strength requires Windows Hello for Business, they aren't prompted to sign in with Windows Hello for Business. The user needs to restart the session, choose **Sign-in options**, and select a method required by the authentication strength. ## Known issues |
active-directory | Concept Certificate Based Authentication Technical Deep Dive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md | Now we'll walk through each step: :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-alt.png" alt-text="Screenshot of the Sign-in if FIDO2 is also enabled."::: -1. Once the user selects certificate-based authentication, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) or [`https://t<tenant id>.certauth.login.microsoftonline.com`](`https://t<tenant id>.certauth.login.microsoftonline.com`) for Azure Global. For [Azure Government](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us). +1. Once the user selects certificate-based authentication, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us). - The endpoint performs TLS mutual authentication, and requests the client certificate as part of the TLS handshake. You'll see an entry for this request in the Sign-ins log. +However, with the issue hints feature enabled (coming soon), the new certauth endpoint will change to `https://t{tenantid}.certauth.login.microsoftonline.com`. ++The endpoint performs TLS mutual authentication, and requests the client certificate as part of the TLS handshake. You'll see an entry for this request in the Sign-ins log. - :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png" alt-text="Screenshot of the Sign-ins log in Microsoft Entra ID." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png"::: - >[!NOTE]- >The network administrator should allow access to the User sign-in page and certauth endpoint *.certauth.login.microsoftonline.com for the customerΓÇÖs cloud environment. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake. + >The network administrator should allow access to the User sign-in page and certauth endpoint `*.certauth.login.microsoftonline.com` for the customer's cloud environment. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake. ++ Customers should make sure their TLS inspection disablement also work for the new url with issuer hints. Our recommendation is not to hardcode the url with tenantId as for B2B users the tenantId might change. Use a regular expression to allow both the old and new URL to work for TLS inspection disablement. For example, use `*.certauth.login.microsoftonline.com` or `*certauth.login.microsoftonline.com`for Azure Global tenants, and `*.certauth.login.microsoftonline.us` or `*certauth.login.microsoftonline.us` for Azure Government tenants, depending on the proxy used. 
+ Without this change, certificate-based authentication will fail when you enable the Issuer Hints feature. ++ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png" alt-text="Screenshot of the Sign-ins log in Microsoft Entra ID." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png"::: + Click the log entry to bring up **Activity Details** and click **Authentication Details**. You'll see an entry for the X.509 certificate. :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/entry.png" alt-text="Screenshot of the entry for X.509 certificate."::: |
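You can validate a proxy-bypass pattern against both certauth host shapes before rollout. A minimal sketch; the tenant ID shown is a placeholder, and your proxy's pattern syntax may differ from this regular expression:

```powershell
# Test a bypass pattern against the current endpoint and the issuer-hints endpoint.
$pattern = '^([^.]+\.)?certauth\.login\.microsoftonline\.com$'   # Azure Global
# For Azure Government, use '^([^.]+\.)?certauth\.login\.microsoftonline\.us$'

$hosts = @(
    'certauth.login.microsoftonline.com',                                        # current endpoint
    't11111111-2222-3333-4444-555555555555.certauth.login.microsoftonline.com'   # issuer hints (placeholder tenant ID)
)

foreach ($h in $hosts) {
    "{0} -> bypassed: {1}" -f $h, ($h -match $pattern)
}
```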
active-directory | How To Mfa Registration Campaign | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md | Here are a few sample JSONs you can use to get started! { "registrationEnforcement": { "authenticationMethodsRegistrationCampaign": {- "snoozeDurationInDays": 0, + "snoozeDurationInDays": 1, + "enforceRegistrationAfterAllowedSnoozes": true, "state": "enabled", "excludeTargets": [], "includeTargets": [ Here are a few sample JSONs you can use to get started! { "registrationEnforcement": { "authenticationMethodsRegistrationCampaign": {- "snoozeDurationInDays": 0, + "snoozeDurationInDays": 1, + "enforceRegistrationAfterAllowedSnoozes": true, "state": "enabled", "excludeTargets": [], "includeTargets": [ Here are a few sample JSONs you can use to get started! { "registrationEnforcement": { "authenticationMethodsRegistrationCampaign": {- "snoozeDurationInDays": 0, + "snoozeDurationInDays": 1, + "enforceRegistrationAfterAllowedSnoozes": true, "state": "enabled", "excludeTargets": [ { |
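If you apply one of the sample JSON payloads above programmatically rather than through the portal, the registration campaign settings are patched onto the authentication methods policy. A minimal sketch with the Microsoft Graph PowerShell SDK, assuming the payload is saved to a local file (the path is a placeholder) and the account has `Policy.ReadWrite.AuthenticationMethod`; the beta endpoint is used here because the snooze enforcement settings surfaced there first:

```powershell
# Apply a registration campaign payload to the authentication methods policy.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

# One of the sample JSON documents shown above, saved locally (placeholder path).
$body = Get-Content -Raw -Path ".\registration-campaign.json"

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy" `
    -ContentType "application/json" `
    -Body $body
```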
active-directory | Howto Mfa Userdevicesettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userdevicesettings.md | If you're assigned the *Authentication Administrator* role, you can require user 1. Browse to **Identity** > **Users** > **All users**. 1. Choose the user you wish to perform an action on and select **Authentication methods**. Then, at the top of the window, choose one of the following options for the user: - **Reset Password** resets the user's password and assigns a temporary password that must be changed on the next sign-in.- - **Require Re-register MFA** makes it so that when the user signs in next time, they're requested to set up a new MFA authentication method. - > [!NOTE] - > The user's currently registered authentication methods aren't deleted when an admin requires re-registration for MFA. After a user re-registers for MFA, we recommend they review their security info and delete any previously registered authentication methods that are no longer usable. + - **Require Re-register MFA** deactivates the user's hardware OATH tokens and deletes the following authentication methods from this user: phone numbers, Microsoft Authenticator apps, and software OATH tokens. If needed, the user is requested to set up a new MFA authentication method the next time they sign in. - **Revoke MFA Sessions** clears the user's remembered MFA sessions and requires them to perform MFA the next time it's required by the policy on the device. :::image type="content" source="media/howto-mfa-userdevicesettings/manage-authentication-methods-in-azure.png" alt-text="Manage authentication methods from the Microsoft Entra admin center"::: To delete a user's app passwords, complete the following steps: This article showed you how to configure individual user settings. To configure overall Microsoft Entra multifactor authentication service settings, see [Configure Microsoft Entra multifactor authentication settings](howto-mfa-mfasettings.md). If your users need help, see the [User guide for Microsoft Entra multifactor authentication](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).+ |
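Because **Require Re-register MFA** now deletes registered methods, it can be useful to capture what a user has registered before you select it. A minimal sketch, assuming the Microsoft Graph PowerShell SDK and the `UserAuthenticationMethod.Read.All` permission; the user ID is a placeholder:

```powershell
# List a user's registered authentication methods before requiring re-registration.
Connect-MgGraph -Scopes "UserAuthenticationMethod.Read.All"

$userId  = "adeleV@contoso.com"   # placeholder user
$methods = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/users/$userId/authentication/methods"

foreach ($m in $methods.value) {
    # '@odata.type' identifies the method kind (phone, Microsoft Authenticator,
    # software OATH token, and so on)
    $m.'@odata.type'
}
```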
active-directory | Troubleshoot Authentication Strengths | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-authentication-strengths.md | To verify if a method can be used: 1. As needed, check if the tenant is enabled for any method required for the authentication strength. Click **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings**. 1. Check which authentication methods are registered for the user in the Authentication methods policy. Click **Users and groups** > _username_ > **Authentication methods**. -If the user is registered for an enabled method that meets the authentication strength, they might need to use another method that isn't available after primary authentication, such as Windows Hello for Business or certificate-based authentication. For more information, see [How each authentication method works](concept-authentication-methods.md#how-each-authentication-method-works). The user needs to restart the session, choose **Sign-in options**, and select a method required by the authentication strength. +If the user is registered for an enabled method that meets the authentication strength, they might need to use another method that isn't available after primary authentication, such as Windows Hello for Business. For more information, see [How each authentication method works](concept-authentication-methods.md#how-each-authentication-method-works). The user needs to restart the session, choose **Sign-in options**, and select a method required by the authentication strength. :::image type="content" border="true" source="./media/troubleshoot-authentication-strengths/choose-another-method.png" alt-text="Screenshot of how to choose another sign-in method."::: |
active-directory | Test Throttle Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-throttle-service-limits.md | The following table lists Microsoft Entra throttling limits to consider when run | Limit type | Resource unit quota | Write quota | |-|-|-| | application+tenant pair | S: 3500, M:5000, L:8000 per 10 seconds | 3000 per 2 minutes and 30 seconds |-| application | 150,000 per 20 seconds | 70,000 per 5 minutes | +| application | 150,000 per 20 seconds | 35,000 per 5 minutes | | tenant | Not Applicable | 18,000 per 5 minutes | The application + tenant pair limit varies based on the number of users in the tenant requests are run against. The tenant sizes are defined as follows: S - under 50 users, M - between 50 and 500 users, and L - above 500 users. |
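When a load test trips these limits, the service responds with HTTP 429 and a `Retry-After` header, and the test client should back off accordingly. A minimal retry sketch, assuming Windows PowerShell 5.1 exception semantics and a `$token` you've already acquired; the request URI is a placeholder:

```powershell
# Back off on HTTP 429 using the server-supplied Retry-After interval.
$uri = "https://graph.microsoft.com/v1.0/users"   # placeholder request

for ($attempt = 1; $attempt -le 5; $attempt++) {
    try {
        $result = Invoke-WebRequest -Uri $uri -Headers @{ Authorization = "Bearer $token" }
        break   # success, stop retrying
    }
    catch {
        $response = $_.Exception.Response
        if ($response -and [int]$response.StatusCode -eq 429) {
            $delay = 5                                    # fallback delay in seconds
            $retryAfter = $response.Headers["Retry-After"]
            if ($retryAfter) { $delay = [int]$retryAfter }
            Start-Sleep -Seconds $delay
        }
        else { throw }   # not a throttling response, surface the error
    }
}
```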
active-directory | Manage Device Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-device-identities.md | You can access the devices overview by completing these steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). 1. Go to **Identity** > **Devices** > **Overview**. -In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring. +In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. It provides links to Intune, Conditional Access, BitLocker keys, and basic monitoring. Device counts on the overview page don't update in real time. Changes should be reflected every few hours. From there, you can go to **All devices** to: ## Manage an Intune device -If you have rights to manage devices in Intune, you can manage devices for which mobile device management is listed as **Microsoft Intune**. If the device isn't enrolled with Microsoft Intune, the **Manage** option won't be available. --<a name='enable-or-disable-an-azure-ad-device'></a> +If you have rights to manage devices in Intune, you can manage devices for which mobile device management is listed as **Microsoft Intune**. If the device isn't enrolled with Microsoft Intune, the **Manage** option isn't available. ## Enable or disable a Microsoft Entra device There are two ways to enable or disable devices: > - Disabling a device revokes the Primary Refresh Token (PRT) and any refresh tokens on the device. > - Printers can't be enabled or disabled in Microsoft Entra ID. -<a name='delete-an-azure-ad-device'></a> - ## Delete a Microsoft Entra device There are two ways to delete a device: There are two ways to delete a device: > - Removes all details attached to the device. For example, BitLocker keys for Windows devices. > - Is a nonrecoverable activity. We don't recommended it unless it's required. -If a device is managed by another management authority, like Microsoft Intune, be sure it's wiped or retired before you delete it. See [How to manage stale devices](manage-stale-devices.md) before you delete a device. +If a device is managed in another management authority, like Microsoft Intune, be sure it's wiped or retired before you delete it. See [How to manage stale devices](manage-stale-devices.md) before you delete a device. ## View or copy a device ID You can use a device ID to verify the device ID details on the device or to trou ## View or copy BitLocker keys -You can view and copy BitLocker keys to allow users to recover encrypted drives. These keys are available only for Windows devices that are encrypted and store their keys in Microsoft Entra ID. You can find these keys when you view a device's details by selecting **Show Recovery Key**. Selecting **Show Recovery Key** will generate an audit log, which you can find in the `KeyManagement` category. +You can view and copy BitLocker keys to allow users to recover encrypted drives. These keys are available only for Windows devices that are encrypted and store their keys in Microsoft Entra ID. You can find these keys when you view a device's details by selecting **Show Recovery Key**. Selecting **Show Recovery Key** generates an audit log entry, which you can find in the `KeyManagement` category. 
![Screenshot that shows how to view BitLocker keys.](./media/manage-device-identities/show-bitlocker-key.png) You can now experience the enhanced **All devices** view. ## Download devices -Global readers, Cloud Device Administrators, Intune Administrators, and Global Administrators can use the **Download devices** option to export a CSV file that lists devices. You can apply filters to determine which devices to list. If you don't apply any filters, all devices will be listed. An export task might run for as long as an hour, depending on your selections. If the export task exceeds 1 hour, it fails, and no file is output. +Global readers, Cloud Device Administrators, Intune Administrators, and Global Administrators can use the **Download devices** option to export a CSV file that lists devices. You can apply filters to determine which devices to list. If you don't apply any filters, all devices are listed. An export task might run for as long as an hour, depending on your selections. If the export task exceeds 1 hour, it fails, and no file is output. The exported list includes these device identity attributes: You must be assigned one of the following roles to manage device settings: > [!NOTE] > The **Require multifactor authentication to register or join devices with Microsoft Entra ID** setting applies to devices that are either Microsoft Entra joined (with some exceptions) or Microsoft Entra registered. This setting doesn't apply to Microsoft Entra hybrid joined devices, [Microsoft Entra joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enable-azure-ad-login-for-a-windows-vm-in-azure), or Microsoft Entra joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying). -- **Maximum number of devices**: This setting enables you to select the maximum number of Microsoft Entra joined or Microsoft Entra registered devices that a user can have in Microsoft Entra ID. If users reach this limit, they can't add more devices until one or more of the existing devices are removed. The default value is **50**. You can increase the value up to 100. If you enter a value above 100, Microsoft Entra ID will set it to 100. You can also use **Unlimited** to enforce no limit other than existing quota limits.+- **Maximum number of devices**: This setting enables you to select the maximum number of Microsoft Entra joined or Microsoft Entra registered devices that a user can have in Microsoft Entra ID. If users reach this limit, they can't add more devices until one or more of the existing devices are removed. The default value is **50**. You can increase the value up to 100. If you enter a value above 100, Microsoft Entra ID sets it to 100. You can also use **Unlimited** to enforce no limit other than existing quota limits. > [!NOTE] > The **Maximum number of devices** setting applies to devices that are either Microsoft Entra joined or Microsoft Entra registered. This setting doesn't apply to Microsoft Entra hybrid joined devices. You must be assigned one of the following roles to manage device settings: This option is a premium edition capability available through products like Microsoft Entra ID P1 or P2 and Enterprise Mobility + Security. - **Enable Microsoft Entra Local Administrator Password Solution (LAPS) (preview)**: LAPS is the management of local account passwords on Windows devices. LAPS provides a solution to securely manage and retrieve the built-in local admin password. 
With the cloud version of LAPS, customers can enable storing and rotation of local admin passwords for both Microsoft Entra ID and Microsoft Entra hybrid joined devices. To learn how to manage LAPS in Microsoft Entra ID, see [the overview article](howto-manage-local-admin-passwords.md). -- **Restrict non-admin users from recovering the BitLocker key(s) for their owned devices**: Admins can block self-service BitLocker key access to the registered owner of the device. Default users without the BitLocker read permission will be unable to view or copy their BitLocker key(s) for their owned devices. You must be a Global Administrator or Privileged Role Administrator to update this setting. +- **Restrict non-admin users from recovering the BitLocker key(s) for their owned devices**: Admins can block self-service BitLocker key access to the registered owner of the device. Default users without the BitLocker read permission are unable to view or copy their BitLocker key(s) for their owned devices. You must be a Global Administrator or Privileged Role Administrator to update this setting. - **Enterprise State Roaming**: For information about this setting, see [the overview article](./enterprise-state-roaming-enable.md). |
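Many of the portal tasks in this article, such as viewing devices or exporting the device list, can also be scripted. A minimal sketch that pulls a filtered device list with Microsoft Graph, assuming the Microsoft Graph PowerShell SDK, the `Device.Read.All` permission, and that the chosen filter property is supported by the endpoint:

```powershell
# List Windows devices with a few identity attributes, similar to Download devices.
Connect-MgGraph -Scopes "Device.Read.All"

$uri = "https://graph.microsoft.com/v1.0/devices" +
       "?`$filter=operatingSystem eq 'Windows'" +
       "&`$select=displayName,deviceId,accountEnabled,operatingSystem"

$devices = Invoke-MgGraphRequest -Method GET -Uri $uri

foreach ($d in $devices.value) {
    "{0} ({1}) enabled={2}" -f $d.displayName, $d.deviceId, $d.accountEnabled
}
```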
active-directory | Tenant Restrictions V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md | Last updated 09/12/2023 -+ |
active-directory | Security Defaults | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-defaults.md | After registration is finished, the following administrator roles will be requir - Global Administrator - Application Administrator - Authentication Administrator+- Authentication Policy Administrator - Billing Administrator - Cloud Application Administrator - Conditional Access Administrator - Exchange Administrator - Helpdesk Administrator+- Identity Governance Administrator - Password Administrator - Privileged Authentication Administrator - Privileged Role Administrator |
active-directory | Entitlement Management Access Package Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md | $policy = $accesspackage.AssignmentPolicies[0] $req = New-MgBetaEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com" ``` +## Configure access assignment as part of a lifecycle workflow ++In the Microsoft Entra Lifecycle Workflows feature, you can add a [Request user access package assignment](lifecycle-workflow-tasks.md#request-user-access-package-assignment) task to an onboarding workflow. The task can specify an access package that users should have. When the workflow runs for a user, an access package assignment request is created automatically. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Global Administrator. ++1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**. ++1. Select an employee onboarding or move workflow. ++1. Select **Tasks** and select **Add task**. ++1. Select **Request user access package assignment** and select **Add**. ++1. Select the newly added task. ++1. Select **Select Access package**, and choose the access package that new or moving users should be assigned to. ++1. Select **Select Policy**, and choose the access package assignment policy in that access package. ++1. Select **Save**. + ## Remove an assignment You can remove an assignment that a user or an administrator had previously requested. if ($assignment -ne $null) { } ``` +## Configure assignment removal as part of a lifecycle workflow ++In the Microsoft Entra Lifecycle Workflows feature, you can add a [Remove access package assignment for user](lifecycle-workflow-tasks.md#remove-access-package-assignment-for-user) task to an offboarding workflow. That task can specify an access package the user might be assigned to. When the workflow runs for a user, their access package assignment is removed automatically. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Global Administrator. ++1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**. ++1. Select an employee offboarding workflow. ++1. Select **Tasks** and select **Add task**. ++1. Select **Remove access package assignment for user** and select **Add**. ++1. Select the newly added task. ++1. Select **Select Access packages**, and choose one or more access packages that users being offboarded should be removed from. ++1. Select **Save**. + ## Next steps - [Change request and settings for an access package](entitlement-management-access-package-request-policy.md) |
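The removal task described above maps to the same underlying API used elsewhere in this article: an assignment request whose type is an administrative removal. A minimal sketch against the beta endpoint, assuming the `EntitlementManagement.ReadWrite.All` permission; the assignment ID is a placeholder:

```powershell
# Create an AdminRemove assignment request for an existing access package assignment.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$assignmentId = "00000000-0000-0000-0000-000000000000"   # placeholder assignment ID
$body = @{
    requestType             = "AdminRemove"
    accessPackageAssignment = @{ id = $assignmentId }
} | ConvertTo-Json

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests" `
    -ContentType "application/json" -Body $body
```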
active-directory | Entitlement Management Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md | You can have policies for users to request access. In these kinds of policies, a - The approval process and the users that can approve or deny access - The duration of a user's access assignment, once approved, before the assignment expires -You can also have policies for users to be assigned access, either by an administrator or [automatically](entitlement-management-access-package-auto-assignment-policy.md). +You can also have policies for users to be assigned access, either [by an administrator](entitlement-management-access-package-assignments.md#directly-assign-a-user), [automatically based on rules](entitlement-management-access-package-auto-assignment-policy.md), or through lifecycle workflows. The following diagram shows an example of the different elements in entitlement management. It shows one catalog with two example access packages. |
active-directory | Entitlement Management Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md | There are several ways that you can configure entitlement management for your or ## Govern access for users in your organization -### Administrator: Assign employees access automatically (preview) +### Administrator: Assign employees access automatically 1. [Create a new access package](entitlement-management-access-package-create.md#start-the-creation-process) 1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#select-resource-roles) 1. [Add an automatic assignment policy](entitlement-management-access-package-auto-assignment-policy.md) +### Administrator: Assign employees access from lifecycle workflows ++1. [Create a new access package](entitlement-management-access-package-create.md#start-the-creation-process) +1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#select-resource-roles) +1. [Add a direct assignment policy](entitlement-management-access-package-request-policy.md#none-administrator-direct-assignments-only) +1. Add a task to [Request user access package assignment](lifecycle-workflow-tasks.md#request-user-access-package-assignment) to a workflow when a user joins +1. Add a task to [Remove access package assignment for user](lifecycle-workflow-tasks.md#remove-access-package-assignment-for-user) to a workflow when a user leaves + ### Access package 1. [Create a new access package](entitlement-management-access-package-create.md#start-the-creation-process) There are several ways that you can configure entitlement management for your or ## Day-to-day management -### Administrator: View the connected organziations that are proposed and configured +### Administrator: View the connected organizations that are proposed and configured 1. [View the list of connected organizations](entitlement-management-organization.md) |
active-directory | Pim Powershell Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-powershell-migration.md | |
active-directory | Pim Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md | For more information about the classic subscription administrator roles, see [Az We support all Microsoft 365 roles in the Microsoft Entra roles and Administrators portal experience, such as Exchange Administrator and SharePoint Administrator, but we don't support specific roles within Exchange RBAC or SharePoint RBAC. For more information about these Microsoft 365 services, see [Microsoft 365 admin roles](/office365/admin/add-users/about-admin-roles). > [!NOTE]-> - Eligible users for the SharePoint administrator role, the Device administrator role, and any roles trying to access the Microsoft Security & Compliance Center might experience delays of up to a few hours after activating their role. We are working with those teams to fix the issues. -> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Microsoft Entra joined devices](../devices/assign-local-admin.md#manage-the-azure-ad-joined-device-local-administrator-role). +> For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Microsoft Entra joined devices](../devices/assign-local-admin.md#manage-the-azure-ad-joined-device-local-administrator-role). ## Next steps |
active-directory | Concept Usage Insights Report | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md | Viewing the AD FS application activity using Microsoft Graph retrieves a list of Add the following query, then select the **Run query** button. ```http- GET https://graph.microsoft.com/beta/reports/getRelyingPartyDetailedSummary + GET https://graph.microsoft.com/beta/reports/getRelyingPartyDetailedSummary(period='{period}') ``` For more information, see [AD FS application activity in Microsoft Graph](/graph/api/resources/relyingpartydetailedsummary?view=graph-rest-beta&preserve-view=true). |
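The same call works from PowerShell once a period value is supplied. A minimal sketch assuming the `Reports.Read.All` permission; `D7` (the trailing seven days) is an assumed-valid period value, and the property names follow the beta resource, so treat this as illustrative:

```powershell
# Summarize AD FS relying-party activity for the last seven days.
Connect-MgGraph -Scopes "Reports.Read.All"

$report = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/reports/getRelyingPartyDetailedSummary(period='D7')"

foreach ($app in $report.value) {
    "{0}: {1} successful / {2} failed sign-ins" -f `
        $app.relyingPartyName, $app.successfulSignInCount, $app.failedSignInCount
}
```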
active-directory | Reference Sla Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-sla-performance.md | + + Title: Microsoft Entra SLA performance +description: Learn about the Microsoft Entra service level performance and attainment +++++++ Last updated : 09/27/2023++++++# Microsoft Entra SLA performance ++As an identity admin, you may need to track the Microsoft Entra service-level agreement (SLA) performance to make sure Microsoft Entra ID can support your vital apps. This article shows how the Microsoft Entra service has performed according to the [SLA for Microsoft Entra ID](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/). ++You can use this article in discussions with app or business owners to help them understand the performance they can expect from Microsoft Entra ID. ++## Service availability commitment ++Microsoft offers Premium Microsoft Entra customers the opportunity to get a service credit if Microsoft Entra ID fails to meet the documented SLA. When you request a service credit, Microsoft evaluates the SLA for your specific tenant; however, this global SLA can give you an indication of the general health of Microsoft Entra ID over time. ++The SLA covers the following scenarios that are vital to businesses: ++- **User authentication:** Users are able to sign in to the Microsoft Entra service. ++- **App access:** Microsoft Entra ID successfully emits the authentication and authorization tokens required for users to sign in to applications connected to the service. ++For full details on SLA coverage and instructions on requesting a service credit, see the [SLA for Microsoft Entra ID](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/). +++## No planned downtime ++You rely on Microsoft Entra ID to provide identity and access management for your vital systems. To ensure Microsoft Entra ID is available when business operations require it, Microsoft doesn't plan downtime for Microsoft Entra system maintenance. Instead, maintenance is performed as the service runs, without customer impact. ++## Recent worldwide SLA performance ++To help you plan for moving workloads to Microsoft Entra ID, we publish past SLA performance. These numbers show the level at which Microsoft Entra ID met the requirements in the [SLA for Microsoft Entra ID](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/), for all tenants. ++The SLA attainment is truncated at three places after the decimal. Numbers aren't rounded up, so actual SLA attainment is higher than indicated. ++| Month | 2021 | 2022 | 2023 | +| | | | | +| January | | 99.998% | 99.998% | +| February | 99.999% | 99.999% | 99.999% | +| March | 99.568% | 99.998% | 99.999% | +| April | 99.999% | 99.999% | 99.999% | +| May | 99.999% | 99.999% | 99.999% | +| June | 99.999% | 99.999% | 99.999% | +| July | 99.999% | 99.999% | 99.999% | +| August | 99.999% | 99.999% | 99.999% | +| September | 99.999% | 99.998% | | +| October | 99.999% | 99.999% | | +| November | 99.998% | 99.999% | | +| December | 99.978% | 99.999% | | ++<a name='how-is-azure-ad-sla-measured-'></a> ++### How is Microsoft Entra SLA measured? ++The Microsoft Entra SLA is measured in a way that reflects customer authentication experience, rather than simply reporting on whether the system is available to outside connections. 
This distinction means that the calculation is based on whether: ++- Users can authenticate +- Microsoft Entra ID successfully issues tokens for target apps after authentication + +The numbers in the table are a global total of Microsoft Entra authentications across all customers and geographies. + +## Incident history ++All incidents that seriously impact Microsoft Entra performance are documented in the [Azure status history](https://azure.status.microsoft/status/history/). Not all events documented in Azure status history are serious enough to cause Microsoft Entra ID to go below its SLA. You can view information about the impact of incidents, and a root cause analysis of what caused the incident and what steps Microsoft took to prevent future incidents. ++## Tenant-level SLA (preview) ++In addition to providing global SLA performance, Microsoft Entra ID now provides tenant-level SLA performance. This feature is currently in preview. ++To access your tenant-level SLA performance: ++1. Navigate to the [Microsoft Entra admin center](https://entra.microsoft.com) using the Reports Reader role (or higher). +1. Browse to **Identity** > **Monitoring & health** > **Scenario Health** from the side menu. +1. Select the **SLA Monitoring** tab. +1. Hover over the graph to see the SLA performance for that month. ++![Screenshot of the tenant-level SLA results.](media/reference-azure-ad-sla-performance/tenent-level-sla.png) ++## Next steps ++* [Microsoft Entra monitoring and health overview](overview-monitoring-health.md) +* [Programmatic access to Microsoft Entra reports](./howto-configure-prerequisites-for-reporting-api.md) +* [Microsoft Entra ID risk detections](../identity-protection/overview-identity-protection.md) |
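To translate the attainment figures in the table into something concrete, you can convert a monthly percentage into a downtime budget. A minimal sketch, assuming a 30-day month for round numbers:

```powershell
# Convert a monthly SLA attainment figure into an allowed-downtime budget.
$attainment      = 0.99999            # 99.999%, typical of the months shown above
$minutesPerMonth = 30 * 24 * 60       # 43,200 minutes in a 30-day month

$budgetMinutes = (1 - $attainment) * $minutesPerMonth
"{0:N2} minutes (~{1:N0} seconds) of downtime per 30-day month" -f `
    $budgetMinutes, ($budgetMinutes * 60)
```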
active-directory | Govwin Iq Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/govwin-iq-tutorial.md | + + Title: Microsoft Entra SSO integration with GovWin IQ +description: Learn how to configure single sign-on between Microsoft Entra ID and GovWin IQ. ++++++++ Last updated : 09/27/2023+++++# Microsoft Entra SSO integration with GovWin IQ ++In this tutorial, you'll learn how to integrate GovWin IQ with Microsoft Entra ID. GovWin IQ by Deltek is the industry-leading platform providing the most comprehensive market intelligence for U.S. federal, state and local, and Canadian governments. When you integrate GovWin IQ with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to GovWin IQ. +* Enable your users to be automatically signed-in to GovWin IQ with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with GovWin IQ, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* An active GovWin IQ Subscription. Single sign-on can be enabled at no cost. Make sure your Customer Success Manager has enabled a user at your organization as a SAML SSO Admin to perform the following steps. +* All users must have the same email address in Azure as provisioned in GovWin IQ. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* GovWin IQ supports only **SP** initiated SSO. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Adding GovWin IQ from the gallery ++To configure the integration of GovWin IQ into Microsoft Entra ID, you need to add GovWin IQ from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **GovWin IQ** in the search box. +1. Select **GovWin IQ** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for GovWin IQ ++Configure and test Microsoft Entra SSO with GovWin IQ using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in GovWin IQ. ++To configure and test Microsoft Entra SSO with GovWin IQ, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure GovWin IQ SSO](#configure-govwin-iq-sso)** - to configure the single sign-on settings on application side. 
+ 1. **[Assign GovWin IQ test user to SSO](#assign-govwin-iq-test-user-to-sso)** - to have a counterpart of B.Simon in GovWin IQ that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **GovWin IQ** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type the URL: + `https://iq.govwin.com/cas` ++ b. In the **Reply URL** textbox, enter the value from the GovWin IQ Reply URL field. + + Reply URL will be of the following pattern: + `https://iq.govwin.com/cas/login?client_name=ORG_<ID>` ++ c. In the **Sign on URL** textbox, enter the value from the GovWin IQ Sign On URL field. ++ Sign on URL will be of the following pattern: + `https://iq.govwin.com/cas/clientredirect?client_name=ORG_<ID>` ++ > [!NOTE] + > Update these values with the actual Reply URL and Sign on URL found in the GovWin SAML Single Sign-On Configuration page, accessible by your designated SAML SSO Admin. Reach out to your [Customer Success Manager](mailto:CustomerSuccess@iq.govwin.com) for assistance. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy **App Federation Metadata Url** and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable a test user to use Microsoft Entra single sign-on by granting access to GovWin IQ. ++ > [!Note] + > The user selected for testing must have an existing active GovWin IQ account. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **GovWin IQ**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select a test user from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure GovWin IQ SSO ++1. Log in to the GovWin IQ company site as the SAML SSO Admin user. ++1. 
Navigate to [**SAML Single Sign-On Configuration** page](https://iq.govwin.com/neo/authenticationConfiguration/viewSamlSSOConfig) and perform the following steps: ++ ![Screenshot shows settings of the configuration.](./media/govwin-iq-tutorial/settings.png "Account") ++ 1. Select **Azure** from the Identity Provider (IdP) dropdown. + 1. Copy **Identifier (EntityID)** value, paste this value into the **Identifier** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center. + 1. Copy **Reply URL** value, paste this value into the **Reply URL** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center. + 1. Copy **Sign On URL** value, paste this value into the **Sign on URL** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center. ++1. In the **Metadata URL** textbox, paste the **App Federation Metadata Url**, which you have copied from the Microsoft Entra admin center. ++ ![Screenshot shows metadata of the configuration.](./media/govwin-iq-tutorial/values.png "Folder") ++1. Click **Submit IDP Metadata**. ++### Assign GovWin IQ test user to SSO ++1. In the [**SAML Single Sign-On Configuration** page](https://iq.govwin.com/neo/authenticationConfiguration/viewSamlSSOConfig), navigate to **Excluded Users** tab and click **Select Users to Exclude from SSO**. ++ ![Screenshot shows how to exclude users from the page.](./media/govwin-iq-tutorial/data.png "Users") ++ > [!Note] + > Here you can select users to include or exclude from SSO. If you have a webservices subscription, the webservices user should always be excluded from SSO. ++1. Next, click **Exclude All Users from SSO** for testing purposes. This is to prevent any impact to existing access for users while testing SSO. ++1. Next, select the test user and click **Add Selected Users to SSO**. ++1. Once testing is successful, add the rest of the users you want to enable for SSO. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. ++> [!Note] +> It may take up to 10 minutes for the configuration to sync. ++* Click on **Test this application** in Microsoft Entra admin center. This will redirect to GovWin IQ Sign-on URL where you can initiate the login flow. ++* Go to GovWin IQ Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the GovWin IQ tile in the My Apps, this will redirect to GovWin IQ Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next Steps ++Add all remaining users to the Microsoft Entra ID GovWin IQ app to enable SSO access. Once you configure GovWin IQ you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
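Before pasting the **App Federation Metadata Url** into GovWin IQ, you can confirm it resolves and returns SAML metadata. A minimal sketch; the tenant and application IDs are placeholders, and the URL shape is the standard federation metadata pattern rather than anything GovWin-specific:

```powershell
# Fetch and sanity-check the App Federation Metadata document.
$tenantId = "00000000-0000-0000-0000-000000000000"   # placeholder tenant ID
$appId    = "11111111-2222-3333-4444-555555555555"   # placeholder application ID

$metadataUrl = "https://login.microsoftonline.com/$tenantId/federationmetadata/2007-06/federationmetadata.xml?appid=$appId"

# Parse the SAML metadata and show the issuer to confirm the URL is correct.
[xml]$metadata = (Invoke-WebRequest -Uri $metadataUrl).Content
$metadata.EntityDescriptor.entityID
```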
active-directory | The People Experience Hub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/the-people-experience-hub-tutorial.md | + + Title: Microsoft Entra SSO integration with The People Experience Hub +description: Learn how to configure single sign-on between Microsoft Entra ID and The People Experience Hub. ++++++++ Last updated : 09/22/2023+++++# Microsoft Entra SSO integration with The People Experience Hub ++In this tutorial, you'll learn how to integrate The People Experience Hub with Microsoft Entra ID. When you integrate The People Experience Hub with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to The People Experience Hub. +* Enable your users to be automatically signed-in to The People Experience Hub with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with The People Experience Hub, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* The People Experience Hub single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* The People Experience Hub supports **SP and IDP** initiated SSO. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Adding The People Experience Hub from the gallery ++To configure the integration of The People Experience Hub into Microsoft Entra ID, you need to add The People Experience Hub from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **The People Experience Hub** in the search box. +1. Select **The People Experience Hub** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for The People Experience Hub ++Configure and test Microsoft Entra SSO with The People Experience Hub using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in The People Experience Hub. ++To configure and test Microsoft Entra SSO with The People Experience Hub, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. 
**[Configure The People Experience Hub SSO](#configure-the-people-experience-hub-sso)** - to configure the single sign-on settings on application side. + 1. **[Create The People Experience Hub test user](#create-the-people-experience-hub-test-user)** - to have a counterpart of B.Simon in The People Experience Hub that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **The People Experience Hub** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type the URL: + `https://app.pxhub.io` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://auth.api.pxhub.io/v1/auth/saml/<COMPANY_ID>/assert` ++1. Perform the following step, if you wish to configure the application in **SP** initiated mode: ++ In the **Sign on URL** textbox, type a URL using the following pattern: + `https://auth.api.pxhub.io/v1/auth/saml/<COMPANY_ID>/login` ++ > [!NOTE] + > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [The People Experience Hub support team](mailto:it@pxhub.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link](common/certificatebase64.png "Certificate") ++1. On the **Set up The People Experience Hub** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. 
++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to The People Experience Hub. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **The People Experience Hub**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure The People Experience Hub SSO ++1. Log in to The People Experience Hub company site as an administrator. ++1. Go to **Admin Settings** > **Integrations** > **Single Sign-On** and click **Manage**. ++ ![Screenshot shows settings of the configuration.](./media/the-people-experience-hub-tutorial/settings.png "Account") ++1. In the **SAML 2.0 Single sign-on** page, perform the following steps: ++ ![Screenshot shows configuration of the page.](./media/the-people-experience-hub-tutorial/values.png "Page") ++ 1. Toggle on **Enable SAML 2.0 Single sign-on**. ++ 1. Copy **EntityID** value, paste this value into the **Identifier** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center. ++ 1. Copy **Login URL** value, paste this value into the **Sign on URL** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center. ++ 1. Copy **Reply URL** value, paste this value into the **Reply URL** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center. ++ 1. In the **SSO Login URL** textbox, paste the **Login URL** value, which you copied from the Microsoft Entra admin center. ++ 1. Open the downloaded **Certificate (Base64)** in Notepad and paste the content into the **X509 Certificate** textbox. ++ 1. Click **Save Configuration**. ++### Create The People Experience Hub test user ++1. In a different web browser window, sign in to The People Experience Hub website as an administrator. ++1. Navigate to **Admin Settings** > **Users** and click **Create**. + + ![Screenshot shows how to create users in application.](./media/the-people-experience-hub-tutorial/create.png "Users") ++1. In the **Create a new admin users** section, perform the following steps: ++ ![Screenshot shows how to create new users in the page.](./media/the-people-experience-hub-tutorial/details.png "Creating Users") ++ 1. In the **Email** textbox, enter a valid email address of the user. ++ 1. In the **First Name** textbox, enter the first name of the user. ++ 1. In the **Last Name** textbox, enter the last name of the user. ++ 1. Click **Create User**. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. ++#### SP initiated: ++* Click on **Test this application** in Microsoft Entra admin center. This will redirect to The People Experience Hub Sign-on URL where you can initiate the login flow. 
++* Go to The People Experience Hub Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Microsoft Entra admin center and you should be automatically signed in to The People Experience Hub for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click The People Experience Hub tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to The People Experience Hub for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next Steps ++Once you configure The People Experience Hub you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
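Before pasting the downloaded **Certificate (Base64)** content into the **X509 Certificate** textbox, it can help to confirm you grabbed the right file. A minimal sketch that inspects the certificate locally; the file path is a placeholder:

```powershell
# Inspect the downloaded signing certificate before pasting it into the app.
$certPath = "C:\temp\PeopleExperienceHub.cer"   # placeholder path

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 `
    -ArgumentList $certPath

"Subject:    {0}" -f $cert.Subject
"Thumbprint: {0}" -f $cert.Thumbprint
"Expires:    {0}" -f $cert.NotAfter
```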
advisor | Advisor Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-get-started.md | Title: Get started with Azure Advisor description: Get started with Azure Advisor.+++ Previously updated : 02/01/2019 Last updated : 09/15/2023 # Get started with Azure Advisor -Learn how to access Advisor through the Azure portal, get recommendations, and implement recommendations. +Learn how to access Advisor through the Azure portal, get and manage recommendations, and configure Advisor settings. > [!NOTE]-> Azure Advisor automatically runs in the background to find newly created resources. It can take up to 24 hours to provide recommendations on those resources. +> Azure Advisor runs in the background to find newly created resources. It can take up to 24 hours to provide recommendations on those resources. -## Get recommendations --1. Sign in to the [Azure portal](https://portal.azure.com). --1. In the left pane, click **Advisor**. If you do not see Advisor in the left pane, click **All services**. In the service menu pane, under **Monitoring and Management**, click **Advisor**. The Advisor dashboard is displayed. -- ![Access Azure Advisor using the Azure portal](./media/advisor-get-started/advisor-portal-menu.png) --1. The Advisor dashboard will display a summary of your recommendations for all selected subscriptions. You can choose the subscriptions that you want recommendations to be displayed for using the subscription filter dropdown. --1. To get recommendations for a specific category, click one of the tabs: **Cost**, **Security**, **Reliability**, **Operational Excellence**, or **Performance**. -- ![Azure Advisor dashboard](./media/advisor-overview/advisor-dashboard.png) ## Open Advisor -## Get recommendation details and implement a solution +To access Azure Advisor, sign in to the [Azure portal](https://portal.azure.com) and open [Advisor](https://aka.ms/azureadvisordashboard). The Advisor score page opens by default. -You can select a recommendation in Advisor to view additional details – such as the recommendation actions and impacted resources – and to implement the solution to the recommendation. +You can also use the search bar at the top, or the left navigation pane, to find Advisor. -1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard). +## Read your score -1. Select a recommendation category to display the list of recommendations within that category, or select the **All** tab to view all your recommendations. +See how your system configuration measures against Azure best practices. -1. Click a recommendation that you want to review in detail. -1. Review the information about the recommendation and the resources that the recommendation applies to. +* The far-left graphic is your overall system Advisor score against Azure best practices. The **Learn more** link opens the [Optimize Azure workloads by using Advisor score](azure-advisor-score.md) page. -1. Click on the **Recommended Action** to implement the recommendation. +* The middle graphic depicts the trend of your system Advisor score history. Roll over the graphic to activate a slider to see your trend at different points of time. Use the drop-down menu to pick a trend time frame. -## Filter recommendations +* The far-right graphic shows a breakdown of your best practices Advisor score per category. Click a category bar to open the recommendations page for that category. 
-You can filter recommendations to drill down to what is most important to you. You can filter by subscription, resource type, or recommendation status. --1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard). --1. Use the dropdowns on the Advisor dashboard to filter by subscription, resource type, or recommendation status. -- ![Advisor search-filter criteria](./media/advisor-get-started/advisor-filters.png) --## Postpone or dismiss recommendations --1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard). +## Get recommendations -1. Navigate to the recommendation you want to postpone or dismiss. +To display a specific list of recommendations, click a category tile. -1. Click the recommendation. +The tiles on the Advisor score page show the different categories of recommendations per subscription: -1. Click **Postpone**. +* To get recommendations for a specific category, click one of the tiles. To open a list of all recommendations for all categories, click the **All recommendations** tile. By default, the **Cost** tile is selected. -1. Specify a postpone time period, or select **Never** to dismiss the recommendation. +* You can filter the display using the buttons at the top of the page: + * **Subscription**: Choose *All* for Advisor recommendations on all subscriptions. Alternatively, select specific subscriptions. Apply changes by clicking outside of the button. + * **Recommendation Status**: *Active* (the default, recommendations that you haven't postponed or dismissed), *Postponed* or *Dismissed*. Apply changes by clicking outside of the button. + * **Resource Group**: Choose *All* (the default) or specific resource groups. Apply changes by clicking outside of the button. + * **Type**: Choose *All* (the default) or specific resources. Apply changes by clicking outside of the button. + * **Commitments**: Applicable only to cost recommendations. Adjust your subscription **Cost** recommendations to reflect your committed **Term (years)** and chosen **Look-back period (days)**. Apply changes by clicking **Apply**. + * For more advanced filtering, click **Add filter**. -## Exclude subscriptions or resource groups +* The **Commitments** button lets you adjust your subscription **Cost** recommendations to reflect your committed **Term (years)** and chosen **Look-back period (days)**. -You may have resource groups or subscriptions for which you do not want to receive Advisor recommendations – such as 'test' resources. You can configure Advisor to only generate recommendations for specific subscriptions and resource groups. +## Get recommendation details and solution options -> [!NOTE] -> To include or exclude a subscription or resource group from Advisor, you must be a subscription Owner. If you do not have the required permissions for a subscription or resource group, the option to include or exclude it is disabled in the user interface. +View recommendation details – such as the recommended actions and impacted resources – and the solution options, including postponing or dismissing a recommendation. -1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard). +1. To review details of a recommendation, including the affected resources, open the recommendation list for a category and then click the **Description** or the **Impacted resources** link for a specific recommendation. 
The following screenshot shows a **Reliability** recommendation details page. -1. Click **Configure** in the action bar. + :::image type="content" source="./media/advisor-get-started/advisor-score-reliability-recommendation-page.png" alt-text="Screenshot of Azure Advisor reliability recommendation details example." lightbox="./media/advisor-get-started/advisor-score-reliability-recommendation-page.png"::: -1. Uncheck any subscriptions or resource groups you do not want to receive Advisor recommendations for. +1. To see action details, click a **Recommended actions** link. The Azure page where you can act opens. Alternatively, open a page to the affected resources to take the recommended action (the two pages may be the same). + + Understand the recommendation before you act by clicking the **Learn more** link on the recommended action page, or at the top of the recommendations details page. - ![Advisor configure resources example](./media/advisor-get-started/advisor-configure-resources.png) +1. You can postpone the recommendation. + + :::image type="content" source="./media/advisor-get-started/advisor-recommendation-postpone.png" alt-text="Screenshot of Azure Advisor recommendation postpone option." lightbox="./media/advisor-get-started/advisor-recommendation-postpone.png"::: -1. Click the **Apply** button. + You can't dismiss the recommendation without certain privileges. For information on permissions, see [Permissions in Azure Advisor](permissions.md). -## Configure low usage VM recommendation +## Download recommendations -This procedure configures the average CPU utilization rule for the low usage virtual machine recommendation. +To download your recommendations from the Advisor score or any recommendation details page, click **Download as CSV** or **Download as PDF** on the action bar at the top. The download option respects any filters you have applied to Advisor. If you select the download option while viewing a specific recommendation category or recommendation, the downloaded summary only includes information for that category or recommendation. -Advisor monitors your virtual machine usage for 7 days by default and then identifies low-utilization virtual machines. -Virtual machines are considered low-utilization if their CPU utilization is 5% or less and their network utilization is less than 2% or if the current workload can be accommodated by a smaller virtual machine size. +## Configure recommendations -If you would like to be more aggressive at identifying low usage virtual machines, you can adjust the average CPU utilization rule and the look back period on a per subscription basis. -The CPU utilization rule can be set to 5%, 10%, 15%, 20%, or 100%(Default). In case the trigger is selected as 100%, it will present recommendations for virtual machines with less than 5%, 10%, 15%, and 20% of CPU utilization. -You can select how far back in historical data you want to analyze: 7 days (default), 14, 21, 30, 60, or 90 days. +You can exclude subscriptions or resources, such as 'test' resources, from Advisor recommendations and configure Advisor to generate recommendations only for specific subscriptions and resource groups. > [!NOTE]-> To adjust the average CPU utilization rule for identifying low usage virtual machines, you must be a subscription *Owner*. If you do not have the required permissions for a subscription or resource group, the option to include or exclude it will be disabled in the user interface. --1. 
Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard). --1. Click **Configure** in the action bar. --1. Click the **Rules** tab. --1. Select the subscriptions you'd like to adjust the average CPU utilization rule for, and then click **Edit**. --1. Select the desired average CPU utilization value, and click **Apply**. --1. Click **Refresh recommendations** to update your existing recommendations to use the new average CPU utilization rule. +> To change subscriptions or Advisor compute rules, you must be a subscription Owner. If you do not have the required permissions, the option is disabled in the user interface. For information on permissions, see [Permissions in Azure Advisor](permissions.md). For details on right sizing VMs, see [Reduce service costs by using Azure Advisor](advisor-cost-recommendations.md). - ![Advisor configure recommendation rules example](./media/advisor-get-started/advisor-configure-rules.png) +From any Azure Advisor page, click **Configuration** in the left navigation pane. The Advisor Configuration page opens with the **Resources** tab selected, by default. -## Download recommendations -Advisor enables you to download a summary of your recommendations. You can download your recommendations as a PDF file or a CSV file. Downloading your recommendations enables you to easily share with your colleagues or perform your own analysis on top of the recommendation data. +* **Resources**: Uncheck any subscriptions you don't want to receive Advisor recommendations for, click **Apply**. The page refreshes. -1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard). +* **VM/VMSS right sizing**: You can adjust the average CPU utilization rule and the look back period on a per-subscription basis. Doing virtual machine (VM) right sizing requires specialized knowledge. -1. Click **Download as CSV** or **Download as PDF** on the action bar. + 1. Select the subscriptions you'd like to adjust the average CPU utilization rule for, and then click **Edit**. Not all subscriptions can be edited for VM/VMSS right sizing and certain privileges are required; for more information on permissions, see [Permissions in Azure Advisor](permissions.md). + + 1. Select the desired average CPU utilization value and click **Apply**. It can take up to 24 hours for the new settings to be reflected in recommendations. -The download option respects any filters you have applied to the Advisor dashboard. If you select the download option while viewing a specific recommendation category or recommendation, the downloaded summary only includes information for that category or recommendation. + :::image type="content" source="./media/advisor-get-started/advisor-configure-rules.png" alt-text="Screenshot of Azure Advisor configuration option for VM/VMSS sizing rules." 
lightbox="./media/advisor-get-started/advisor-configure-rules.png"::: ## Next steps To learn more about Advisor, see: - [Introduction to Azure Advisor](advisor-overview.md)-- [Advisor Reliability recommendations](advisor-high-availability-recommendations.md)-- [Advisor Security recommendations](advisor-security-recommendations.md)-- [Advisor Performance recommendations](advisor-performance-recommendations.md) - [Advisor Cost recommendations](advisor-cost-recommendations.md)+- [Advisor Security recommendations](advisor-security-recommendations.md) +- [Advisor Reliability recommendations](advisor-high-availability-recommendations.md) - [Advisor Operational Excellence recommendations](advisor-operational-excellence-recommendations.md)+- [Advisor Performance recommendations](advisor-performance-recommendations.md) |
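The Advisor portal steps above also have Azure CLI equivalents. The following is a minimal sketch, assuming you're signed in with `az login`; the category value and threshold shown are examples of the documented options:

```azurecli-interactive
# List active Advisor recommendations for the current subscription
az advisor recommendation list --output table

# Scope the list to a single category, for example Cost
az advisor recommendation list --category Cost --output table

# Set the average CPU threshold used by the low-usage VM rule (5, 10, 15, or 20)
az advisor configuration update --low-cpu-threshold 5
```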
ai-services | Call Analyze Image 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-analyze-image-40.md | |
ai-services | Image Analysis Client Library 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40.md | |
ai-services | Install Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/install-sdk.md | |
ai-services | Overview Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/overview-sdk.md | Before you create a new issue: ## Next steps - [Install the SDK](./install-sdk.md)-- [Try the Image Analysis Quickstart](../quickstarts-sdk/image-analysis-client-library-40.md)+- [Try the Image Analysis Quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) |
ai-services | Multi Service Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md | keywords: Azure AI services, cognitive -+ Last updated 08/02/2023 The multi-service resource enables access to the following Azure AI services wit ## Next steps -* Now that you have a resource, you can authenticate your API requests to one of the [supported Azure AI services](#supported-services-with-a-multi-service-resource). +* Now that you have a resource, you can authenticate your API requests to one of the [supported Azure AI services](#supported-services-with-a-multi-service-resource). |
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | The `gpt-4` model supports 8192 max input tokens and the `gpt-4-32k` model suppo ## GPT-3.5 -GPT-3.5 models can understand and generate natural language or code. The most capable and cost effective model in the GPT-3.5 family is GPT-3.5 Turbo, which has been optimized for chat and works well for traditional completions tasks as well. We recommend using GPT-3.5 Turbo over [legacy GPT-3.5 and GPT-3 models](./legacy-models.md). +GPT-3.5 models can understand and generate natural language or code. The most capable and cost effective model in the GPT-3.5 family is GPT-3.5 Turbo, which has been optimized for chat and works well for traditional completions tasks as well. GPT-3.5 Turbo is available for use with the Chat Completions API. GPT-3.5 Turbo Instruct has similar capabilities to `text-davinci-003` using the Completions API instead of the Chat Completions API. We recommend using GPT-3.5 Turbo and GPT-3.5 Turbo Instruct over [legacy GPT-3.5 and GPT-3 models](./legacy-models.md). - `gpt-35-turbo` - `gpt-35-turbo-16k`+- `gpt-35-turbo-instruct` -The `gpt-35-turbo` model supports 4096 max input tokens and the `gpt-35-turbo-16k` model supports up to 16,384 tokens. +The `gpt-35-turbo` model supports 4096 max input tokens and the `gpt-35-turbo-16k` model supports up to 16,384 tokens. `gpt-35-turbo-instruct` supports 4097 max input tokens. -Like GPT-4, use the Chat Completions API to use GPT-3.5 Turbo. To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md). +To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md). ## Embeddings models GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als | `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 | | `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 4,096 | Sep 2021 | | `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 16,384 | Sep 2021 |+| `gpt-35-turbo-instruct` (0914) | East US, Sweden Central | N/A | 4,097 | Sep 2021 | <sup>1</sup> Version `0301` of gpt-35-turbo will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior. |
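Because `gpt-35-turbo-instruct` is used with the Completions API rather than the Chat Completions API, a call takes the classic completions shape. The following is a hedged REST sketch; the resource name, deployment name, and key variable are placeholders for your own values:

```bash
# Completions request against a hypothetical gpt-35-turbo-instruct deployment
curl "https://RESOURCE.openai.azure.com/openai/deployments/DEPLOYMENT/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"prompt": "Write a tagline for an ice cream shop.", "max_tokens": 32}'
```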
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | While Power Virtual Agents has features that leverage Azure OpenAI such as [gene > [!NOTE] > Deploying to Power Virtual Agents from Azure OpenAI is only available to US regions.+> Power Virtual Agents supports Azure Cognitive Search indexes with keyword or semantic search only. Other data sources and advanced features may not be supported. #### Using the web app |
ai-services | Create Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/create-resource.md | description: Learn how to get started with Azure OpenAI Service and create your -+ Last updated 08/25/2023 zone_pivot_groups: openai-create-resource |
ai-services | Use Your Data Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md | description: Use this article to import and use your data in Azure OpenAI. + |
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | keywords: ## September 2023 +### GPT-3.5 Turbo Instruct ++Azure OpenAI Service now supports the GPT-3.5 Turbo Instruct model. This model has performance comparable to `text-davinci-003` and is available to use with the Completions API. Check the [models page](concepts/models.md), for the latest information on model availability in each region. + ### Whisper public preview Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whisper model. Get AI-generated text based on the speech audio you provide. To learn more, check out the [quickstart](./whisper-quickstart.md). |
aks | Access Private Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/access-private-cluster.md | Title: Access a private Azure Kubernetes Service (AKS) cluster description: Learn how to access a private Azure Kubernetes Service (AKS) cluster using the Azure CLI or Azure portal. + Last updated 09/15/2023 |
aks | App Routing Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-migration.md | description: Learn how to migrate from the HTTP application routing feature to t -+ Last updated 08/18/2023 After migrating to the application routing add-on, learn how to [monitor ingress <!-- EXTERNAL LINKS --> [dns-pricing]: https://azure.microsoft.com/pricing/details/dns/ [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get-[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete +[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete |
aks | App Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md | Title: Azure Kubernetes Service (AKS) ingress with the application routing add-on (preview) + Title: Azure Kubernetes Service (AKS) managed nginx ingress with the application routing add-on (preview) description: Use the application routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS). Last updated 08/07/2023 -# Azure Kubernetes Service (AKS) ingress with the application routing add-on (preview) +# Managed nginx ingress with the application routing add-on (preview) -The application routing add-on configures an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. It can optionally integrate with Open Service Mesh (OSM) for end-to-end encryption of inter-cluster communication using mutual TLS (mTLS). When you deploy ingresses, the add-on creates publicly accessible DNS names for endpoints on an Azure DNS zone. +The application routing add-on configures an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. When you deploy ingresses, the add-on creates publicly accessible DNS names for endpoints on an Azure DNS zone. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] -## Application routing add-on overview +## Application routing add-on with nginx overview The application routing add-on deploys the following components: |
aks | Dapr Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-migration.md | Title: Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS) -description: Learn how to migrate your managed clusters from Dapr OSS to the Dapr extension for AKS +description: Learn how to migrate your managed clusters from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS). Previously updated : 11/21/2022 Last updated : 09/26/2023 # Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS) -You've installed and configured Dapr OSS (using Dapr CLI or Helm) on your Kubernetes cluster, and want to start using the Dapr extension on AKS. In this guide, you'll learn how the Dapr extension for AKS can use the Kubernetes resources created by Dapr OSS and start managing them, by either: +This article shows you how to migrate from Dapr OSS to the Dapr extension for AKS. -- Checking for an existing Dapr installation via Azure CLI prompts (default method), or-- Using the release name and namespace from `--configuration-settings` to explicitly point to an existing Dapr installation.+You can configure the Dapr extension to use and manage the Kubernetes resources created by Dapr OSS by [checking for an existing Dapr installation using the Azure CLI](#check-for-an-existing-dapr-installation) (*default method*) or [configuring the existing Dapr installation using `--configuration-settings`](#configure-the-existing-dapr-installation-usingconfiguration-settings). ++For more information, see [Dapr extension for AKS][dapr-extension-aks]. ## Check for an existing Dapr installation -The Dapr extension, by default, checks for existing Dapr installations when you run the `az k8s-extension create` command. To list the details of your current Dapr installation, run the following command and save the Dapr release name and namespace: +When you [create the Dapr extension](./dapr.md), the extension checks for an existing Dapr installation on your cluster. If Dapr exists, the extension uses and manages the Kubernetes resources created by Dapr OSS. -```bash -helm list -A -``` +1. List the details of your current Dapr installation using the `helm list -A` command and save the Dapr release name and namespace from the output. -When [installing the extension][dapr-create], you'll receive a prompt asking if Dapr is already installed: + ```azurecli-interactive + helm list -A + ``` -```bash -Is Dapr already installed in the cluster? (y/N): y -``` +2. Enter the Helm release name and namespace (from `helm list -A`) when prompted with the following questions: -If Dapr is already installed, please enter the Helm release name and namespace (from `helm list -A`) when prompted the following: + ```azurecli-interactive + Enter the Helm release name for Dapr, or press Enter to use the default name [dapr]: + Enter the namespace where Dapr is installed, or press Enter to use the default namespace [dapr-system]: + ``` -```bash -Enter the Helm release name for Dapr, or press Enter to use the default name [dapr]: -Enter the namespace where Dapr is installed, or press Enter to use the default namespace [dapr-system]: -``` +## Configure the existing Dapr installation using `--configuration-settings` -## Configuring the existing Dapr installation using `--configuration-settings` +When you [create the Dapr extension](./dapr.md), you can configure the extension to use and manage the Kubernetes resources created by Dapr OSS using the `--configuration-settings` flag. 
-Alternatively, when creating the Dapr extension, you can configure the above settings via `--configuration-settings`. This method is useful when you are automating the installation via bash scripts, CI pipelines, etc. +1. List the details of your current Dapr installation using the `helm list -A` command and save the Dapr release name and namespace from the output. -If you don't have an existing Dapr installation on your cluster, set `skipExistingDaprCheck` to `true`: + ```azurecli-interactive + helm list -A + ``` -```azurecli-interactive -az k8s-extension create --cluster-type managedClusters \ cluster-name myAKScluster \resource-group myResourceGroup \name dapr \extension-type Microsoft.Dapr \configuration-settings "skipExistingDaprCheck=true"-``` +2. Create the Dapr extension using the [`az k8s-extension create`][az-k8s-extension-create] and use the `--configuration-settings` flags to set the Dapr release name and namespace. -If Dapr exists on your cluster, set the Helm release name and namespace (from `helm list -A`) via `--configuration-settings`: --```azurecli-interactive -az k8s-extension create --cluster-type managedClusters \ cluster-name myAKScluster \resource-group myResourceGroup \name dapr \extension-type Microsoft.Dapr \configuration-settings "existingDaprReleaseName=dapr" \configuration-settings "existingDaprReleaseNamespace=dapr-system"-``` + ```azurecli-interactive + az k8s-extension create --cluster-type managedClusters \ + --cluster-name myAKSCluster \ + --resource-group myResourceGroup \ + --name dapr \ + --extension-type Microsoft.Dapr \ + --configuration-settings "existingDaprReleaseName=dapr" \ + --configuration-settings "existingDaprReleaseNamespace=dapr-system" + ``` ## Update HA mode or placement service settings -When you install the Dapr extension on top of an existing Dapr installation, you'll see the following prompt: +When installing the Dapr extension on top of an existing Dapr installation, you receive the following message: -> ```The extension will be installed on your existing Dapr installation. Note, if you have updated the default values for global.ha.* or dapr_placement.* in your existing Dapr installation, you must provide them in the configuration settings. Failing to do so will result in an error, since Helm upgrade will try to modify the StatefulSet. See <link> for more information.``` +```output +The extension will be installed on your existing Dapr installation. Note, if you have updated the default values for global.ha.* or dapr_placement.* in your existing Dapr installation, you must provide them in the configuration settings. Failing to do so will result in an error, since Helm upgrade will try to modify the StatefulSet. See <link> for more information. +``` -Kubernetes only allows for limited fields in StatefulSets to be patched, subsequently failing upgrade of the placement service if any of the mentioned settings are configured. You can follow the steps below to update those settings: +Kubernetes only allows patching for limited fields in StatefulSets. If any of the HA mode or placement service settings are configured, the upgrade fails. To update the HA mode or placement service settings, you must delete the stateful set and then update the HA mode. -1. Delete the stateful set. +1. Delete the stateful set using the `kubectl delete` command. ```azurecli-interactive kubectl delete statefulset.apps/dapr-placement-server -n dapr-system ``` -1. Update the HA mode: - +2. 
Update the HA mode using the [`az k8s-extension update`][az-k8s-extension-update] command. + ```azurecli-interactive az k8s-extension update --cluster-type managedClusters \ --cluster-name myAKSCluster \ Kubernetes only allows for limited fields in StatefulSets to be patched, subsequ --configuration-settings "global.ha.enabled=true" \ ``` -For more information, see [Dapr Production Guidelines][dapr-prod-guidelines]. -+For more information, see the [Dapr production guidelines][dapr-prod-guidelines]. ## Next steps Learn more about [Dapr][dapr-overview] and [how to use it][dapr-howto]. - <!-- LINKS INTERNAL --> [dapr-overview]: ./dapr-overview.md [dapr-howto]: ./dapr.md-[dapr-create]: ./dapr.md#create-the-extension-and-install-dapr-on-your-aks-or-arc-enabled-kubernetes-cluster +[dapr-extension-aks]: ./dapr-overview.md +[az-k8s-extension-create]: /cli/azure/k8s-extension#az-k8s-extension-create +[az-k8s-extension-update]: /cli/azure/k8s-extension#az-k8s-extension-update <!-- LINKS EXTERNAL --> [dapr-prod-guidelines]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production/#enabling-high-availability-in-an-existing-dapr-deployment |
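Whichever migration path you take, you can confirm that the extension now manages the existing installation. A minimal check, reusing the cluster and resource group names from the examples above:

```azurecli-interactive
# Show the Dapr extension instance and its configuration settings
az k8s-extension show \
  --cluster-type managedClusters \
  --cluster-name myAKSCluster \
  --resource-group myResourceGroup \
  --name dapr

# Confirm the Dapr control-plane pods run in the expected namespace
kubectl get pods -n dapr-system
```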
aks | Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/events.md | + + Title: Use Kubernetes events for troubleshooting +description: Learn about Kubernetes events, which provide details on pods, nodes, and other Kubernetes objects. +++ Last updated : 09/07/2023+++# Kubernetes events for troubleshooting ++Events are one of the most prominent sources for monitoring and troubleshooting issues in Kubernetes. They capture and record information about the lifecycle of various Kubernetes objects, such as pods, nodes, services, and deployments. By monitoring events, you can gain visibility into your cluster's activities, identify issues, and troubleshoot problems effectively. ++Kubernetes events do not persist throughout your cluster life cycle, as there is no mechanism for retention. They are short-lived, only available for one hour after the event is generated. To store events for a longer time period, enable [Container Insights][container-insights]. ++## Kubernetes event objects ++Below is a set of the important fields in a Kubernetes Event. For a comprehensive list of all fields, check the official [Kubernetes documentation][k8s-events]. ++|Field name|Significance| +|-|| +|type |Significance changes based on the severity of the event:<br/>**Warning:** these events signal potentially problematic situations, such as a pod repeatedly failing or a node running out of resources. They require attention, but might not result in immediate failure.<br/>**Normal:** These events represent routine operations, such as a pod being scheduled or a deployment scaling up. They usually indicate healthy cluster behavior.| +|reason|The reason why the event was generated. For example, *FailedScheduling* or *CrashLoopBackoff*.| +|message|A human-readable message that describes the event.| +|namespace|The namespace of the Kubernetes object that the event is associated with.| +|firstSeen|Timestamp when the event was first observed.| +|lastSeen|Timestamp of when the event was last observed.| +|reportingController|The name of the controller that reported the event. For example, `kubernetes.io/kubelet`| +|object|The name of the Kubernetes object that the event is associated with.| ++## Accessing events ++# [Azure CLI](#tab/azure-cli) ++You can find events for your cluster and its components by using `kubectl`. ++```azurecli-interactive +kubectl get events +``` ++To look at a specific pod's events, first find the name of the pod and then use `kubectl describe` to list events. ++```azurecli-interactive +kubectl get pods ++kubectl describe pods <pod-name> +``` ++# [Portal](#tab/azure-portal) ++You can browse the events for your cluster by navigating to **Events** under **Kubernetes resources** from the Azure portal overview page for your cluster. By default, all events are shown. +++You can also filter by event type: +++by reason: +++or by pods or nodes: +++These filters can be combined to scope the query to your specific needs. ++++## Best practices for troubleshooting with events ++### Filtering events for relevance ++In your AKS cluster, you might have various namespaces and services running. Filtering events based on object type, namespace, or reason can help you narrow down your focus to what's most relevant to your applications. 
For instance, you can use the following command to filter events within a specific namespace: ++```azurecli-interactive +kubectl get events -n <namespace> +``` ++### Automating event notifications ++To ensure timely response to critical events in your AKS cluster, set up automated notifications. Azure offers integration with monitoring and alerting services like [Azure Monitor][aks-azure-monitor]. You can configure alerts to trigger based on specific event patterns. This way, you're immediately informed about crucial issues that require attention. ++### Regularly reviewing events ++Make a habit of regularly reviewing events in your AKS cluster. This proactive approach can help you identify trends, catch potential problems early, and prevent escalations. By staying on top of events, you can maintain the stability and performance of your applications. ++## Next steps ++Now that you understand Kubernetes events, you can continue your monitoring and observability journey by [enabling Container Insights][container-insights]. ++<!-- LINKS --> +[aks-azure-monitor]: ./monitor-aks.md +[container-insights]: ../azure-monitor/containers/container-insights-enable-aks.md +[k8s-events]: https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/ |
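Beyond namespace filtering, `kubectl` field selectors are a standard way to scope event queries. A few examples (plain kubectl, no AKS-specific assumptions; the pod name is hypothetical):

```azurecli-interactive
# Show only Warning events across all namespaces
kubectl get events --field-selector type=Warning --all-namespaces

# Show events for a single object, such as a pod named my-pod
kubectl get events --field-selector involvedObject.name=my-pod

# Sort events by the most recent timestamp
kubectl get events --sort-by='.lastTimestamp'
```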
aks | Http Application Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md | Title: HTTP application routing add-on for Azure Kubernetes Service (AKS) -description: Use the HTTP application routing add-on to access applications deployed on Azure Kubernetes Service (AKS). + Title: HTTP application routing add-on for Azure Kubernetes Service (AKS) (retired) +description: Use the HTTP application routing add-on to access applications deployed on Azure Kubernetes Service (AKS) (retired). Last updated 04/05/2023 -# HTTP application routing add-on for Azure Kubernetes Service (AKS) +# HTTP application routing add-on for Azure Kubernetes Service (AKS) (retired) > [!CAUTION]-> The HTTP application routing add-on is in the process of being retired and isn't recommended for production use. We recommend migrating to the [Application Routing add-on](./app-routing-migration.md) instead. +> HTTP application routing add-on (preview) for Azure Kubernetes Service (AKS) will be [retired](https://azure.microsoft.com/updates/retirement-http-application-routing-addon-preview-for-aks-will-retire-03032025) on 03 March 2025. We recommend migrating to the [Application Routing add-on](./app-routing-migration.md) by that date. The HTTP application routing add-on makes it easy to access applications that are deployed to your Azure Kubernetes Service (AKS) cluster by: |
aks | Integrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md | AKS uses the following rules for applying updates to installed add-ons: | Name | Description | More details | ||||-| http_application_routing | Configure ingress with automatic public DNS name creation for your AKS cluster. | [HTTP application routing add-on on Azure Kubernetes Service (AKS)][http-app-routing] | +| web_application_routing | Use a managed NGINX ingress controller with your AKS cluster.| [Application Routing Overview][app-routing] | +| ingress-appgw | Use Application Gateway Ingress Controller with your AKS cluster. | [What is Application Gateway Ingress Controller?][agic] | +| keda | Use event-driven autoscaling for the applications on your AKS cluster. | [Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on][keda]| | monitoring | Use Container Insights monitoring with your AKS cluster. | [Container insights overview][container-insights] |-| virtual-node | Use virtual nodes with your AKS cluster. | [Use virtual nodes][virtual-nodes] | | azure-policy | Use Azure Policy for AKS, which enables at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. | [Understand Azure Policy for Kubernetes clusters][azure-policy-aks] |-| ingress-appgw | Use Application Gateway Ingress Controller with your AKS cluster. | [What is Application Gateway Ingress Controller?][agic] | -| open-service-mesh | Use Open Service Mesh with your AKS cluster. | [Open Service Mesh AKS add-on][osm] | | azure-keyvault-secrets-provider | Use Azure Keyvault Secrets Provider addon.| [Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster][keyvault-secret-provider] |-| web_application_routing | Use a managed NGINX ingress controller with your AKS cluster.| [Application Routing Overview][app-routing] | -| keda | Use event-driven autoscaling for the applications on your AKS cluster. | [Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on][keda]| +| virtual-node | Use virtual nodes with your AKS cluster. | [Use virtual nodes][virtual-nodes] | +| http_application_routing | Configure ingress with automatic public DNS name creation for your AKS cluster (retired). | [HTTP application routing add-on on Azure Kubernetes Service (AKS) (retired)][http-app-routing] | +| open-service-mesh | Use Open Service Mesh with your AKS cluster (retired). | [Open Service Mesh AKS add-on (retired)][osm] | ## Extensions For more details, see [Windows AKS partner solutions][windows-aks-partner-soluti [spark-kubernetes]: https://spark.apache.org/docs/latest/running-on-kubernetes.html [managed-grafana]: ../managed-grafan [keda]: keda-about.md-[web-app-routing]: web-app-routing.md +[app-routing]: app-routing.md [maintenance-windows]: planned-maintenance.md [release-tracker]: release-tracker.md [github-actions]: /azure/developer/github/github-actions [github-actions-aks]: kubernetes-action.md [az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons-[windows-aks-partner-solutions]: windows-aks-partner-solutions.md +[windows-aks-partner-solutions]: windows-aks-partner-solutions.md |
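Most add-ons in the table above are toggled with the same CLI verb. As a sketch (the cluster and resource group names are placeholders), enabling the monitoring add-on on an existing cluster looks like this:

```azurecli-interactive
# Enable the Container Insights (monitoring) add-on on an existing AKS cluster
az aks enable-addons \
  --addons monitoring \
  --name myAKSCluster \
  --resource-group myResourceGroup
```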
aks | Keda Deploy Add On Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md | Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using Azure CLI -description: Use Azure CLI to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS). + Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using the Azure CLI +description: Use the Azure CLI to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS). Previously updated : 10/10/2022 Last updated : 09/26/2023 -# Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using Azure CLI +# Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using the Azure CLI -This article shows you how to install the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) by using Azure CLI. The article includes steps to verify that it's installed and running. +This article shows you how to install the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) using the Azure CLI. [!INCLUDE [Current version callout](./includes/ked)] -## Prerequisites +## Before you begin -- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).-- [Azure CLI installed](/cli/azure/install-azure-cli).-- Firewall rules are configured to allow access to the Kubernetes API server. ([learn more][aks-firewall-requirements])+- You need an Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). +- You need the [Azure CLI installed](/cli/azure/install-azure-cli). +- Ensure you have firewall rules configured to allow access to the Kubernetes API server. For more information, see [Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters][aks-firewall-requirements]. +- [Install the `aks-preview` Azure CLI extension](#install-the-aks-preview-azure-cli-extension). +- [Register the `AKS-KedaPreview` feature flag](#register-the-aks-kedapreview-feature-flag). -## Install the aks-preview Azure CLI extension +### Install the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](includes/preview/preview-callout.md)] -To install the aks-preview extension, run the following command: +1. Install the `aks-preview` extension using the [`az extension add`][az-extension-add] command. -```azurecli -az extension add --name aks-preview -``` + ```azurecli-interactive + az extension add --name aks-preview + ``` -Run the following command to update to the latest version of the extension released: +2. Update to the latest version of the `aks-preview` extension using the [`az extension update`][az-extension-update] command. -```azurecli -az extension update --name aks-preview -``` + ```azurecli-interactive + az extension update --name aks-preview + ``` -## Register the 'AKS-KedaPreview' feature flag +### Register the `AKS-KedaPreview` feature flag -Register the `AKS-KedaPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: +1. Register the `AKS-KedaPreview` feature flag using the [`az feature register`][az-feature-register] command. 
-```azurecli-interactive -az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview" -``` + ```azurecli-interactive + az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview" + ``` -It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command: + It takes a few minutes for the status to show *Registered*. -```azurecli-interactive -az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview" -``` +2. Verify the registration status using the [`az feature show`][az-feature-show] command. -When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: + ```azurecli-interactive + az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview" + ``` -```azurecli-interactive -az provider register --namespace Microsoft.ContainerService -``` +3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command. -## Install the KEDA add-on with Azure CLI -To install the KEDA add-on, use `--enable-keda` when creating or updating a cluster. + ```azurecli-interactive + az provider register --namespace Microsoft.ContainerService + ``` -The following example creates a *myResourceGroup* resource group. Then it creates a *myAKSCluster* cluster with the KEDA add-on. +## Enable the KEDA add-on on your AKS cluster -```azurecli-interactive -az group create --name myResourceGroup --location eastus +> [!NOTE] +> While KEDA provides various customization options, the KEDA add-on currently provides basic common configuration. +> +> If you require custom configurations, you can manually edit the KEDA YAML files to customize the installation. **Azure doesn't offer support for custom configurations**. -az aks create \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --enable-keda -``` +### Create a new AKS cluster with KEDA add-on enabled -For existing clusters, use `az aks update` with `--enable-keda` option. The following code shows an example. +1. Create a resource group using the [`az group create`][az-group-create] command. -```azurecli-interactive -az aks update \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --enable-keda -``` + ```azurecli-interactive + az group create --name myResourceGroup --location eastus + ``` ++2. Create a new AKS cluster using the [`az aks create`][az-aks-create] command and enable the KEDA add-on using the `--enable-keda` flag. ++ ```azurecli-interactive + az aks create \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --enable-keda + ``` ++### Enable the KEDA add-on on an existing AKS cluster ++- Update an existing cluster using the [`az aks update`][az-aks-update] command and enable the KEDA add-on using the `--enable-keda` flag. ++ ```azurecli-interactive + az aks update \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --enable-keda + ``` ## Get the credentials for your cluster -Get the credentials for your AKS cluster by using the `az aks get-credentials` command. 
The following example command gets the credentials for *myAKSCluster* in the *myResourceGroup* resource group: --```azurecli-interactive -az aks get-credentials --resource-group myResourceGroup --name myAKSCluster -``` --## Verify that the KEDA add-on is installed on your cluster --To see if the KEDA add-on is installed on your cluster, verify that the `enabled` value is `true` for `keda` under `workloadAutoScalerProfile`. --The following example shows the status of the KEDA add-on for *myAKSCluster* in *myResourceGroup*: --```azurecli-interactive -az aks show -g "myResourceGroup" --name myAKSCluster --query "workloadAutoScalerProfile.keda.enabled" -``` -## Verify that KEDA is running on your cluster --You can verify KEDA that's running on your cluster. Use `kubectl` to display the operator and metrics server installed in the AKS cluster under kube-system namespace. For example: --```azurecli-interactive -kubectl get pods -n kube-system -``` --The following example output shows that the KEDA operator and metrics API server are installed in the AKS cluster along with its status. --```output -kubectl get pods -n kube-system --keda-operator-********-k5rfv 1/1 Running 0 43m -keda-operator-metrics-apiserver-*******-sj857 1/1 Running 0 43m -``` -To verify the version of your KEDA, use `kubectl get crd/scaledobjects.keda.sh -o yaml `. For example: --```azurecli-interactive -kubectl get crd/scaledobjects.keda.sh -o yaml -``` -The following example output shows the configuration of KEDA in the `app.kubernetes.io/version` label: --```yaml -kind: CustomResourceDefinition -metadata: - annotations: - controller-gen.kubebuilder.io/version: v0.8.0 - creationTimestamp: "2022-06-08T10:31:06Z" - generation: 1 - labels: - addonmanager.kubernetes.io/mode: Reconcile - app.kubernetes.io/component: operator - app.kubernetes.io/name: keda-operator - app.kubernetes.io/part-of: keda-operator - app.kubernetes.io/version: 2.7.0 - name: scaledobjects.keda.sh - resourceVersion: "2899" - uid: 85b8dec7-c3da-4059-8031-5954dc888a0b -spec: - conversion: - strategy: None - group: keda.sh - names: - kind: ScaledObject - listKind: ScaledObjectList - plural: scaledobjects - shortNames: - - so - singular: scaledobject - scope: Namespaced - # Redacted for simplicity - ``` --While KEDA provides various customization options, the KEDA add-on currently provides basic common configuration. --If you have requirement to run with another custom configurations, such as namespaces that should be watched or tweaking the log level, then you may edit the KEDA YAML manually and deploy it. --However, when the installation is customized there will no support offered for custom configurations. --## Disable KEDA add-on from your AKS cluster --When you no longer need KEDA add-on in the cluster, use the `az aks update` command with--disable-keda option. This execution will disable KEDA workload auto-scaler. --```azurecli-interactive -az aks update \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --disable-keda -``` +- Get the credentials for your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. ++ ```azurecli-interactive + az aks get-credentials --resource-group myResourceGroup --name myAKSCluster + ``` ++## Verify the KEDA add-on is installed on your cluster ++- Verify the KEDA add-on is installed on your cluster using the [`az aks show`][az-aks-show] command and set the `--query` parameter to `workloadAutoScalerProfile.keda.enabled`. 
++ ```azurecli-interactive + az aks show -g myResourceGroup --name myAKSCluster --query "workloadAutoScalerProfile.keda.enabled" + ``` ++ The following example output shows the KEDA add-on is installed on the cluster: ++ ```output + true + ``` ++## Verify KEDA is running on your cluster ++- Verify the KEDA add-on is running on your cluster using the [`kubectl get pods`][kubectl] command. ++ ```azurecli-interactive + kubectl get pods -n kube-system + ``` ++ The following example output shows the KEDA operator and metrics API server are installed on the cluster: ++ ```output + keda-operator-********-k5rfv 1/1 Running 0 43m + keda-operator-metrics-apiserver-*******-sj857 1/1 Running 0 43m + ``` ++## Verify the KEDA version on your cluster ++- Verify the KEDA version using the `kubectl get crd/scaledobjects.keda.sh -o yaml` command. ++ ```azurecli-interactive + kubectl get crd/scaledobjects.keda.sh -o yaml + ``` ++ The following condensed example output shows the configuration of KEDA in the `app.kubernetes.io/version` label: ++ ```output + apiVersion: apiextensions.k8s.io/v1 + kind: CustomResourceDefinition + metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.9.0 + meta.helm.sh/release-name: aks-managed-keda + meta.helm.sh/release-namespace: kube-system + creationTimestamp: "2023-09-26T10:31:06Z" + generation: 1 + labels: + app.kubernetes.io/component: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: keda-operator + app.kubernetes.io/part-of: keda-operator + app.kubernetes.io/version: 2.10.1 + ... + ``` ++## Disable the KEDA add-on on your AKS cluster ++- Disable the KEDA add-on on your cluster using the [`az aks update`][az-aks-update] command with the `--disable-keda` flag. ++ ```azurecli-interactive + az aks update \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --disable-keda + ``` ## Next steps-This article showed you how to install the KEDA add-on on an AKS cluster using Azure CLI. The steps to verify that KEDA add-on is installed and running are included. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps. -You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot]. +This article showed you how to install the KEDA add-on on an AKS cluster using the Azure CLI. ++With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps. ++For information on KEDA troubleshooting, see [Troubleshoot the Kubernetes Event-driven Autoscaling (KEDA) add-on][keda-troubleshoot]. 
<!-- LINKS - internal --> [az-provider-register]: /cli/azure/provider#az-provider-register [az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show [az-aks-create]: /cli/azure/aks#az-aks-create-[az aks install-cli]: /cli/azure/aks#az-aks-install-cli -[az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials -[az aks update]: /cli/azure/aks#az-aks-update -[az-group-delete]: /cli/azure/group#az-group-delete [keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context [aks-firewall-requirements]: outbound-rules-control-egress.md#azure-global-required-network-rules-+[az-aks-update]: /cli/azure/aks#az-aks-update +[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials +[az-aks-show]: /cli/azure/aks#az-aks-show +[az-group-create]: /cli/azure/group#az-group-create +[az-extension-add]: /cli/azure/extension#az-extension-add +[az-extension-update]: /cli/azure/extension#az-extension-update ++<!-- LINKS - external --> [kubectl]: https://kubernetes.io/docs/user-guide/kubectl-[keda]: https://keda.sh/ -[keda-scalers]: https://keda.sh/docs/scalers/ [keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue |
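Once the add-on reports as enabled, scaling is driven by `ScaledObject` resources. The following is a minimal sketch that scales on CPU utilization; the target deployment name `my-app` is hypothetical, and the CPU scaler requires CPU resource requests to be set on that deployment:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-app               # hypothetical Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"            # target average CPU utilization, percent
```

Apply it with `kubectl apply -f scaledobject.yaml`; KEDA then manages the underlying HorizontalPodAutoscaler for you.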
aks | Open Ai Secure Access Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-secure-access-quickstart.md | Title: Secure access to Azure OpenAI from Azure Kubernetes Service (AKS) description: Learn how to secure access to Azure OpenAI from Azure Kubernetes Service (AKS). + Last updated 09/18/2023 |
aks | Static Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md | description: Learn how to create and use a static IP address with the Azure Kube + Last updated 09/22/2023- #Customer intent: As a cluster operator or developer, I want to create and manage static IP address resources in Azure that I can use beyond the lifecycle of an individual Kubernetes service deployed in an AKS cluster. |
aks | Workload Identity Deploy Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md | Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity. Previously updated : 07/26/2023 Last updated : 09/27/2023 # Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster EOF ``` > [!IMPORTANT]-> Ensure your application pods using workload identity have added the following label [azure.workload.identity/use: "true"] to your running pods/deployments, otherwise the pods will fail once restarted. +> Ensure your application pods using workload identity have added the following label `azure.workload.identity/use: "true"` to your pod spec, otherwise the pods fail after they're restarted. ```bash kubectl apply -f <your application> |
aks | Workload Identity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md | This article helps you understand this new authentication feature, and reviews t In the Azure Identity client libraries, choose one of the following approaches: -- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`.+- Use `DefaultAzureCredential`, which attempts to use the `WorkloadIdentityCredential`. - Create a `ChainedTokenCredential` instance that includes `WorkloadIdentityCredential`. - Use `WorkloadIdentityCredential` directly. The following table provides the **minimum** package version required for each l | Node.js | [@azure/identity](/javascript/api/overview/azure/identity-readme) | 3.2.0 | | Python | [azure-identity](/python/api/overview/azure/identity-readme) | 1.13.0 | -In the following code samples, `DefaultAzureCredential` is used. This credential type will use the environment variables injected by the Azure Workload Identity mutating webhook to authenticate with Azure Key Vault. +In the following code samples, `DefaultAzureCredential` is used. This credential type uses the environment variables injected by the Azure Workload Identity mutating webhook to authenticate with Azure Key Vault. ## [.NET](#tab/dotnet) The following diagram summarizes the authentication sequence using OpenID Connec ### Webhook Certificate Auto Rotation -Similar to other webhook addons, the certificate will be rotated by cluster certificate [auto rotation][auto-rotation] operation. +Similar to other webhook addons, the certificate is rotated by the cluster certificate [auto rotation][auto-rotation] operation. ## Service account labels and annotations All annotations are optional. If the annotation isn't specified, the default val ### Pod labels > [!NOTE]-> For applications using Workload Identity it is now required to add the label 'azure.workload.identity/use: "true"' pod label in order for AKS to move Workload Identity to a "Fail Close" scenario before GA to provide a consistent and reliable behavior for pods that need to use workload identity. +> For applications using workload identity, it's required to add the label `azure.workload.identity/use: "true"` to the pod spec for AKS to move workload identity to a *Fail Close* scenario to provide a consistent and reliable behavior for pods that need to use workload identity. Otherwise the pods fail after they're restarted. |Label |Description |Recommended value |Required | |||||-|`azure.workload.identity/use` | This label is required in the pod template spec. Only pods with this label will be mutated by the azure-workload-identity mutating admission webhook to inject the Azure specific environment variables and the projected service account token volume. |true |Yes | +|`azure.workload.identity/use` | This label is required in the pod template spec. Only pods with this label are mutated by the azure-workload-identity mutating admission webhook to inject the Azure specific environment variables and the projected service account token volume. |true |Yes | ### Pod annotations |
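A minimal pod sketch showing where the required label sits; the service account name and image are placeholders for the account you federated with your managed identity and for your own workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-workload-identity-pod
  namespace: default
  labels:
    azure.workload.identity/use: "true"    # required for the mutating webhook
spec:
  serviceAccountName: workload-identity-sa # placeholder service account
  containers:
    - name: app
      image: nginx                         # placeholder image
```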
api-management | Api Management Howto App Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md | |
application-gateway | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md | -Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 - TCP and UDP) and route traffic based on source IP address and port, to a destination IP address and port. +Azure Application Gateway is a web traffic (OSI layer 7) load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 - TCP and UDP) and route traffic based on source IP address and port, to a destination IP address and port. Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example URI path or host headers. For example, you can route traffic based on the incoming URL. So if `/images` is in the incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images. If `/video` is in the URL, that traffic is routed to another pool that's optimized for videos. |
application-gateway | Quick Create Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-terraform.md | Title: 'Quickstart: Direct web traffic using Terraform' + Title: 'Quickstart: Direct web traffic with Azure Application Gateway - Terraform' description: In this quickstart, you learn how to use Terraform to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool. |
azure-arc | Upgrade Data Controller Direct Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md | description: Article describes how to upgrade a directly connected Azure Arc dat -+ |
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0 Changes made for this version: -- Updated SSH key entry to use the [updated RSA SSH host key](https://bitbucket.org/blog/ssh-host-key-changes) to prevent failures in configurations with `ssh` authentication type for Bitbucket.+- Updated SSH key entry to use the [Ed25519 SSH host key](https://bitbucket.org/blog/ssh-host-key-changes) to prevent failures in configurations with `ssh` authentication type for Bitbucket. ### 1.7.6 (August 2023) |
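If you pin Bitbucket's host key in `known_hosts` entries for `ssh`-type Flux configurations, you can retrieve the current Ed25519 key with standard OpenSSH tooling; verify the fingerprint against Bitbucket's published values before trusting it:

```bash
# Retrieve Bitbucket's current Ed25519 SSH host key
ssh-keyscan -t ed25519 bitbucket.org
```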
azure-arc | License Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md | Title: License provisioning guidelines for Extended Security Updates for Windows Server 2012 description: Learn about license provisioning guidelines for Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 09/14/2023 Last updated : 09/27/2023 An additional scenario (scenario 1, below) is a candidate for VM/Virtual core li > In all cases, you are required to attest to your conformance with SA or SPLA. There is no exception for these requirements. Software Assurance or an equivalent Server Subscription is required for you to purchase Extended Security Updates on-premises and in hosted environments. You will be able to purchase Extended Security Updates from Enterprise Agreement (EA), Enterprise Subscription Agreement (EAS), a Server & Cloud Enrollment (SCE), and Enrollment for Education Solutions (EES). On Azure, you do not need Software Assurance to get free Extended Security Updates, but Software Assurance or Server Subscription is required to take advantage of the Azure Hybrid Benefit. > +## Cost savings with migration and modernization of workloads ++As you migrate and modernize your Windows Server 2012 and Windows 2012 R2 infrastructure through the end of 2023, you can utilize the flexibility of monthly billing with Windows Server 2012 ESUs enabled by Azure Arc for cost savings benefits. ++As servers no longer require ESUs because they've been migrated to Azure, Azure VMware Solution (AVS), or Azure Stack HCI (where they're eligible for free ESUs), or updated to Windows Server 2016 or higher, you can modify the number of cores associated with a license or delete/deactivate licenses. You can also link the license to a new scope of additional servers. See [Programmatically deploy and manage Azure Arc Extended Security Updates licenses](api-extended-security-updates.md) to learn more. ++> [!NOTE] +> This process is not automatic; billing is tied to the activated licenses and you are responsible for modifying your provisioned licensing to take advantage of cost savings. +> ## Scenario based examples: Compliant and Cost Effective Licensing ### Scenario 1: Eight modern 32-core hosts (not Windows Server 2012). While each of these hosts is running four 8-core VMs, only one VM on each host is running Windows Server 2012 R2 |
azure-arc | Prepare Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md | Title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 07/12/2023 Last updated : 09/27/2023 Other Azure services through Azure Arc-enabled servers are available, with offer ## Prepare delivery of ESUs -To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) and establishing a connection to Azure. +To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) and establishing a connection to Azure. - **Deployment options:** There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md). |
azure-cache-for-redis | Cache Best Practices Enterprise Tiers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-enterprise-tiers.md | Title: Best practices for the Enterprise tiers -description: Learn about the Azure Cache for Redis Enterprise and Enterprise Flash tiers +description: Learn the best practices when using the high-performance Azure Cache for Redis Enterprise and Enterprise Flash tiers Previously updated : 03/09/2023 Last updated : 09/26/2023 -# Best Practices for the Enterprise and Enterprise Flash tiers of Azure Cache for Redis +# What are the best practices for the Enterprise and Enterprise Flash tiers ++Here are the best practices when using the Enterprise and Enterprise Flash tiers of Azure Cache for Redis. ## Zone Redundancy We strongly recommend that you deploy new caches in a [zone redundant](cache-high-availability.md) configuration. Zone redundancy ensures that Redis Enterprise nodes are spread among three availability zones, boosting redundancy from data center-level outages. Using zone redundancy increases availability. For more information, see [Service Level Agreements (SLA) for Online Services](https://azure.microsoft.com/support/legal/sla/cache/v1_1/). -Zone redundancy is important on the Enterprise tier because your cache instance always uses at least three nodes. Two nodes are data nodes, which hold your data, and a _quorum node_. Increasing capacity scales the number of data nodes in even-number increments. +Zone redundancy is important on the Enterprise tier because your cache instance always uses at least three nodes. Two nodes are data nodes, which hold your data. Increasing capacity scales the number of data nodes in even-number increments. There's also a third node called a _quorum node_. This node monitors the data nodes and automatically selects the new primary node if a failover occurs. Zone redundancy ensures that the nodes are distributed evenly across three availability zones, minimizing the potential for quorum loss. Customers aren't charged for the quorum node and there's no other charge for using zone redundancy beyond [intra-zonal bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/). ## Scaling -In the Enterprise and Enterprise Flash tiers of Azure Cache for Redis, we recommend prioritizing scaling up over scaling out. Prioritize scaling up because the Enterprise tiers are built on Redis Enterprise, which is able to utilize more CPU cores in larger VMs. --Conversely, the opposite recommendation is true for the Basic, Standard, and Premium tiers, which are built on open-source Redis. In those tiers, prioritizing scaling out over scaling up is recommended in most cases. +In the Enterprise and Enterprise Flash tiers of Azure Cache for Redis, we recommend prioritizing _scaling up_ over _scaling out_. Prioritize scaling up because the Enterprise tiers are built on Redis Enterprise, which is able to utilize more CPU cores in larger VMs. +Conversely, the opposite recommendation is true for the Basic, Standard, and Premium tiers, which are built on open-source Redis. In those tiers, prioritizing _scaling out_ over _scaling up_ is recommended in most cases. ## Sharding and CPU utilization -In the Basic, Standard, and Premium tiers of Azure Cache for Redis, determining the number of virtual CPUs (vCPUs) utilized is straightforward. Each Redis node runs on a dedicated VM.
The Redis server process is single-threaded, utilizing one vCPU on each primary and each replica node. The other vCPUs on the VM are still used for other activities, such as workflow coordination for different tasks, health monitoring, and TLS load, among others. +In the Basic, Standard, and Premium tiers of Azure Cache for Redis, determining the number of virtual CPUs (vCPUs) utilized is straightforward. Each Redis node runs on a dedicated VM. The Redis server process is single-threaded, utilizing one vCPU on each primary and each replica node. The other vCPUs on the VM are still used for other activities, such as workflow coordination for different tasks, health monitoring, and TLS load, among others. -When you use clustering, the effect is to spread data across more nodes with one shard per node. By increasing the number of shards, you linearly increase the number of vCPUs you use, based on the number of shards in the cluster. +When you use clustering, the effect is to spread data across more nodes with one shard per node. By increasing the number of shards, you linearly increase the number of vCPUs you use, based on the number of shards in the cluster. -Redis Enterprise, on the other hand, can use multiple vCPUs for the Redis instance itself. In other words, all tiers of Azure Cache for Redis can use multiple vCPUs for background and monitoring tasks, but only the Enterprise and Enterprise Flash tiers are able to utilize multiple vCPUs per VM for Redis shards. The table shows the number of effective vCPUs used for each SKU and capacity (that is, scale-out) configuration. +Redis Enterprise, on the other hand, can use multiple vCPUs for the Redis instance itself. In other words, all tiers of Azure Cache for Redis can use multiple vCPUs for background and monitoring tasks, but only the Enterprise and Enterprise Flash tiers are able to utilize multiple vCPUs per VM for Redis shards. The table shows the number of effective vCPUs used for each SKU and capacity (that is, scale-out) configuration. -The tables show the number of vCPUs used for the primary shards, not the replica shards. Shards don't map one-to-one to the number of vCPUs. The tables only illustrate vCPUs, not shards. Some configurations use more shards than available vCPUs to boost performance in some usage scenarios. +The tables show the number of vCPUs used for the primary shards, not the replica shards. Shards don't map one-to-one to the number of vCPUs. The tables only illustrate vCPUs, not shards. Some configurations use more shards than available vCPUs to boost performance in some usage scenarios. 
-### E10 +### E5 +|Capacity|Effective vCPUs|
+|:|:|
+| 2 | 1 |
+| 4 | 2 |
+| 6 | 6 |
+### E10 |Capacity|Effective vCPUs| |:|:| | 2 | 2 | The tables show the number of vCPUs used for the primary shards, not the replica | 8 | 16 | | 10 | 20 | - ### E20+ |Capacity|Effective vCPUs| |:|:| |2| 2| The tables show the number of vCPUs used for the primary shards, not the replica |8|30 | |10|30| - ### E100+ |Capacity|Effective vCPUs| |:|:| |2| 6| The tables show the number of vCPUs used for the primary shards, not the replica |8|30| |10|30| +### E200 +|Capacity|Effective vCPUs|
+|:|:|
+|2|30|
+|4|60|
+|6|60|
+|8|120|
+|10|120|
++### E400 +|Capacity|Effective vCPUs|
+|:|:|
+|2|60|
+|4|120|
+|6|120|
+|8|240|
+|10|240|
+ ### F300+ |Capacity|Effective vCPUs| |:|:| |3| 6| |9|30| ### F700+ |Capacity|Effective vCPUs| |:|:| |3| 30| |9| 30| ### F1500+ |Capacity|Effective vCPUs | |:|:| |3| 30 | |9| 90 | - ## Clustering on Enterprise Enterprise and Enterprise Flash tiers are inherently clustered, in contrast to the Basic, Standard, and Premium tiers. The implementation depends on the clustering policy that is selected.-The Enterprise tiers offer two choices for Clustering Policy: _OSS_ and _Enterprise_. _OSS_ cluster policy is recommended for most applications because it supports higher maximum throughput, but there are advantages and disadvantages to each version. +The Enterprise tiers offer two choices for Clustering Policy: _OSS_ and _Enterprise_. _OSS_ cluster policy is recommended for most applications because it supports higher maximum throughput, but there are advantages and disadvantages to each version. -The _OSS clustering policy_ implements the same [Redis Cluster API](https://redis.io/docs/reference/cluster-spec/) as open-source Redis. The Redis Cluster API allows the Redis client to connect directly to each Redis node, minimizing latency and optimizing network throughput. As a result, near-linear scalability is obtained when scaling out the cluster with more nodes. The OSS clustering policy generally provides the best latency and throughput performance, but requires your client library to support Redis Clustering. OSS clustering policy also can't be used with the [RediSearch module](cache-redis-modules.md). +The _OSS clustering policy_ implements the same [Redis Cluster API](https://redis.io/docs/reference/cluster-spec/) as open-source Redis. The Redis Cluster API allows the Redis client to connect directly to each Redis node, minimizing latency and optimizing network throughput. As a result, near-linear scalability is obtained when scaling out the cluster with more nodes. The OSS clustering policy generally provides the best latency and throughput performance, but requires your client library to support Redis Clustering. OSS clustering policy also can't be used with the [RediSearch module](cache-redis-modules.md). -The _Enterprise clustering policy_ is a simpler configuration that utilizes a single endpoint for all client connections. Using the Enterprise clustering policy routes all requests to a single Redis node that is then used as a proxy, internally routing requests to the correct node in the cluster. The advantage of this approach is that Redis client libraries don't need to support Redis Clustering to take advantage of multiple nodes. The downside is that the single node proxy can be a bottleneck, in either compute utilization or network throughput. The Enterprise clustering policy is the only one that can be used with the [RediSearch module](cache-redis-modules.md).
+The _Enterprise clustering policy_ is a simpler configuration that utilizes a single endpoint for all client connections. Using the Enterprise clustering policy routes all requests to a single Redis node that is then used as a proxy, internally routing requests to the correct node in the cluster. The advantage of this approach is that Redis client libraries don't need to support Redis Clustering to take advantage of multiple nodes. The downside is that the single node proxy can be a bottleneck, in either compute utilization or network throughput. The Enterprise clustering policy is the only one that can be used with the [RediSearch module](cache-redis-modules.md). ## Multi-key commands -Because the Enterprise tiers use a clustered configuration, you might see `CROSSSLOT` exceptions on commands that operate on multiple keys. Behavior varies depending on the clustering policy used. If you use the OSS clustering policy, multi-key commands require all keys to be mapped to [the same hash slot](https://docs.redis.com/latest/rs/databases/configure/oss-cluster-api/#multi-key-command-support). +Because the Enterprise tiers use a clustered configuration, you might see `CROSSSLOT` exceptions on commands that operate on multiple keys. Behavior varies depending on the clustering policy used. If you use the OSS clustering policy, multi-key commands require all keys to be mapped to [the same hash slot](https://docs.redis.com/latest/rs/databases/configure/oss-cluster-api/#multi-key-command-support). You might also see `CROSSSLOT` errors with Enterprise clustering policy. Only the following multi-key commands are allowed across slots with Enterprise clustering: `DEL`, `MSET`, `MGET`, `EXISTS`, `UNLINK`, and `TOUCH`. (See the sketch after this row for the standard hash-tag workaround.) For example, consider these tips: - Identify in advance which other cache in the geo-replication group to switch over to if a region goes down. - Ensure that firewalls are set so that any applications and clients can access the identified backup cache.-- Each cache in the geo-replication group has its own access key. Determine how the application will switch access keys when targeting a backup cache. +- Each cache in the geo-replication group has its own access key. Determine how the application switches to different access keys when targeting a backup cache. - If a cache in the geo-replication group goes down, a buildup of metadata starts to occur in all the caches in the geo-replication group. The metadata can't be discarded until writes can be synced again to all caches. You can prevent the metadata build-up by _force unlinking_ the cache that is down. Consider monitoring the available memory in the cache and unlinking if there's memory pressure, especially for write-heavy workloads. It's also possible to use a [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker). Use the pattern to automatically redirect traffic away from a cache experiencing a region outage, and towards a backup cache in the same geo-replication group. Use Azure services such as [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) or [Azure Load Balancer](../load-balancer/load-balancer-overview.md) to enable the redirection. It's also possible to use a [circuit breaker pattern](/azure/architecture/patter The [data persistence](cache-how-to-premium-persistence.md) feature in the Enterprise and Enterprise Flash tiers is designed to automatically provide a quick recovery point for data when a cache goes down.
The quick recovery is made possible by storing the RDB or AOF file in a managed disk that is mounted to the cache instance. Persistence files on the disk aren't accessible to users. -Many customers want to use persistence to take periodic backups of the data on their cache. We don't recommend that you use data persistence in this way. Instead, use the [import/export](cache-how-to-import-export-data.md) feature. You can export copies of cache data in RDB format directly into your chosen storage account and trigger the data export as frequently as you require. Export can be triggered either from the portal or by using the CLI, PowerShell, or SDK tools. +Many customers want to use persistence to take periodic backups of the data on their cache. We don't recommend that you use data persistence in this way. Instead, use the [import/export](cache-how-to-import-export-data.md) feature. You can export copies of cache data in RDB format directly into your chosen storage account and trigger the data export as frequently as you require. Export can be triggered either from the portal or by using the CLI, PowerShell, or SDK tools. -## Next steps +## Related content - [Development](cache-best-practices-development.md)-- |
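The multi-key commands discussion in the row above notes that `CROSSSLOT` errors appear when keys map to different hash slots. A minimal redis-py sketch of the standard hash-tag workaround follows; the host name and key names are placeholders, and a cache using the OSS clustering policy (which requires a cluster-aware client) is assumed.

```python
# A minimal sketch: avoid CROSSSLOT errors by pinning related keys to the
# same hash slot with hash tags. Host name and access key are placeholders.
from redis.cluster import RedisCluster

r = RedisCluster(
    host="<your-cache-name>.<region>.redisenterprise.cache.azure.net",  # placeholder
    port=10000,
    password="<access-key>",
    ssl=True,
)

# These keys hash to arbitrary (likely different) slots, so a multi-key
# command across them fails: the server returns CROSSSLOT, and some client
# libraries raise their own cross-slot error instead:
#   r.mget("order:1", "order:2")

# With hash tags, only the text inside {} is hashed, so both keys land in
# the same slot and multi-key commands work:
r.set("{order:42}:status", "shipped")
r.set("{order:42}:total", "19.99")
print(r.mget("{order:42}:status", "{order:42}:total"))
```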
azure-cache-for-redis | Cache Overview Vector Similarity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview-vector-similarity.md | + + Title: About Vector Embeddings and Vector Search in Azure Cache for Redis +description: Learn about Azure Cache for Redis to store vector embeddings and provide similarity search. ++++ Last updated : 09/18/2023+++# About Vector Embeddings and Vector Search in Azure Cache for Redis ++Vector similarity search (VSS) has become a popular use-case for AI-driven applications. Azure Cache for Redis can be used to store vector embeddings and compare them through vector similarity search. This article is a high-level introduction to the concept of vector embeddings, vector comparison, and how Redis can be used as a seamless part of a vector similarity workflow. ++For a tutorial on how to use Azure Cache for Redis and Azure OpenAI to perform vector similarity search, see [Tutorial: Conduct vector similarity search on Azure OpenAI embeddings using Azure Cache for Redis](./cache-tutorial-vector-similarity.md). ++## Scope of Availability ++|Tier | Basic / Standard | Premium |Enterprise | Enterprise Flash | +| |::|:-:|::|::| +|Available | No | No | Yes | Yes (preview) | ++Vector search capabilities in Redis require [Redis Stack](https://redis.io/docs/about/about-stack/), specifically the [RediSearch](https://redis.io/docs/interact/search-and-query/) module. This capability is only available in the [Enterprise tiers of Azure Cache for Redis](./cache-redis-modules.md). ++## What are vector embeddings? ++### Concept ++Vector embeddings are a fundamental concept in machine learning and natural language processing that enable the representation of data, such as words, documents, or images, as numerical vectors in a high-dimensional vector space. The primary idea behind vector embeddings is to capture the underlying relationships and semantics of the data by mapping them to points in this vector space. In simpler terms, that means converting your text or images into a sequence of numbers that represents the data, and then comparing the different number sequences. This allows complex data to be manipulated and analyzed mathematically, making it easier to perform tasks like similarity comparison, recommendation, and classification. ++<!-- TODO - Add image example --> ++Each machine learning model classifies data and produces the vector in a different manner. Furthermore, it's typically not possible to determine exactly what semantic meaning each vector dimension represents. But because the model is consistent between each block of input data, similar words, documents, or images have vectors that are also similar. For example, the words `basketball` and `baseball` have embeddings vectors much closer to each other than a word like `rainforest`. ++### Vector comparison ++Vectors can be compared using various metrics. The most popular way to compare vectors is to use [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which measures the cosine of the angle between two vectors in a multi-dimensional space. The closer the vectors, the smaller the angle. Other common distance metrics include [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) and [inner product](https://en.wikipedia.org/wiki/Inner_product_space). (A short numeric example of cosine similarity follows this row.) ++### Generating embeddings ++Many machine learning models support embeddings APIs.
For an example of how to create vector embeddings using Azure OpenAI Service, see [Learn how to generate embeddings with Azure OpenAI](../ai-services/openai/how-to/embeddings.md). ++## What is a vector database? ++A vector database is a database that can store, manage, retrieve, and compare vectors. Vector databases must be able to efficiently store a high-dimensional vector and retrieve it with minimal latency and high throughput. Non-relational datastores are most commonly used as vector databases, although it's possible to use relational databases like PostgreSQL, for example, with the [pgvector](https://github.com/pgvector/pgvector) extension. ++### Index method ++Vector databases need to index data for fast search and retrieval. There are several common indexing methods, including: ++- **K-Nearest Neighbors (KNN)** - an exhaustive method that provides the most precision but with higher computational cost. +- **Approximate Nearest Neighbors (ANN)** - a more efficient method that trades precision for greater speed and lower processing overhead. ++### Search capabilities ++Finally, vector databases execute vector searches by using the chosen vector comparison method to return the most similar vectors. Some vector databases can also perform _hybrid_ searches by first narrowing results based on characteristics or metadata also stored in the database before conducting the vector search. This is a way to make the vector search more effective and customizable. For example, a vector search could be limited to only vectors with a specific tag in the database, or vectors with geolocation data in a certain region. ++## Vector search key scenarios ++Vector similarity search can be used in multiple applications. Some common use-cases include: ++- **Semantic Q&A**. Create a chatbot that can respond to questions about your own data. For instance, a chatbot that can respond to employee questions on their healthcare coverage. Hundreds of pages of dense healthcare coverage documentation can be split into chunks, converted into embeddings vectors, and searched based on vector similarity. The resulting documents can then be summarized for employees using another large language model (LLM). [Semantic Q&A Example](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/vector-similarity-search-with-azure-cache-for-redis-enterprise/ba-p/3822059) +- **Document Retrieval**. Use the deeper semantic understanding of text provided by LLMs to provide a richer document search experience where traditional keyword-based search falls short. [Document Retrieval Example](https://github.com/RedisVentures/redis-arXiv-search) +- **Product Recommendation**. Find similar products or services to recommend based on past user activities, like search history or previous purchases. [Product Recommendation Example](https://github.com/RedisVentures/LLM-Recommender) +- **Visual Search**. Search for products that look similar to a picture taken by a user or a picture of another product. [Visual Search Example](https://github.com/RedisVentures/redis-product-search) +- **Semantic Caching**. Reduce the cost and latency of LLMs by caching LLM completions. LLM queries are compared using vector similarity. If a new query is similar enough to a previously cached query, the cached query is returned. [Semantic Caching example using LangChain](https://python.langchain.com/docs/integrations/llms/llm_caching#redis-cache) +- **LLM Conversation Memory**. Persist conversation history with an LLM as embeddings in a vector database.
Your application can use vector search to pull relevant history or "memories" into the response from the LLM. [LLM Conversation Memory example](https://github.com/continuum-llms/chatgpt-memory) ++## Why choose Azure Cache for Redis for storing and searching vectors? ++Azure Cache for Redis can be used effectively as a vector database to store embeddings vectors and to perform vector similarity searches. In many ways, Redis is naturally a great choice in this area. It's extremely fast because it runs in-memory, unlike other vector databases that run on-disk. This can be useful when processing large datasets! Redis is also battle-hardened. Support for vector storage and search has been available for years, and many key machine learning frameworks like [LangChain](https://python.langchain.com/docs/integrations/vectorstores/redis) and [LlamaIndex](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/RedisIndexDemo.html) feature rich integrations with Redis. For example, the Redis LangChain integration [automatically generates an index schema for metadata](https://python.langchain.com/docs/integrations/vectorstores/redis#inspecting-the-created-index) passed in when using Redis as a vector store. This makes it much easier to filter results based on metadata. ++Redis has a wide range of vector search capabilities through the [RediSearch module](cache-redis-modules.md#redisearch), which is available in the Enterprise tier of Azure Cache for Redis. These include: ++- Multiple distance metrics, including `Euclidean`, `Cosine`, and `Inner Product`. +- Support for both KNN (using `FLAT`) and ANN (using `HNSW`) indexing methods. +- Vector storage in hash or JSON data structures +- Top K queries +- [Vector range queries](https://redis.io/docs/interact/search-and-query/search/vectors/#creating-a-vss-range-query) (i.e., find all items within a specific vector distance) +- Hybrid search with [powerful query features](https://redis.io/docs/interact/search-and-query/) such as: + - Geospatial filtering + - Numeric and text filters + - Prefix and fuzzy matching + - Phonetic matching + - Boolean queries ++Additionally, Redis is often an economical choice because it's already so commonly used for caching or session store applications. In these scenarios, it can pull double-duty by serving a typical caching role while simultaneously handling vector search applications. ++## What are my other options for storing and searching for vectors? ++There are multiple other solutions on Azure for vector storage and search. These include: ++- [Azure Cognitive Search](../search/vector-search-overview.md) +- [Azure Cosmos DB](../cosmos-db/mongodb/vcore/vector-search.md) using the MongoDB vCore API +- [Azure Database for PostgreSQL - Flexible Server](../postgresql/flexible-server/how-to-use-pgvector.md) using `pgvector` ++## Next Steps ++The best way to get started with embeddings and vector search is to try it yourself! ++> [!div class="nextstepaction"] +> [Tutorial: Conduct vector similarity search on Azure OpenAI embeddings using Azure Cache for Redis](./cache-tutorial-vector-similarity.md) |
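The row above explains vector comparison in terms of cosine similarity and uses `basketball`, `baseball`, and `rainforest` as intuition. Here is a small self-contained Python illustration of that comparison; the three "embeddings" are made-up toy vectors, not model output.

```python
# A toy illustration of cosine similarity: cos(theta) = (a . b) / (|a| * |b|).
# Values closer to 1.0 indicate more similar vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 3-dimensional "embeddings"; real models produce hundreds or
# thousands of dimensions (text-embedding-ada-002 produces 1536).
basketball = np.array([0.9, 0.8, 0.1])
baseball   = np.array([0.85, 0.75, 0.2])
rainforest = np.array([0.1, 0.2, 0.95])

print(cosine_similarity(basketball, baseball))    # high: related concepts
print(cosine_similarity(basketball, rainforest))  # low: unrelated concepts
```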
azure-cache-for-redis | Cache Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md | -# About Azure Cache for Redis +# What is Azure Cache for Redis? Azure Cache for Redis provides an in-memory data store based on the [Redis](https://redis.io/) software. Redis improves the performance and scalability of an application that uses backend data stores heavily. It's able to process large volumes of application requests by keeping frequently accessed data in the server memory, which can be written to and read from quickly. Redis brings a critical low-latency and high-throughput data storage solution to modern applications. The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/ Consider the following options when choosing an Azure Cache for Redis tier: -- **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tiers 12 GB - 14 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).-- **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).-- **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation that cause timeouts in your application.-- **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).+- **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tier 4 GB - 2 TB, and the Enterprise Flash tier 300 GB - 4.5 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md). +- **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. The Enterprise tier typically has the best performance for most workloads, especially with larger cache instances. For more information, see [Performance testing](cache-best-practices-performance.md). +- **Dedicated core for Redis server**: All caches except C0 run dedicated vCPUs.
The Basic, Standard, and Premium tiers run open source Redis, which by design uses only one thread for command processing. On these tiers, having more vCPUs usually improves throughput performance because Azure Cache for Redis uses other vCPUs for I/O processing or for OS processes. However, adding more vCPUs per instance may not produce linear performance increases. Scaling out usually boosts performance more than scaling up in these tiers. Enterprise and Enterprise Flash tier caches run on Redis Enterprise, which is able to utilize multiple vCPUs per instance, which can also significantly increase performance over other tiers. For Enterprise and Enterprise Flash tiers, scaling up is recommended before scaling out. For more information, see [Sharding and CPU utilization](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization). +- **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. Higher bandwidth limits help you avoid network saturation that causes timeouts in your application. For more information, see [Performance testing](cache-best-practices-performance.md). - **Maximum number of client connections**: The Premium and Enterprise tiers offer the maximum numbers of clients that can connect to Redis, offering higher numbers of connections for larger sized caches. Clustering increases the total amount of network bandwidth available for a clustered cache. - **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss. - **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through Azure portal and CLI. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md).-- **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).+- **Network isolation**: Azure Private Link and Virtual Network (VNet) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNet allows you to further restrict access through network access control policies.
For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md). - **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/), [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/), and [RedisJSON](https://docs.redis.com/latest/modules/redisjson/). These modules add new data types and functionality to Redis. You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to scale - Basic, Standard, and Premium tiers](cache-how-to-scale.md#how-to-scalebasic-standard-and-premium-tiers). ### Special considerations for Enterprise tiers -The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Inc. Customers obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis manages the license acquisition so that you won't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites: +The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Inc. Customers obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis manages the license acquisition so that you don't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites: - Your Azure subscription has a valid payment instrument. Azure credits or free MSDN subscriptions aren't supported. - Your organization allows [Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis fro Azure Cache for Redis is continually expanding into new regions. To check the availability by region, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=redis-cache®ions=all). -## Next steps +## Related content - [Create an open-source Redis cache](quickstart-create-redis.md) - [Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md) |
azure-cache-for-redis | Cache Tutorial Vector Similarity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-vector-similarity.md | + + Title: 'Tutorial: Conduct vector similarity search on Azure OpenAI embeddings using Azure Cache for Redis' +description: In this tutorial, you learn how to use Azure Cache for Redis to store and search for vector embeddings. ++++ Last updated : 09/15/2023++#CustomerIntent: As a < type of user >, I want < what? > so that < why? >. +++# Tutorial: Conduct vector similarity search on Azure OpenAI embeddings using Azure Cache for Redis ++In this tutorial, you'll walk through a basic vector similarity search use-case. You'll use embeddings generated by Azure OpenAI Service and the built-in vector search capabilities of the Enterprise tier of Azure Cache for Redis to query a dataset of movies to find the most relevant match. ++The tutorial uses the [Wikipedia Movie Plots dataset](https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots) that features plot descriptions of over 35,000 movies from Wikipedia covering the years 1901 to 2017. +The dataset includes a plot summary for each movie, plus metadata such as the year the film was released, the director(s), main cast, and genre. You'll follow the steps of the tutorial to generate embeddings based on the plot summary and use the other metadata to run hybrid queries. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Create an Azure Cache for Redis instance configured for vector search. +> * Install Azure OpenAI and other required Python libraries. +> * Download the movie dataset and prepare it for analysis. +> * Use the **text-embedding-ada-002 (Version 2)** model to generate embeddings. +> * Create a vector index in Azure Cache for Redis. +> * Use cosine similarity to rank search results. +> * Use hybrid query functionality through [RediSearch](https://redis.io/docs/interact/search-and-query/) to prefilter the data and make the vector search even more powerful. ++>[!IMPORTANT] +>This tutorial will walk you through building a Jupyter Notebook. You can follow this tutorial with a Python code file (.py) and get *similar* results, but you will need to add all of the code blocks in this tutorial into the `.py` file and execute once to see results. In other words, a Jupyter Notebook provides intermediate results as you execute cells, but this is not behavior you should expect when working in a Python code file. ++>[!IMPORTANT] +>If you would like to follow along in a completed Jupyter notebook instead, [download the Jupyter notebook file named *tutorial.ipynb*](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/tutorial/vector-similarity-search-open-ai) and save it into the new *redis-vector* folder. ++## Prerequisites ++* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true) +* Access granted to Azure OpenAI in the desired Azure subscription + Currently, you must apply for access to Azure OpenAI. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. +* <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a> +* [Jupyter Notebooks](https://jupyter.org/) (optional) +* An Azure OpenAI resource with the **text-embedding-ada-002 (Version 2)** model deployed.
This model is currently only available in [certain regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). See the [resource deployment guide](../ai-services/openai/how-to/create-resource.md) for instructions on how to deploy the model. ++## Create an Azure Cache for Redis Instance ++1. Follow the [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md) guide. On the **Advanced** page, make sure that you've added the **RediSearch** module and have chosen the **Enterprise** Cluster Policy. All other settings can match the default described in the quickstart. ++ It takes a few minutes for the cache to be created. You can move on to the next step in the meantime. +++## Set up your development environment ++1. Create a folder on your local computer named *redis-vector* in the location where you typically save your projects. ++1. Create a new Python file (*tutorial.py*) or Jupyter notebook (*tutorial.ipynb*) in the folder. ++1. Install the required Python packages: ++ ```bash + pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken redis langchain + ``` ++## Download the dataset ++1. In a web browser, navigate to [https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots](https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots). ++1. Sign in or register with Kaggle. Registration is required to download the file. ++1. Select the **Download** link on Kaggle to download the *archive.zip* file. ++1. Extract the *archive.zip* file and move the *wiki_movie_plots_deduped.csv* into the *redis-vector* folder. ++## Import libraries and set up connection information ++To successfully make a call against Azure OpenAI, you need an **endpoint** and a **key**. You also need an **endpoint** and a **key** to connect to Azure Cache for Redis. ++1. Go to your Azure OpenAI resource in the Azure portal. ++1. Locate **Endpoint and Keys** in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. An example endpoint is: `https://docs-test-001.openai.azure.com`. You can use either `KEY1` or `KEY2`. ++1. Go to the **Overview** page of your Azure Cache for Redis resource in the Azure portal. Copy your endpoint. ++1. Locate **Access keys** in the **Settings** section. Copy your access key. You can use either `Primary` or `Secondary`. ++1. Add the following code to a new code cell: ++ ```python + # Code cell 2 ++ import re + from num2words import num2words + import os + import pandas as pd + from openai.embeddings_utils import get_embedding + import tiktoken + from typing import List + from langchain.embeddings import OpenAIEmbeddings + from langchain.vectorstores.redis import Redis as RedisVectorStore + from langchain.document_loaders import DataFrameLoader ++ API_KEY = "<your-azure-openai-key>" + RESOURCE_ENDPOINT = "<your-azure-openai-endpoint>" + DEPLOYMENT_NAME = "<name-of-your-model-deployment>" + MODEL_NAME = "text-embedding-ada-002" + REDIS_ENDPOINT = "<your-azure-redis-endpoint>" + REDIS_PASSWORD = "<your-azure-redis-password>" + ``` ++1. Update the values of `API_KEY` and `RESOURCE_ENDPOINT` with the key and endpoint values from your Azure OpenAI deployment. `DEPLOYMENT_NAME` should be set to the name of your deployment using the `text-embedding-ada-002 (Version 2)` embeddings model, and `MODEL_NAME` should be the specific embeddings model used. ++1.
Update `REDIS_ENDPOINT` and `REDIS_PASSWORD` with the endpoint and key value from your Azure Cache for Redis instance. ++ > [!Important] + > We strongly recommend using environment variables or a secret manager like [Azure Key Vault](../key-vault/general/overview.md) to pass in the API key, endpoint, and deployment name information. These variables are set in plaintext here for the sake of simplicity. ++1. Execute code cell 2. ++## Import dataset into pandas and process data ++Next, you'll read the CSV file into a pandas DataFrame. ++1. Add the following code to a new code cell: ++ ```python + # Code cell 3 ++ df=pd.read_csv(os.path.join(os.getcwd(),'wiki_movie_plots_deduped.csv')) + df + ``` ++1. Execute code cell 3. You should see the following output: + + :::image type="content" source="media/cache-tutorial-vector-similarity/code-cell-3.png" alt-text="Screenshot of results from executing code cell 3, displaying eight columns and a sampling of 10 rows of data." lightbox="media/cache-tutorial-vector-similarity/code-cell-3.png"::: ++1. Next, process the data by adding an `id` index, removing spaces from the column titles, and filtering the movies to keep only movies made after 1970 and from English-speaking countries. This filtering step reduces the number of movies in the dataset, which lowers the cost and time required to generate embeddings. You're free to change or remove the filter parameters based on your preferences. ++ To filter the data, add the following code to a new code cell: ++ ```python + # Code cell 4 ++ df.insert(0, 'id', range(0, len(df))) + df['year'] = df['Release Year'].astype(int) + df['origin'] = df['Origin/Ethnicity'].astype(str) + del df['Release Year'] + del df['Origin/Ethnicity'] + df = df[df.year > 1970] # only movies made after 1970 + df = df[df.origin.isin(['American','British','Canadian'])] # only movies from English-speaking cinema + df + ``` ++1. Execute code cell 4. You should see the following results: ++ :::image type="content" source="media/cache-tutorial-vector-similarity/code-cell-4.png" alt-text="Screenshot of results from executing code cell 4, displaying nine columns and a sampling of 10 rows of data." lightbox="media/cache-tutorial-vector-similarity/code-cell-4.png"::: ++1. Create a function to clean the data by removing whitespace and punctuation, then use it against the DataFrame column containing the plot. ++ Add the following code to a new code cell and execute it: ++ ```python + # Code cell 5 ++ pd.options.mode.chained_assignment = None ++ # s is input text + def normalize_text(s, sep_token = " \n "): + # collapse runs of whitespace into single spaces + s = re.sub(r'\s+', ' ', s).strip() + s = re.sub(r". ,","",s) + # clean up stray periods and spaces left by the substitutions + s = s.replace("..",".") + s = s.replace(". .",".") + s = s.replace("\n", "") + s = s.strip() + + return s ++ df['Plot']= df['Plot'].apply(lambda x : normalize_text(x)) + ``` ++1. Finally, remove any entries that contain plot descriptions that are too long for the embeddings model (in other words, entries that require more tokens than the 8192-token limit), and then calculate the number of tokens required to generate embeddings. This also impacts pricing for embedding generation. ++ Add the following code to a new code cell: ++ ```python + # Code cell 6 ++ tokenizer = tiktoken.get_encoding("cl100k_base") + df['n_tokens'] = df["Plot"].apply(lambda x: len(tokenizer.encode(x))) + df = df[df.n_tokens<8192] + print('Number of movies: ' + str(len(df))) + print('Number of tokens required:' + str(df['n_tokens'].sum())) + ``` ++1. Execute code cell 6.
You should see this output: ++ ```output + Number of movies: 11125 + Number of tokens required:7044844 + ``` ++ > [!Important] + > Refer to [Azure OpenAI Service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) to calculate the cost of generating embeddings based on the number of tokens required. ++## Load DataFrame into LangChain ++Load the DataFrame into LangChain using the `DataFrameLoader` class. Once the data is in LangChain documents, it's far easier to use LangChain libraries to generate embeddings and conduct similarity searches. Set *Plot* as the `page_content_column` so that embeddings are generated on this column. ++1. Add the following code to a new code cell and execute it: ++ ```python + # Code cell 7 ++ loader = DataFrameLoader(df, page_content_column="Plot" ) + movie_list = loader.load() + ``` ++## Generate embeddings and load them into Redis ++Now that the data has been filtered and loaded into LangChain, you'll create embeddings so you can query on the plot for each movie. The following code configures Azure OpenAI, generates embeddings, and loads the embeddings vectors into Azure Cache for Redis. ++1. Add the following code to a new code cell: ++ ```python + # Code cell 8 ++ embedding = OpenAIEmbeddings( + deployment=DEPLOYMENT_NAME, + model=MODEL_NAME, + openai_api_base=RESOURCE_ENDPOINT, + openai_api_type="azure", + openai_api_key=API_KEY, + openai_api_version="2023-05-15", + chunk_size=16 # current limit with Azure OpenAI service. This will likely increase in the future. + ) ++ # name of the Redis search index to create + index_name = "movieindex" ++ # create a connection string for the Redis Vector Store. Uses Redis-py format: https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url + # This example assumes TLS is enabled. If not, use "redis://" instead of "rediss:// + redis_url = "rediss://:" + REDIS_PASSWORD + "@"+ REDIS_ENDPOINT ++ # create and load redis with documents + vectorstore = RedisVectorStore.from_documents( + documents=movie_list, + embedding=embedding, + index_name=index_name, + redis_url=redis_url + ) ++ # save index schema so you can reload in the future without re-generating embeddings + vectorstore.write_schema("redis_schema.yaml") + ``` ++1. Execute code cell 8. This can take up to 10 minutes to complete. A `redis_schema.yaml` file is generated as well. This file is useful if you want to connect to your index in your Azure Cache for Redis instance without re-generating embeddings. ++## Run vector search queries ++Now that your dataset, Azure OpenAI service API, and Redis instance are set up, you can search using vectors. In this example, the top 10 results for a given query are returned. ++1. Add the following code to a new code cell: ++ ```python + # Code cell 9 ++ query = "Spaceships, aliens, and heroes saving America" + results = vectorstore.similarity_search_with_score(query, k=10) ++ for i, j in enumerate(results): + movie_title = str(results[i][0].metadata['Title']) + similarity_score = str(round((1 - results[i][1]),4)) + print(movie_title + ' (Score: ' + similarity_score + ')') + ``` ++1. Execute code cell 9.
You should see the following output: ++ ```output + Independence Day (Score: 0.8348) + The Flying Machine (Score: 0.8332) + Remote Control (Score: 0.8301) + Bravestarr: The Legend (Score: 0.83) + Xenogenesis (Score: 0.8291) + Invaders from Mars (Score: 0.8291) + Apocalypse Earth (Score: 0.8287) + Invasion from Inner Earth (Score: 0.8287) + Thru the Moebius Strip (Score: 0.8283) + Solar Crisis (Score: 0.828) + ``` ++ The similarity score is returned along with the ordinal ranking of movies by similarity. Notice that for more specific queries, similarity scores decrease faster down the list. ++## Hybrid searches ++1. Since RediSearch also features rich search functionality on top of vector search, it's possible to filter results by the metadata in the data set, such as film genre, cast, release year, or director. In this case, filter based on the genre `comedy`. ++ Add the following code to a new code cell: ++ ```python + # Code cell 10 ++ from langchain.vectorstores.redis import RedisText ++ query = "Spaceships, aliens, and heroes saving America" + genre_filter = RedisText("Genre") == "comedy" + results = vectorstore.similarity_search_with_score(query, filter=genre_filter, k=10) + for i, j in enumerate(results): + movie_title = str(results[i][0].metadata['Title']) + similarity_score = str(round((1 - results[i][1]),4)) + print(movie_title + ' (Score: ' + similarity_score + ')') + ``` ++1. Execute code cell 10. You should see the following output: ++ ```output + Remote Control (Score: 0.8301) + Meet Dave (Score: 0.8236) + Elf-Man (Score: 0.8208) + Fifty/Fifty (Score: 0.8167) + Mars Attacks! (Score: 0.8165) + Strange Invaders (Score: 0.8143) + Amanda and the Alien (Score: 0.8136) + Suburban Commando (Score: 0.8129) + Coneheads (Score: 0.8129) + Morons from Outer Space (Score: 0.8121) + ``` ++With Azure Cache for Redis and Azure OpenAI Service, you can use embeddings and vector search to add powerful search capabilities to your application. (A sketch of reconnecting to the saved index without regenerating embeddings follows this row.) +++## Related Content ++* [Learn more about Azure Cache for Redis](cache-overview.md) +* Learn more about Azure Cache for Redis [vector search capabilities](./cache-overview-vector-similarity.md) +* Learn more about [embeddings generated by Azure OpenAI Service](../ai-services/openai/concepts/understand-embeddings.md) +* Learn more about [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) +* [Read how to build an AI-powered app with OpenAI and Redis](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/vector-similarity-search-with-azure-cache-for-redis-enterprise/ba-p/3822059) +* [Build a Q&A app with semantic answers](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) |
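The tutorial above saves a `redis_schema.yaml` file so the index can be reused later without regenerating embeddings. Here is a hedged Python sketch of that follow-up step; the `from_existing_index` arguments shown are my assumption for this era of LangChain, so check your installed version's signature, and the endpoint, key, and deployment placeholders match those used in the tutorial's code cells.

```python
# A sketch: reconnect to the "movieindex" index created in code cell 8,
# using the saved redis_schema.yaml instead of regenerating embeddings.
# ASSUMPTION: this LangChain version exposes Redis.from_existing_index
# with a schema argument; verify against your installed version.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.redis import Redis as RedisVectorStore

embedding = OpenAIEmbeddings(
    deployment="<name-of-your-model-deployment>",   # placeholder
    model="text-embedding-ada-002",
    openai_api_base="<your-azure-openai-endpoint>", # placeholder
    openai_api_type="azure",
    openai_api_key="<your-azure-openai-key>",       # placeholder
    openai_api_version="2023-05-15",
)

vectorstore = RedisVectorStore.from_existing_index(
    embedding=embedding,
    index_name="movieindex",
    redis_url="rediss://:<your-azure-redis-password>@<your-azure-redis-endpoint>",
    schema="redis_schema.yaml",  # generated by write_schema in code cell 8
)

# Query the existing index; no new embeddings are written.
print(vectorstore.similarity_search("aliens invade earth", k=3))
```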
azure-functions | Dotnet Isolated In Process Differences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md | recommendations: false #Customer intent: As a developer, I need to understand the differences between running in-process and running in an isolated worker process so that I can choose the best process model for my functions. -# Differences between in-process and isolated worker process .NET Azure Functions +# Differences between isolated worker model and in-process model .NET Azure Functions There are two process models for .NET functions: This article describes the current state of the functional and behavioral differ Use the following table to compare feature and functional differences between the two models: -| Feature/behavior | In-process<sup>3</sup> | Isolated worker process | +| Feature/behavior | Isolated worker process | In-process<sup>3</sup> | | - | - | - |-| [Supported .NET versions](#supported-versions) | Long Term Support (LTS) versions<sup>6</sup> | Long Term Support (LTS) versions<sup>6</sup>,<br/>Standard Term Support (STS) versions,<br/>.NET Framework | -| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | -| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | -| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported](durable/durable-functions-isolated-create-first-csharp.md?pivots=code-editor-visualstudio) (Support does not yet include Durable Entities) | -| Model types exposed by bindings | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types<sup>4</sup> | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>[Service SDK types](dotnet-isolated-process-guide.md#sdk-types)<sup>4</sup> | -| HTTP trigger model types| [HttpRequest] / [IActionResult]<sup>5</sup><br/>[HttpRequestMessage] / [HttpResponseMessage] | [HttpRequestData] / [HttpResponseData]<br/>[HttpRequest] / [IActionResult] (using [ASP.NET Core integration][aspnetcore-integration])<sup>5</sup>| -| Output binding interactions | Return values (single output only),<br/>`out` parameters,<br/>`IAsyncCollector` | Return values in an expanded model with:<br/> - single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)<br/> - arrays of outputs| -| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported - instead [work with SDK types directly](./dotnet-isolated-process-guide.md#register-azure-clients) | -| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](dotnet-isolated-process-guide.md#dependency-injection) (improved model consistent with .NET ecosystem) | -| Middleware | Not supported | [Supported](dotnet-isolated-process-guide.md#middleware) | -| Logging | [ILogger] passed to the function<br/>[ILogger<T>] via [dependency injection](functions-dotnet-dependency-injection.md) | [ILogger<T>]/[ILogger] 
obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)| -| Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported](./dotnet-isolated-process-guide.md#application-insights) | -| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) | -| Cold start times<sup>2</sup> | Optimized | [Configurable optimizations (preview)](./dotnet-isolated-process-guide.md#performance-optimizations) | -| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | [Supported](dotnet-isolated-process-guide.md#readytorun) | +| [Supported .NET versions](#supported-versions) | Long Term Support (LTS) versions<sup>6</sup>,<br/>Standard Term Support (STS) versions,<br/>.NET Framework | Long Term Support (LTS) versions<sup>6</sup> | +| Core packages | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | +| Binding extension packages | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | +| Durable Functions | [Supported](durable/durable-functions-isolated-create-first-csharp.md?pivots=code-editor-visualstudio) (Support does not yet include Durable Entities) | [Supported](durable/durable-functions-overview.md) | +| Model types exposed by bindings | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>[Service SDK types](dotnet-isolated-process-guide.md#sdk-types)<sup>4</sup> | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types<sup>4</sup> | +| HTTP trigger model types| [HttpRequestData] / [HttpResponseData]<br/>[HttpRequest] / [IActionResult] (using [ASP.NET Core integration][aspnetcore-integration])<sup>5</sup>| [HttpRequest] / [IActionResult]<sup>5</sup><br/>[HttpRequestMessage] / [HttpResponseMessage] | +| Output binding interactions | Return values in an expanded model with:<br/> - single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)<br/> - arrays of outputs| Return values (single output only),<br/>`out` parameters,<br/>`IAsyncCollector` | +| Imperative bindings<sup>1</sup> | Not supported - instead [work with SDK types directly](./dotnet-isolated-process-guide.md#register-azure-clients) | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | +| Dependency injection | [Supported](dotnet-isolated-process-guide.md#dependency-injection) (improved model consistent with .NET ecosystem) | [Supported](functions-dotnet-dependency-injection.md) | +| Middleware | [Supported](dotnet-isolated-process-guide.md#middleware) | Not supported | +| Logging | [ILogger<T>]/[ILogger] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)| [ILogger] passed to the function<br/>[ILogger<T>] via [dependency injection](functions-dotnet-dependency-injection.md) 
| +| Application Insights dependencies | [Supported](./dotnet-isolated-process-guide.md#application-insights) | [Supported](functions-monitoring.md#dependencies) | +| Cancellation tokens | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | +| Cold start times<sup>2</sup> | [Configurable optimizations (preview)](./dotnet-isolated-process-guide.md#performance-optimizations) | Optimized | +| ReadyToRun | [Supported](dotnet-isolated-process-guide.md#readytorun) | [Supported](functions-dotnet-class-library.md#readytorun) | <sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models. Use the following table to compare feature and functional differences between the two models: <sup>5</sup> ASP.NET Core types are not supported for .NET Framework. -<sup>6</sup> The isolated worker model supports .NET 8 as a preview, currently for Linux applications only. .NET 8 is not yet available for the in-process model. See the [Azure Functions Roadmap Update post](https://aka.ms/azure-functions-dotnet-roadmap) for more information about .NET 8 plans. +<sup>6</sup> The isolated worker model supports .NET 8 [as a preview](./dotnet-isolated-process-guide.md#preview-net-versions). For information about .NET 8 plans, including future options for the in-process model, see the [Azure Functions Roadmap Update post](https://aka.ms/azure-functions-dotnet-roadmap). [HttpRequest]: /dotnet/api/microsoft.aspnetcore.http.httprequest [IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult |
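As a concrete illustration of the logging and HTTP rows in the comparison table above, here is a minimal isolated-worker sketch (the function name, route, and message are illustrative, not taken from the changed article) that obtains an `ILogger` from `FunctionContext` and uses the `HttpRequestData`/`HttpResponseData` model types:

```cs
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    // Isolated worker model: the logger comes from FunctionContext rather
    // than being passed to the function by the Functions host.
    [Function("HelloFunction")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req,
        FunctionContext context)
    {
        ILogger logger = context.GetLogger("HelloFunction");
        logger.LogInformation("C# HTTP trigger function processed a request.");

        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Hello from the isolated worker model.");
        return response;
    }
}
```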
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | After the debugger is attached, the process execution resumes, and you'll be able to debug. Because your isolated worker process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see [Remote Debugging](functions-develop-vs.md?tabs=isolated-process#remote-debugging). +## Preview .NET versions ++Azure Functions can currently be used with the following preview versions of .NET: ++| Operating system | .NET preview version | +| - | - | +| Windows | .NET 8 Preview 7 | +| Linux | .NET 8 RC1 | ++### Using a preview .NET SDK ++To use Azure Functions with a preview version of .NET, you need to update your project by: ++1. Installing the relevant .NET SDK version in your development environment +1. Changing the `TargetFramework` setting in your `.csproj` file ++When deploying to a function app in Azure, you also need to ensure that the framework is made available to the app. To do so on Windows, you can use the following CLI command. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as "v8.0". ++```azurecli +az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework> +``` ++### Considerations for using .NET preview versions ++Keep these considerations in mind when using Functions with preview versions of .NET: ++If you author your functions in Visual Studio, you must use [Visual Studio Preview](https://visualstudio.microsoft.com/vs/preview/), which supports building Azure Functions projects with .NET preview SDKs. You should also ensure that you have the latest Functions tools and templates. To update these, navigate to `Tools->Options`, select `Azure Functions` under `Projects and Solutions`, and then click the `Check for updates` button, installing updates as prompted. ++During the preview period, your development environment might have a more recent version of the .NET preview than the hosted service. This can cause the application to fail when deployed. To address this, you can configure which version of the SDK to use in [`global.json`](/dotnet/core/tools/global-json). First, identify which versions you have installed by using `dotnet --list-sdks` and note the version that matches what the service supports. Then run `dotnet new globaljson --sdk-version <sdk-version> --force`, substituting `<sdk-version>` with the version you noted in the previous command. For example, `dotnet new globaljson --sdk-version 8.0.100-preview.7.23376.3 --force` causes the system to use the .NET 8 Preview 7 SDK when building your project. ++Note that due to just-in-time loading of preview frameworks, function apps running on Windows may experience increased cold start times when compared against earlier GA versions. + ## Next steps > [!div class="nextstepaction"] |
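For reference, the `dotnet new globaljson` pinning step described above writes a file at the project root along these lines; the version shown matches the Preview 7 example, so substitute whatever `dotnet --list-sdks` reports on your machine:

```json
{
  "sdk": {
    "version": "8.0.100-preview.7.23376.3"
  }
}
```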
azure-functions | Functions Add Output Binding Azure Sql Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-azure-sql-vs-code.md | Because you're using an Azure SQL output binding, you must have the corresponding extension package installed. With the exception of HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Azure SQL extension package to your project. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ```bash-dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql +dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql ```-# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) ```bash-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql +dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql ``` ::: zone-end Open the *HttpExample.cs* project file and add the following `ToDoItem` class, which defines the object that's written to the database. In a C# class library project, the bindings are defined as binding attributes on the function method. The *function.json* file required by Functions is then auto-generated based on these attributes. -# [In-process](#tab/in-process) -Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition: ---The `toDoItems` parameter is an `IAsyncCollector<ToDoItem>` type, which represents a collection of ToDo items that are written to your Azure SQL Database when the function completes. Specific attributes indicate the names of the database table (`dbo.ToDo`) and the connection string for your Azure SQL Database (`SqlConnectionString`). --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Open the *HttpExample.cs* project file and add the following output type class, which defines the combined objects that will be output from our function for both the HTTP response and the SQL output: Add a using statement to the `Microsoft.Azure.Functions.Worker.Extensions.Sql` library: using Microsoft.Azure.Functions.Worker.Extensions.Sql; ``` +# [In-process model](#tab/in-process) +Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition: +++The `toDoItems` parameter is an `IAsyncCollector<ToDoItem>` type, which represents a collection of ToDo items that are written to your Azure SQL Database when the function completes. Specific attributes indicate the names of the database table (`dbo.ToDo`) and the connection string for your Azure SQL Database (`SqlConnectionString`). + ::: zone-end In this code, `arg_name` identifies the binding parameter referenced in your code. ::: zone pivot="programming-language-csharp" -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++Replace the existing Run method with the following code: ++```cs +[Function("HttpExample")] +public static OutputType Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, + FunctionContext executionContext) +{ + var logger = executionContext.GetLogger("HttpExample"); + logger.LogInformation("C# HTTP trigger function processed a request."); ++ var message = "Welcome to Azure Functions!"; ++ var response = req.CreateResponse(HttpStatusCode.OK); + response.Headers.Add("Content-Type", "text/plain; charset=utf-8"); + response.WriteString(message); ++ // Return a response to both HTTP trigger and Azure SQL output binding. 
+ return new OutputType() + { + ToDoItem = new ToDoItem + { + id = System.Guid.NewGuid().ToString(), + title = message, + completed = false, + url = "" + }, + HttpResponse = response + }; +} +``` ++# [In-process model](#tab/in-process) Add code that uses the `toDoItems` output binding object to create a new `ToDoItem`. Add this code before the method returns. public static async Task<IActionResult> Run( } ``` -# [Isolated process](#tab/isolated-process) --Replace the existing Run method with the following code: --```cs -[Function("HttpExample")] -public static OutputType Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, - FunctionContext executionContext) -{ - var logger = executionContext.GetLogger("HttpExample"); - logger.LogInformation("C# HTTP trigger function processed a request."); -- var message = "Welcome to Azure Functions!"; -- var response = req.CreateResponse(HttpStatusCode.OK); - response.Headers.Add("Content-Type", "text/plain; charset=utf-8"); - response.WriteString(message); -- // Return a response to both HTTP trigger and Azure SQL output binding. - return new OutputType() - { - ToDoItem = new ToDoItem - { - id = System.Guid.NewGuid().ToString(), - title = message, - completed = false, - url = "" - }, - HttpResponse = response - }; -} -``` - ::: zone-end |
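The isolated worker sample above returns an `OutputType` value, but the class itself comes from an include this digest doesn't render. A minimal sketch of what such a type could look like, assuming the `SqlOutput` attribute from the Microsoft.Azure.Functions.Worker.Extensions.Sql package added earlier and the `ToDoItem` class the article defines:

```cs
using Microsoft.Azure.Functions.Worker.Extensions.Sql;
using Microsoft.Azure.Functions.Worker.Http;

public class OutputType
{
    // Rows assigned here are upserted into dbo.ToDo using the connection
    // string from the SqlConnectionString application setting.
    [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")]
    public ToDoItem ToDoItem { get; set; }

    // Returned to the caller as the HTTP response.
    public HttpResponseData HttpResponse { get; set; }
}
```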
azure-functions | Functions Add Output Binding Cosmos Db Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md | Because you're using an Azure Cosmos DB output binding, you must have the corresponding extension package installed. Except for HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Azure Cosmos DB extension package to your project. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ```command-dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 3.0.10 +dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB --version 3.0.9 ```-# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) ```command-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB --version 3.0.9 +dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 3.0.10 ``` ::: zone-end Now, you can add the Azure Cosmos DB output binding to your project. ::: zone pivot="programming-language-csharp" In a C# class library project, the bindings are defined as binding attributes on the function method. -# [In-process](#tab/in-process) -Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition: ---The `documentsOut` parameter is an `IAsyncCollector<T>` type, which represents a collection of JSON documents that are written to your Azure Cosmos DB container when the function completes. Specific attributes indicate the names of the container and its parent database. The connection string for your Azure Cosmos DB account is set by the `ConnectionStringSettingAttribute`. --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Open the *HttpExample.cs* project file and add the following classes: The `MyDocument` class defines an object that gets written to the database. The `MultiResponse` class allows you to both write to the specified collection in Azure Cosmos DB and return an HTTP success message. Because you need to return a `MultiResponse` object, you also need to update the method signature. +# [In-process model](#tab/in-process) +Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition: +++The `documentsOut` parameter is an `IAsyncCollector<T>` type, which represents a collection of JSON documents that are written to your Azure Cosmos DB container when the function completes. Specific attributes indicate the names of the container and its parent database. The connection string for your Azure Cosmos DB account is set by the `ConnectionStringSettingAttribute`. + Specific attributes specify the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set in the `CosmosDbConnectionString` application setting. In this code, `arg_name` identifies the binding parameter referenced in your code. ::: zone pivot="programming-language-csharp" -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++Replace the existing Run method with the following code: +++# [In-process model](#tab/in-process) Add code that uses the `documentsOut` output binding object to create a JSON document. Add this code before the method returns.
public static async Task<IActionResult> Run( } ``` -# [Isolated process](#tab/isolated-process) --Replace the existing Run method with the following code: -- ::: zone-end |
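The `MyDocument` and `MultiResponse` classes referenced above are likewise supplied by an include the digest omits. A sketch consistent with the surrounding text, assuming the `CosmosDBOutput` attribute from version 3.x of the Microsoft.Azure.Functions.Worker.Extensions.CosmosDB package and placeholder database, container, and setting names:

```cs
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class MyDocument
{
    public string id { get; set; }
    public string message { get; set; }
}

public class MultiResponse
{
    // Written to the Azure Cosmos DB container when the function completes.
    [CosmosDBOutput("my-database", "my-container",
        ConnectionStringSetting = "CosmosDbConnectionString", CreateIfNotExists = true)]
    public MyDocument Document { get; set; }

    // Returned to the caller as the HTTP response.
    public HttpResponseData HttpResponse { get; set; }
}
```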
azure-functions | Functions Add Output Binding Storage Queue Vs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs.md | Because you're using a Queue storage output binding, you need the Storage bindings extension installed. 1. In the console, run the following [Install-Package](/nuget/tools/ps-ref-install-package) command to install the Storage extensions: - # [In-process](#tab/in-process) + # [Isolated worker model](#tab/isolated-process) ```bash- Install-Package Microsoft.Azure.WebJobs.Extensions.Storage + Install-Package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues -IncludePrerelease ```- # [Isolated process](#tab/isolated-process) + # [In-process model](#tab/in-process) ```bash- Install-Package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues -IncludePrerelease + Install-Package Microsoft.Azure.WebJobs.Extensions.Storage ``` |
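Once the package is installed, a queue output in the isolated worker model can be expressed as a return-value binding. A minimal sketch, assuming the `QueueOutput` attribute from the Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues package and a hypothetical queue name:

```cs
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class QueueExample
{
    // The returned string is written as a message to the "outqueue" queue in
    // the storage account referenced by the AzureWebJobsStorage setting.
    [Function("QueueExample")]
    [QueueOutput("outqueue")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        return $"Queued at {System.DateTime.UtcNow:O}";
    }
}
```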
azure-functions | Functions Bindings Azure Data Explorer Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-input.md | The Azure Data Explorer input binding retrieves data from a database. [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++More samples for the Azure Data Explorer input binding (out of process) are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc). ++This section contains the following examples: ++* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop) +* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop) ++The examples refer to a `Product` class and the Products table, both of which are defined in the previous sections. ++<a id="http-trigger-look-up-id-from-query-string-c-oop"></a> ++### HTTP trigger, get row by ID from query string ++The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `Product` record with the specified query. ++> [!NOTE] +> The HTTP query string parameter is case sensitive. +> ++```cs +using System.Text.Json.Nodes; +using Microsoft.Azure.Functions.Worker; +using Microsoft.Azure.Functions.Worker.Extensions.Kusto; +using Microsoft.Azure.Functions.Worker.Http; +using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common; ++namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.InputBindingSamples +{ + public static class GetProductsQuery + { + [Function("GetProductsQuery")] + public static JsonArray Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproductsquery")] HttpRequestData req, + [KustoInput(Database: "productsdb", + KqlCommand = "declare query_parameters (productId:long);Products | where ProductID == productId", + KqlParameters = "@productId={Query.productId}",Connection = "KustoConnectionString")] JsonArray products) + { + return products; + } + } +} +``` ++<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a> ++### HTTP trigger, get multiple rows from route parameter ++The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves records returned by the query (based on the name of the product, in this case). The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `Product` records in the specified query. 
++```cs +using Microsoft.Azure.Functions.Worker; +using Microsoft.Azure.Functions.Worker.Extensions.Kusto; +using Microsoft.Azure.Functions.Worker.Http; +using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common; ++namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.InputBindingSamples +{ + public static class GetProductsFunction + { + [Function("GetProductsFunction")] + public static IEnumerable<Product> Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproductsfn/{name}")] HttpRequestData req, + [KustoInput(Database: "productsdb", + KqlCommand = "declare query_parameters (name:string);GetProductsByName(name)", + KqlParameters = "@name={name}",Connection = "KustoConnectionString")] IEnumerable<Product> products) + { + return products; + } + } +} +``` ++# [In-process model](#tab/in-process) More samples for the Azure Data Explorer input binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/blob/main/samples/samples-csharp). namespace Microsoft.Azure.WebJobs.Extensions.Kusto.Samples.InputBindingSamples } ``` -# [Isolated process](#tab/isolated-process) --More samples for the Azure Data Explorer input binding (out of process) are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc). --This section contains the following examples: --* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop) -* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop) --The examples refer to a `Product` class and the Products table, both of which are defined in the previous sections. --<a id="http-trigger-look-up-id-from-query-string-c-oop"></a> --### HTTP trigger, get row by ID from query string --The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `Product` record with the specified query. --> [!NOTE] -> The HTTP query string parameter is case sensitive. -> --```cs -using System.Text.Json.Nodes; -using Microsoft.Azure.Functions.Worker; -using Microsoft.Azure.Functions.Worker.Extensions.Kusto; -using Microsoft.Azure.Functions.Worker.Http; -using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common; --namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.InputBindingSamples -{ - public static class GetProductsQuery - { - [Function("GetProductsQuery")] - public static JsonArray Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproductsquery")] HttpRequestData req, - [KustoInput(Database: "productsdb", - KqlCommand = "declare query_parameters (productId:long);Products | where ProductID == productId", - KqlParameters = "@productId={Query.productId}",Connection = "KustoConnectionString")] JsonArray products) - { - return products; - } - } -} -``` --<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a> --### HTTP trigger, get multiple rows from route parameter --The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves records returned by the query (based on the name of the product, in this case). The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. 
That parameter is used to filter the `Product` records in the specified query. --```cs -using Microsoft.Azure.Functions.Worker; -using Microsoft.Azure.Functions.Worker.Extensions.Kusto; -using Microsoft.Azure.Functions.Worker.Http; -using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common; --namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.InputBindingSamples -{ - public static class GetProductsFunction - { - [Function("GetProductsFunction")] - public static IEnumerable<Product> Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproductsfn/{name}")] HttpRequestData req, - [KustoInput(Database: "productsdb", - KqlCommand = "declare query_parameters (name:string);GetProductsByName(name)", - KqlParameters = "@name={name}",Connection = "KustoConnectionString")] IEnumerable<Product> products) - { - return products; - } - } -} -``` --<!-- Uncomment to support C# script examples. -# [C# Script](#tab/csharp-script) -> ::: zone-end |
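The input-binding samples above reference a `Product` class without showing it; the definition appears later in this digest under the output-binding article. For readability, here is the same class as a sketch, matching the `Products` table schema (`ProductID:long, Name:string, Cost:double`):

```cs
using Newtonsoft.Json;

namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common
{
    // Mirrors the Products table: (ProductID:long, Name:string, Cost:double).
    public class Product
    {
        [JsonProperty(nameof(ProductID))]
        public long ProductID { get; set; }

        [JsonProperty(nameof(Name))]
        public string Name { get; set; }

        [JsonProperty(nameof(Cost))]
        public double Cost { get; set; }
    }
}
```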
azure-functions | Functions Bindings Azure Data Explorer Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-output.md | For information on setup and configuration details, see the [overview](functions [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -### [In-process](#tab/in-process) +### [Isolated worker model](#tab/isolated-process) ++More samples for the Azure Data Explorer output binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc). ++This section contains the following examples: ++* [HTTP trigger, write one record](#http-trigger-write-one-record-c-oop) +* [HTTP trigger, write records with mapping](#http-trigger-write-records-with-mapping-oop) ++The examples refer to a `Product` class and a corresponding database table: ++```cs +public class Product +{ + [JsonProperty(nameof(ProductID))] + public long ProductID { get; set; } ++ [JsonProperty(nameof(Name))] + public string Name { get; set; } ++ [JsonProperty(nameof(Cost))] + public double Cost { get; set; } +} +``` ++```kusto +.create-merge table Products (ProductID:long, Name:string, Cost:double) +``` ++<a id="http-trigger-write-one-record-c-oop"></a> ++#### HTTP trigger, write one record ++The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database. The function uses data provided in an HTTP POST request as a JSON body. ++```cs +using Microsoft.Azure.Functions.Worker; +using Microsoft.Azure.Functions.Worker.Extensions.Kusto; +using Microsoft.Azure.Functions.Worker.Http; +using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common; ++namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples +{ + public static class AddProduct + { + [Function("AddProduct")] + [KustoOutput(Database: "productsdb", Connection = "KustoConnectionString", TableName = "Products")] + public static async Task<Product> Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproductuni")] + HttpRequestData req) + { + Product? prod = await req.ReadFromJsonAsync<Product>(); + return prod ?? new Product { }; + } + } +} ++``` ++<a id="http-trigger-write-records-with-mapping-oop"></a> ++#### HTTP trigger, write records with mapping ++The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database. The function uses a mapping that transforms a `Product` to an `Item`. ++To transform data from `Product` to `Item`, the function uses a mapping reference: ++```kusto +.create-merge table Item (ItemID:long, ItemName:string, ItemCost:float) +++-- Create a mapping that transforms an Item to a Product ++.create-or-alter table Product ingestion json mapping "item_to_product_json" '[{"column":"ProductID","path":"$.ItemID"},{"column":"Name","path":"$.ItemName"},{"column":"Cost","path":"$.ItemCost"}]' +``` ++```cs +namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common +{ + public class Item + { + public long ItemID { get; set; } ++ public string? 
ItemName { get; set; } ++ public double ItemCost { get; set; } + } +} +``` ++```cs +using Microsoft.Azure.Functions.Worker; +using Microsoft.Azure.Functions.Worker.Extensions.Kusto; +using Microsoft.Azure.Functions.Worker.Http; +using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common; ++namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples +{ + public static class AddProductsWithMapping + { + [Function("AddProductsWithMapping")] + [KustoOutput(Database: "productsdb", Connection = "KustoConnectionString", TableName = "Products", MappingRef = "item_to_product_json")] + public static async Task<Item> Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproductswithmapping")] + HttpRequestData req) + { + Item? item = await req.ReadFromJsonAsync<Item>(); + return item ?? new Item { }; + } + } +} +``` +### [In-process model](#tab/in-process) More samples for the Azure Data Explorer output binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-csharp). namespace Microsoft.Azure.WebJobs.Extensions.Kusto.Samples.OutputBindingSamples } ``` -### [Isolated process](#tab/isolated-process) --More samples for the Azure Data Explorer output binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc). --This section contains the following examples: --* [HTTP trigger, write one record](#http-trigger-write-one-record-c-oop) -* [HTTP trigger, write records with mapping](#http-trigger-write-records-with-mapping-oop) --The examples refer to `Product` class and a corresponding database table: --```cs -public class Product -{ - [JsonProperty(nameof(ProductID))] - public long ProductID { get; set; } -- [JsonProperty(nameof(Name))] - public string Name { get; set; } -- [JsonProperty(nameof(Cost))] - public double Cost { get; set; } -} -``` --```kusto -.create-merge table Products (ProductID:long, Name:string, Cost:double) -``` --<a id="http-trigger-write-one-record-c-oop"></a> --#### HTTP trigger, write one record --The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database. The function uses data provided in an HTTP POST request as a JSON body. --```cs -using Microsoft.Azure.Functions.Worker; -using Microsoft.Azure.Functions.Worker.Extensions.Kusto; -using Microsoft.Azure.Functions.Worker.Http; -using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common; --namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples -{ - public static class AddProduct - { - [Function("AddProduct")] - [KustoOutput(Database: "productsdb", Connection = "KustoConnectionString", TableName = "Products")] - public static async Task<Product> Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproductuni")] - HttpRequestData req) - { - Product? prod = await req.ReadFromJsonAsync<Product>(); - return prod ?? new Product { }; - } - } -} --``` --<a id="http-trigger-write-records-with-mapping-oop"></a> --#### HTTP trigger, write records with mapping --The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database. The function uses mapping that transforms a `Product` to `Item`. 
--To transform data from `Product` to `Item`, the function uses a mapping reference: --```kusto -.create-merge table Item (ItemID:long, ItemName:string, ItemCost:float) -- Create a mapping that transforms an Item to a Product--.create-or-alter table Product ingestion json mapping "item_to_product_json" '[{"column":"ProductID","path":"$.ItemID"},{"column":"Name","path":"$.ItemName"},{"column":"Cost","path":"$.ItemCost"}]' -``` --```cs -namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common -{ - public class Item - { - public long ItemID { get; set; } -- public string? ItemName { get; set; } -- public double ItemCost { get; set; } - } -} -``` --```cs -using Microsoft.Azure.Functions.Worker; -using Microsoft.Azure.Functions.Worker.Extensions.Kusto; -using Microsoft.Azure.Functions.Worker.Http; -using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common; --namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples -{ - public static class AddProductsWithMapping - { - [Function("AddProductsWithMapping")] - [KustoOutput(Database: "productsdb", Connection = "KustoConnectionString", TableName = "Products", MappingRef = "item_to_product_json")] - public static async Task<Item> Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproductswithmapping")] - HttpRequestData req) - { - Item? item = await req.ReadFromJsonAsync<Item>(); - return item ?? new Item { }; - } - } -} -``` ::: zone-end |
azure-functions | Functions Bindings Azure Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer.md | This set of articles explains how to work with [Azure Data Explorer](/azure/data The extension NuGet package you install depends on the C# mode you're using in your function app. -# [In-process](#tab/in-process) --Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). --Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kusto). --```bash -dotnet add package Microsoft.Azure.WebJobs.Extensions.Kusto --prerelease -``` --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Functions run in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). Add the extension to your project by installing [this NuGet package](https://www dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Kusto --prerelease ``` -<!-- awaiting bundle support -# [C# script](#tab/csharp-script) +# [In-process model](#tab/in-process) ++Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). -Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. +Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kusto). -You can install this version of the extension in your function app by registering the [extension bundle], version 4.x, or a later version. >+```bash +dotnet add package Microsoft.Azure.WebJobs.Extensions.Kusto --prerelease +``` |
azure-functions | Functions Bindings Azure Sql Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md | For information on setup and configuration details, see the [overview](./functio [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp). +More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc). This section contains the following examples: -* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c) -* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c) -* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c) +* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop) +* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop) +* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c-oop) The examples refer to a `ToDoItem` class and a corresponding database table: The examples refer to a `ToDoItem` class and a corresponding database table: :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7"::: -<a id="http-trigger-look-up-id-from-query-string-c"></a> +<a id="http-trigger-look-up-id-from-query-string-c-oop"></a> ### HTTP trigger, get row by ID from query string The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query. using System.Collections.Generic; using System.Linq; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc;-using Microsoft.Azure.WebJobs; -using Microsoft.Azure.WebJobs.Extensions.Http; +using Microsoft.Azure.Functions.Worker; +using Microsoft.Azure.Functions.Worker.Extensions.Sql; +using Microsoft.Azure.Functions.Worker.Http; namespace AzureSQLSamples { namespace AzureSQLSamples public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req,- [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id", + [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id", commandType: System.Data.CommandType.Text, parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")] namespace AzureSQLSamples } ``` -<a id="http-trigger-get-multiple-items-from-route-data-c"></a> +<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a> ### HTTP trigger, get multiple rows from route parameter The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query. 
The following example shows a [C# function](functions-dotnet-class-library.md) t using System.Collections.Generic; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc;-using Microsoft.Azure.WebJobs; -using Microsoft.Azure.WebJobs.Extensions.Http; +using Microsoft.Azure.Functions.Worker; +using Microsoft.Azure.Functions.Worker.Extensions.Sql; +using Microsoft.Azure.Functions.Worker.Http; namespace AzureSQLSamples { namespace AzureSQLSamples public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")] HttpRequest req,- [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority", + [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority", commandType: System.Data.CommandType.Text, parameters: "@Priority={priority}", connectionStringSetting: "SqlConnectionString")] namespace AzureSQLSamples } ``` -<a id="http-trigger-delete-one-or-multiple-rows-c"></a> +<a id="http-trigger-delete-one-or-multiple-rows-c-oop"></a> ### HTTP trigger, delete rows The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter. The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In t :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="11-25"::: +```cs +namespace AzureSQL.ToDo +{ + public static class DeleteToDo + { + // delete all items or a specific item from querystring + // returns remaining items + // uses input binding with a stored procedure DeleteToDo to delete items and return remaining items + [FunctionName("DeleteToDo")] + public static IActionResult Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "DeleteFunction")] HttpRequest req, + ILogger log, + [SqlInput(commandText: "DeleteToDo", commandType: System.Data.CommandType.StoredProcedure, + parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")] + IEnumerable<ToDoItem> toDoItems) + { + return new OkObjectResult(toDoItems); + } + } +} +``` -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) -More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc). +More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp). 
This section contains the following examples: -* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop) -* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop) -* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c-oop) +* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c) +* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c) +* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c) The examples refer to a `ToDoItem` class and a corresponding database table: The examples refer to a `ToDoItem` class and a corresponding database table: :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7"::: -<a id="http-trigger-look-up-id-from-query-string-c-oop"></a> +<a id="http-trigger-look-up-id-from-query-string-c"></a> ### HTTP trigger, get row by ID from query string The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query. using System.Collections.Generic; using System.Linq; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc;-using Microsoft.Azure.Functions.Worker; -using Microsoft.Azure.Functions.Worker.Extensions.Sql; -using Microsoft.Azure.Functions.Worker.Http; +using Microsoft.Azure.WebJobs; +using Microsoft.Azure.WebJobs.Extensions.Http; namespace AzureSQLSamples { namespace AzureSQLSamples public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req,- [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id", + [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id", commandType: System.Data.CommandType.Text, parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")] namespace AzureSQLSamples } ``` -<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a> +<a id="http-trigger-get-multiple-items-from-route-data-c"></a> ### HTTP trigger, get multiple rows from route parameter The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query. 
The following example shows a [C# function](functions-dotnet-class-library.md) t using System.Collections.Generic; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc;-using Microsoft.Azure.Functions.Worker; -using Microsoft.Azure.Functions.Worker.Extensions.Sql; -using Microsoft.Azure.Functions.Worker.Http; +using Microsoft.Azure.WebJobs; +using Microsoft.Azure.WebJobs.Extensions.Http; namespace AzureSQLSamples { namespace AzureSQLSamples public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")] HttpRequest req,- [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority", + [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority", commandType: System.Data.CommandType.Text, parameters: "@Priority={priority}", connectionStringSetting: "SqlConnectionString")] namespace AzureSQLSamples } ``` -<a id="http-trigger-delete-one-or-multiple-rows-c-oop"></a> +<a id="http-trigger-delete-one-or-multiple-rows-c"></a> ### HTTP trigger, delete rows The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter. The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In t :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="11-25"::: -```cs -namespace AzureSQL.ToDo -{ - public static class DeleteToDo - { - // delete all items or a specific item from querystring - // returns remaining items - // uses input binding with a stored procedure DeleteToDo to delete items and return remaining items - [FunctionName("DeleteToDo")] - public static IActionResult Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "DeleteFunction")] HttpRequest req, - ILogger log, - [SqlInput(commandText: "DeleteToDo", commandType: System.Data.CommandType.StoredProcedure, - parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")] - IEnumerable<ToDoItem> toDoItems) - { - return new OkObjectResult(toDoItems); - } - } -} -``` --# [C# Script](#tab/csharp-script) ---More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx). --This section contains the following examples: --* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-csharpscript) -* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-csharpscript) --The examples refer to a `ToDoItem` class and a corresponding database table: ----<a id="http-trigger-look-up-id-from-query-string-csharpscript"></a> -### HTTP trigger, get row by ID from query string --The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query. --> [!NOTE] -> The HTTP query string parameter is case-sensitive. 
-> --Here's the binding data in the *function.json* file: --```json -{ - "authLevel": "anonymous", - "type": "httpTrigger", - "direction": "in", - "name": "req", - "methods": [ - "get" - ] -}, -{ - "type": "http", - "direction": "out", - "name": "res" -}, -{ - "name": "todoItem", - "type": "sql", - "direction": "in", - "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id", - "commandType": "Text", - "parameters": "@Id = {Query.id}", - "connectionStringSetting": "SqlConnectionString" -} -``` --The [configuration](#configuration) section explains these properties. --Here's the C# script code: --```cs -#r "Newtonsoft.Json" --using System.Net; -using Microsoft.AspNetCore.Mvc; -using Microsoft.Extensions.Primitives; -using Newtonsoft.Json; -using System.Collections.Generic; --public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItem) -{ - return new OkObjectResult(todoItem); -} -``` ---<a id="http-trigger-delete-one-or-multiple-rows-csharpscript"></a> -### HTTP trigger, delete rows --The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding to execute a stored procedure with input from the HTTP request query parameter. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter. --The stored procedure `dbo.DeleteToDo` must be created on the SQL database. ---Here's the binding data in the *function.json* file: --```json -{ - "authLevel": "anonymous", - "type": "httpTrigger", - "direction": "in", - "name": "req", - "methods": [ - "get" - ] -}, -{ - "type": "http", - "direction": "out", - "name": "res" -}, -{ - "name": "todoItems", - "type": "sql", - "direction": "in", - "commandText": "DeleteToDo", - "commandType": "StoredProcedure", - "parameters": "@Id = {Query.id}", - "connectionStringSetting": "SqlConnectionString" -} -``` - :::code language="csharp" source="~/functions-sql-todo-sample/DeleteToDo.cs" range="4-30"::: -The [configuration](#configuration) section explains these properties. --Here's the C# script code: --```cs -#r "Newtonsoft.Json" --using System.Net; -using Microsoft.AspNetCore.Mvc; -using Microsoft.Extensions.Primitives; -using Newtonsoft.Json; -using System.Collections.Generic; --public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItems) -{ - return new OkObjectResult(todoItems); -} -``` - ::: zone-end |
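The SQL input samples above select the `[Id], [order], [title], [url], [completed]` columns into a `ToDoItem`; the class itself comes from an include the digest doesn't render. A sketch consistent with those columns (the property types are inferred, so treat them as assumptions):

```cs
using System;

namespace AzureSQLSamples
{
    // Property names align with the dbo.ToDo columns used in the queries above.
    public class ToDoItem
    {
        public Guid Id { get; set; }
        public int? order { get; set; }
        public string title { get; set; }
        public string url { get; set; }
        public bool? completed { get; set; }
    }
}
```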
azure-functions | Functions Bindings Azure Sql Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md | For information on setup and configuration details, see the [overview](./functio [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) --More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp). --This section contains the following examples: --* [HTTP trigger, write one record](#http-trigger-write-one-record-c) -* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-c) -* [HTTP trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c) --The examples refer to a `ToDoItem` class and a corresponding database table: -----<a id="http-trigger-write-one-record-c"></a> --### HTTP trigger, write one record --The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body. ---<a id="http-trigger-write-to-two-tables-c"></a> --### HTTP trigger, write to two tables --The following example shows a [C# function](functions-dotnet-class-library.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings. --```sql -CREATE TABLE dbo.RequestLog ( - Id int identity(1,1) primary key, - RequestTimeStamp datetime2 not null, - ItemCount int not null -) -``` ---```cs -namespace AzureSQL.ToDo -{ - public static class PostToDo - { - // create a new ToDoItem from body object - // uses output binding to insert new item into ToDo table - [FunctionName("PostToDo")] - public static async Task<IActionResult> Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req, - ILogger log, - [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems, - [Sql(commandText: "dbo.RequestLog", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs) - { - string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); - ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); -- // generate a new id for the todo item - toDoItem.Id = Guid.NewGuid(); -- // set Url from env variable ToDoUri - toDoItem.url = Environment.GetEnvironmentVariable("ToDoUri")+"?id="+toDoItem.Id.ToString(); -- // if completed is not provided, default to false - if (toDoItem.completed == null) - { - toDoItem.completed = false; - } -- await toDoItems.AddAsync(toDoItem); - await toDoItems.FlushAsync(); - List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem }; -- requestLog = new RequestLog(); - requestLog.RequestTimeStamp = DateTime.Now; - requestLog.ItemCount = 1; - await requestLogs.AddAsync(requestLog); - await requestLogs.FlushAsync(); -- return new OkObjectResult(toDoItemList); - } - } -- public class RequestLog { - public DateTime RequestTimeStamp { get; set; } - public int ItemCount { get; set; } - } -} -``` --<a id="http-trigger-write-records-using-iasynccollector-c"></a> --### HTTP trigger, write records using IAsyncCollector --The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of 
records to a database, using data provided in an HTTP POST body JSON array. --```cs -using Microsoft.AspNetCore.Http; -using Microsoft.AspNetCore.Mvc; -using Microsoft.Azure.WebJobs; -using Microsoft.Azure.WebJobs.Extensions.Http; -using Newtonsoft.Json; -using System.IO; -using System.Threading.Tasks; --namespace AzureSQLSamples -{ - public static class WriteRecordsAsync - { - [FunctionName("WriteRecordsAsync")] - public static async Task<IActionResult> Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")] - HttpRequest req, - [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems) - { - string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); - var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody); - foreach (ToDoItem newItem in incomingItems) - { - await newItems.AddAsync(newItem); - } - // Rows are upserted here - await newItems.FlushAsync(); -- return new CreatedResult($"/api/addtodo-asynccollector", "done"); - } - } -} -``` ---# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc). namespace AzureSQL.ToDo } ``` -# [C# Script](#tab/csharp-script) +# [In-process model](#tab/in-process) -More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx). +More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp). This section contains the following examples: -* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-csharpscript) -* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-csharpscript) +* [HTTP trigger, write one record](#http-trigger-write-one-record-c) +* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-c) +* [HTTP trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c) The examples refer to a `ToDoItem` class and a corresponding database table: The examples refer to a `ToDoItem` class and a corresponding database table: :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7"::: -<a id="http-trigger-write-records-to-table-csharpscript"></a> -### HTTP trigger, write records to a table --The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a table, using data provided in an HTTP POST request as a JSON body. --The following is binding data in the function.json file: --```json -{ - "authLevel": "anonymous", - "type": "httpTrigger", - "direction": "in", - "name": "req", - "methods": [ - "post" - ] -}, -{ - "type": "http", - "direction": "out", - "name": "res" -}, -{ - "name": "todoItem", - "type": "sql", - "direction": "out", - "commandText": "dbo.ToDo", - "connectionStringSetting": "SqlConnectionString" -} -``` --The [configuration](#configuration) section explains these properties. 
--The following is sample C# script code: --```cs -#r "Newtonsoft.Json" +<a id="http-trigger-write-one-record-c"></a> -using System.Net; -using Microsoft.AspNetCore.Mvc; -using Microsoft.Extensions.Primitives; -using Newtonsoft.Json; +### HTTP trigger, write one record -public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem) -{ - log.LogInformation("C# HTTP trigger function processed a request."); +The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body. - string requestBody = new StreamReader(req.Body).ReadToEnd(); - todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); - return new OkObjectResult(todoItem); -} -``` +<a id="http-trigger-write-to-two-tables-c"></a> -<a id="http-trigger-write-to-two-tables-csharpscript"></a> ### HTTP trigger, write to two tables -The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings. --The second table, `dbo.RequestLog`, corresponds to the following definition: +The following example shows a [C# function](functions-dotnet-class-library.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings. ```sql CREATE TABLE dbo.RequestLog ( CREATE TABLE dbo.RequestLog ( ) ``` -The following is binding data in the function.json file: -```json -{ - "authLevel": "anonymous", - "type": "httpTrigger", - "direction": "in", - "name": "req", - "methods": [ - "post" - ] -}, -{ - "type": "http", - "direction": "out", - "name": "res" -}, -{ - "name": "todoItem", - "type": "sql", - "direction": "out", - "commandText": "dbo.ToDo", - "connectionStringSetting": "SqlConnectionString" -}, +```cs +namespace AzureSQL.ToDo {- "name": "requestLog", - "type": "sql", - "direction": "out", - "commandText": "dbo.RequestLog", - "connectionStringSetting": "SqlConnectionString" + public static class PostToDo + { + // create a new ToDoItem from body object + // uses output binding to insert new item into ToDo table + [FunctionName("PostToDo")] + public static async Task<IActionResult> Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req, + ILogger log, + [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems, + [Sql(commandText: "dbo.RequestLog", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs) + { + string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); + ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); ++ // generate a new id for the todo item + toDoItem.Id = Guid.NewGuid(); ++ // set Url from env variable ToDoUri + toDoItem.url = Environment.GetEnvironmentVariable("ToDoUri")+"?id="+toDoItem.Id.ToString(); ++ // if completed is not provided, default to false + if (toDoItem.completed == null) + { + toDoItem.completed = false; + } ++ await toDoItems.AddAsync(toDoItem); + await toDoItems.FlushAsync(); + List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem }; ++ requestLog = new RequestLog(); + requestLog.RequestTimeStamp = DateTime.Now; + requestLog.ItemCount = 1; + await 
requestLogs.AddAsync(requestLog); + await requestLogs.FlushAsync(); ++ return new OkObjectResult(toDoItemList); + } + } ++ public class RequestLog { + public DateTime RequestTimeStamp { get; set; } + public int ItemCount { get; set; } + } } ``` -The [configuration](#configuration) section explains these properties. +<a id="http-trigger-write-records-using-iasynccollector-c"></a> -The following is sample C# script code: +### HTTP trigger, write records using IAsyncCollector -```cs -#r "Newtonsoft.Json" +The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database, using data provided in an HTTP POST body JSON array. -using System.Net; +```cs +using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc;-using Microsoft.Extensions.Primitives; +using Microsoft.Azure.WebJobs; +using Microsoft.Azure.WebJobs.Extensions.Http; using Newtonsoft.Json;+using System.IO; +using System.Threading.Tasks; -public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem, out RequestLog requestLog) +namespace AzureSQLSamples {- log.LogInformation("C# HTTP trigger function processed a request."); -- string requestBody = new StreamReader(req.Body).ReadToEnd(); - todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); -- requestLog = new RequestLog(); - requestLog.RequestTimeStamp = DateTime.Now; - requestLog.ItemCount = 1; -- return new OkObjectResult(todoItem); -} + public static class WriteRecordsAsync + { + [FunctionName("WriteRecordsAsync")] + public static async Task<IActionResult> Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")] + HttpRequest req, + [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems) + { + string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); + var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody); + foreach (ToDoItem newItem in incomingItems) + { + await newItems.AddAsync(newItem); + } + // Rows are upserted here + await newItems.FlushAsync(); -public class RequestLog { - public DateTime RequestTimeStamp { get; set; } - public int ItemCount { get; set; } + return new CreatedResult($"/api/addtodo-asynccollector", "done"); + } + } } ``` - ::: zone-end |
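The isolated worker model sample this row points to is pulled in from a code include, so the diff doesn't show it. As a rough sketch only, the same upsert can be expressed in the isolated model by applying a `SqlOutput` attribute to the function's return value; the route, the `SqlConnectionString` setting name, and the `ToDoItem` shape are carried over from the in-process samples above, and the exact attribute signature should be verified against the linked samples-outofproc repository.

```csharp
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

namespace AzureSQL.ToDo
{
    public static class AddToDo
    {
        // The returned item is upserted into dbo.ToDo, mirroring the
        // in-process IAsyncCollector example in this row.
        [Function("AddToDo")]
        [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")]
        public static async Task<ToDoItem> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo")] HttpRequestData req)
        {
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            return JsonSerializer.Deserialize<ToDoItem>(requestBody);
        }
    }
}
```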
azure-functions | Functions Bindings Azure Sql Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md | For more information on change tracking and how it's used by applications such a <a id="example"></a> -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp). +More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc). The example refers to a `ToDoItem` class and a corresponding database table: The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table: ```cs+using System; using System.Collections.Generic;-using Microsoft.Azure.WebJobs; +using Microsoft.Azure.Functions.Worker; +using Microsoft.Azure.Functions.Worker.Extensions.Sql; using Microsoft.Extensions.Logging;-using Microsoft.Azure.WebJobs.Extensions.Sql; +using Newtonsoft.Json; + namespace AzureSQL.ToDo { public static class ToDoTrigger {- [FunctionName("ToDoTrigger")] + [Function("ToDoTrigger")] public static void Run( [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")] IReadOnlyList<SqlChange<ToDoItem>> changes,- ILogger logger) + FunctionContext context) {+ var logger = context.GetLogger("ToDoTrigger"); foreach (SqlChange<ToDoItem> change in changes) { ToDoItem toDoItem = change.Item; namespace AzureSQL.ToDo } ``` -# [Isolated process](#tab/isolated-process) -More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc). +# [In-process model](#tab/in-process) ++More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp). The example refers to a `ToDoItem` class and a corresponding database table: The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table: ```cs-using System; using System.Collections.Generic;-using Microsoft.Azure.Functions.Worker; -using Microsoft.Azure.Functions.Worker.Extensions.Sql; +using Microsoft.Azure.WebJobs; using Microsoft.Extensions.Logging;-using Newtonsoft.Json; -+using Microsoft.Azure.WebJobs.Extensions.Sql; namespace AzureSQL.ToDo { public static class ToDoTrigger {- [Function("ToDoTrigger")] + [FunctionName("ToDoTrigger")] public static void Run( [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")] IReadOnlyList<SqlChange<ToDoItem>> changes,- FunctionContext context) + ILogger logger) {- var logger = context.GetLogger("ToDoTrigger"); foreach (SqlChange<ToDoItem> change in changes) { ToDoItem toDoItem = change.Item; namespace AzureSQL.ToDo } ``` --# [C# Script](#tab/csharp-script) --More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx). 
---The example refers to a `ToDoItem` class and a corresponding database table: ----[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table: --```sql -ALTER DATABASE [SampleDatabase] -SET CHANGE_TRACKING = ON -(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); --ALTER TABLE [dbo].[ToDo] -ENABLE CHANGE_TRACKING; -``` --The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with two properties: -- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class.-- **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.--The following example shows a SQL trigger in a function.json file and a [C# script function](functions-reference-csharp.md) that is invoked when there are changes to the `ToDo` table: --The following is binding data in the function.json file: --```json -{ - "name": "todoChanges", - "type": "sqlTrigger", - "direction": "in", - "tableName": "dbo.ToDo", - "connectionStringSetting": "SqlConnectionString" -} -``` -The following is the C# script function: --```csharp -#r "Newtonsoft.Json" --using System.Net; -using Microsoft.AspNetCore.Mvc; -using Microsoft.Extensions.Primitives; -using Newtonsoft.Json; --public static void Run(IReadOnlyList<SqlChange<ToDoItem>> todoChanges, ILogger log) -{ - log.LogInformation($"C# SQL trigger function processed a request."); -- foreach (SqlChange<ToDoItem> change in todoChanges) - { - ToDoItem toDoItem = change.Item; - log.LogInformation($"Change operation: {change.Operation}"); - log.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}"); - } -} -``` - - ::: zone-end ::: zone pivot="programming-language-java" param($todoChanges) $changesJson = $todoChanges | ConvertTo-Json -Compress Write-Host "SQL Changes: $changesJson" ```-- ::: zone-end---- ::: zone pivot="programming-language-javascript" ## Example usage <a id="example"></a> |
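Both trigger examples in this row reference a `ToDoItem` class whose definition lives in an include file the diff doesn't expand. Here's a minimal sketch consistent with the properties the log statements use (`Id`, `title`, `url`, `completed`); the `order` property and the lower-case member names are assumptions based on the SQL extension samples:

```csharp
using System;

namespace AzureSQL.ToDo
{
    // Shape inferred from the log statements in the trigger samples above;
    // matches the dbo.ToDo table the samples create.
    public class ToDoItem
    {
        public Guid Id { get; set; }
        public int? order { get; set; }
        public string title { get; set; }
        public string url { get; set; }
        public bool? completed { get; set; }
    }
}
```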
azure-functions | Functions Bindings Azure Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md | This set of articles explains how to work with [Azure SQL](/azure/azure-sql/inde The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) --Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). --Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql). --```bash -dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql -``` --To use a preview version of the Microsoft.Azure.WebJobs.Extensions.Sql package for [SQL trigger](functions-bindings-azure-sql-trigger.md) functionality, add the `--prerelease` flag to the command. --```bash -dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql --prerelease -``` --> [!NOTE] -> Breaking changes between preview releases of the Azure SQL trigger for Functions requires that all Functions targeting the same database use the same version of the SQL extension package. --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql --prerelease > [!NOTE] > Breaking changes between preview releases of the Azure SQL trigger for Functions requires that all Functions targeting the same database use the same version of the SQL extension package. -# [C# script](#tab/csharp-script) +# [In-process model](#tab/in-process) -Functions run as C# script, which is supported primarily for C# portal editing. The SQL bindings extension is part of the v4 [extension bundle], which is specified in your host.json project file. +Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). -This extension is available from the extension bundle v4, which is specified in your `host.json` file by: +Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql). -```json -{ - "version": "2.0", - "extensionBundle": { - "id": "Microsoft.Azure.Functions.ExtensionBundle", - "version": "[4.*, 5.0.0)" - } -} +```bash +dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql ``` +To use a preview version of the Microsoft.Azure.WebJobs.Extensions.Sql package for [SQL trigger](functions-bindings-azure-sql-trigger.md) functionality, add the `--prerelease` flag to the command. -You can add the preview extension bundle to use the [SQL trigger](functions-bindings-azure-sql-trigger.md) by adding or replacing the following code in your `host.json` file: --```json -{ - "version": "2.0", - "extensionBundle": { - "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview", - "version": "[4.*, 5.0.0)" - } -} +```bash +dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql --prerelease ``` > [!NOTE]-> Breaking changes between preview releases of the Azure SQL trigger for Functions requires that all Functions targeting the same database use the same version of the extension bundle. 
-+> Breaking changes between preview releases of the Azure SQL trigger for Functions requires that all Functions targeting the same database use the same version of the SQL extension package. |
azure-functions | Functions Bindings Cache Trigger Redislist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redislist.md | The `RedisListTrigger` pops new elements from a list and surfaces those entries The following sample polls the key `listTest` at a localhost Redis instance at `127.0.0.1:6379`: -### [In-process](#tab/in-process) +### [Isolated worker model](#tab/isolated-process) ++The isolated process examples aren't available in preview. ++### [In-process model](#tab/in-process) ```csharp [FunctionName(nameof(ListsTrigger))] public static void ListsTrigger( } ``` -### [Isolated process](#tab/isolated-process) --The isolated process examples aren't available in preview. - ::: zone-end |
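The in-process snippet in this row is truncated by the diff context. A minimal complete version might look like the following; it assumes the cache connection string is stored in an app setting named `redisConnectionString` and that the trigger surfaces each popped element as a `string` (both assumptions, since the original sample's constants aren't shown):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Redis;
using Microsoft.Extensions.Logging;

public static class RedisListSample
{
    // Fires once for each element popped from the listTest key.
    [FunctionName(nameof(ListsTrigger))]
    public static void ListsTrigger(
        [RedisListTrigger("redisConnectionString", "listTest")] string entry,
        ILogger logger)
    {
        logger.LogInformation($"The entry pushed to the list listTest: '{entry}'");
    }
}
```

Pushing a value with `LPUSH listTest hello` from redis-cli would then invoke the function with `entry` set to `hello`.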
azure-functions | Functions Bindings Cache Trigger Redispubsub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redispubsub.md | Redis features [publish/subscribe functionality](https://redis.io/docs/interact/ [!INCLUDE [dotnet-execution](../../includes/functions-dotnet-execution-model.md)] -### [In-process](#tab/in-process) +### [Isolated worker model](#tab/isolated-process) ++The isolated process examples aren't available in preview. ++```csharp +//TBD +``` ++### [In-process model](#tab/in-process) This sample listens to the channel `pubsubTest`. public static void KeyeventTrigger( } ``` -### [Isolated process](#tab/isolated-process) --The isolated process examples aren't available in preview. --```csharp -//TBD -``` - ::: zone-end |
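The `pubsubTest` listener in this row is likewise shown only in fragments. A minimal sketch under the same assumptions as the list trigger above (a `redisConnectionString` app setting, `string` message payload):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Redis;
using Microsoft.Extensions.Logging;

public static class RedisPubSubSample
{
    // Fires once for each message published to the pubsubTest channel.
    [FunctionName(nameof(PubSubTrigger))]
    public static void PubSubTrigger(
        [RedisPubSubTrigger("redisConnectionString", "pubsubTest")] string message,
        ILogger logger)
    {
        logger.LogInformation($"The message broadcast to channel pubsubTest: '{message}'");
    }
}
```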
azure-functions | Functions Bindings Cache Trigger Redisstream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redisstream.md | The `RedisStreamTrigger` reads new entries from a stream and surfaces those elem [!INCLUDE [dotnet-execution](../../includes/functions-dotnet-execution-model.md)] -### [In-process](#tab/in-process) +### [Isolated worker model](#tab/isolated-process) ++The isolated process examples aren't available in preview. ++```csharp +//TBD +``` ++### [In-process model](#tab/in-process) ```csharp public static void StreamsTrigger( } ``` -### [Isolated process](#tab/isolated-process) --The isolated process examples aren't available in preview. --```csharp -//TBD -``` - ::: zone-end |
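For completeness, a sketch of the stream trigger pattern this row describes, under the same naming assumptions as the other Redis examples (the `redisConnectionString` setting and `string` payload type are placeholders):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Redis;
using Microsoft.Extensions.Logging;

public static class RedisStreamSample
{
    // Fires for each new entry appended to the streamTest stream.
    [FunctionName(nameof(StreamsTrigger))]
    public static void StreamsTrigger(
        [RedisStreamTrigger("redisConnectionString", "streamTest")] string entry,
        ILogger logger)
    {
        logger.LogInformation($"The entry pushed to the stream streamTest: '{entry}'");
    }
}
```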
azure-functions | Functions Bindings Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache.md | You can integrate Azure Cache for Redis and Azure Functions to build functions t ## Install extension -### [In-process](#tab/in-process) +### [Isolated worker model](#tab/isolated-process) -Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +Functions run in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Redis). +Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Redis). ```bash-dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease +dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Redis --prerelease ``` -### [Isolated process](#tab/isolated-process) +### [In-process model](#tab/in-process) -Functions run in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). +Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). -Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Redis). +Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Redis). ```bash-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Redis --prerelease +dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease ``` |
azure-functions | Functions Bindings Cosmosdb V2 Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md | Unless otherwise noted, examples in this article target version 3.x of the [Azur [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++This section contains examples that require version 3.x of Azure Cosmos DB extension and 5.x of Azure Storage extension. If not already present in your function app, add reference to the following NuGet packages: ++ * [Microsoft.Azure.Functions.Worker.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB) + * [Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues/5.0.0) ++* [Queue trigger, look up ID from JSON](#queue-trigger-look-up-id-from-json-isolated) ++The examples refer to a simple `ToDoItem` type: +++<a id="queue-trigger-look-up-id-from-json-isolated"></a> ++### Queue trigger, look up ID from JSON ++The following example shows a function that retrieves a single document. The function is triggered by a JSON message in the storage queue. The queue trigger parses the JSON into an object of type `ToDoItemLookup`, which contains the ID and partition key value to retrieve. That ID and partition key value are used to return a `ToDoItem` document from the specified database and collection. +++# [In-process model](#tab/in-process) This section contains the following examples for using [in-process C# class library functions](functions-dotnet-class-library.md) with extension version 3.x: namespace CosmosDBSamplesV2 } ``` -# [Isolated process](#tab/isolated-process) --This section contains examples that require version 3.x of Azure Cosmos DB extension and 5.x of Azure Storage extension. If not already present in your function app, add reference to the following NuGet packages: -- * [Microsoft.Azure.Functions.Worker.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB) - * [Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues/5.0.0) --* [Queue trigger, look up ID from JSON](#queue-trigger-look-up-id-from-json-isolated) --The examples refer to a simple `ToDoItem` type: ---<a id="queue-trigger-look-up-id-from-json-isolated"></a> --### Queue trigger, look up ID from JSON --The following example shows a function that retrieves a single document. The function is triggered by a JSON message in the storage queue. The queue trigger parses the JSON into an object of type `ToDoItemLookup`, which contains the ID and partition key value to retrieve. That ID and partition key value are used to return a `ToDoItem` document from the specified database and collection. -- ::: zone-end Here's the binding data in the *function.json* file: ::: zone pivot="programming-language-csharp" ## Attributes -Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-input). 
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#azure-cosmos-db-v2-input). # [Extension 4.x+](#tab/extensionv4/in-process) |
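The isolated-model example in this row parses the queue message into a `ToDoItemLookup` object whose definition comes from an include file. A plausible minimal shape, inferred from the description (a document ID plus a partition key value); the property names are assumptions:

```csharp
namespace CosmosDBSamplesV2
{
    // Hypothetical queue payload: the ID and partition key value used
    // to retrieve a single ToDoItem document.
    public class ToDoItemLookup
    {
        public string ToDoItemId { get; set; }
        public string ToDoItemPartitionKeyValue { get; set; }
    }
}
```

With this shape, a queue message such as `{"ToDoItemId":"...","ToDoItemPartitionKeyValue":"..."}` would supply the values for binding expressions like `{ToDoItemId}` in the input binding.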
azure-functions | Functions Bindings Cosmosdb V2 Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. ::: zone-end- ## Example Unless otherwise noted, examples in this article target version 3.x of the [Azure Cosmos DB extension](functions-bindings-cosmosdb-v2.md). For use with extension version 4.x, you need to replace the string `collection` in property and attribute names with `container`. ::: zone pivot="programming-language-csharp" -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following code defines a `MyDocument` type: +++In the following example, the return type is an [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1), which is a modified list of documents from trigger binding parameter: +++# [In-process model](#tab/in-process) This section contains the following examples: namespace CosmosDBSamplesV2 } ``` --# [Isolated process](#tab/isolated-process) --The following code defines a `MyDocument` type: ---In the following example, the return type is an [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1), which is a modified list of documents from trigger binding parameter: -- ::: zone-end def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon ::: zone pivot="programming-language-csharp" ## Attributes -Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-output). +Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#azure-cosmos-db-v2-output). # [Extension 4.x+](#tab/extensionv4/in-process) Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces [!INCLUDE [functions-cosmosdb-output-attributes-v3](../../includes/functions-cosmosdb-output-attributes-v3.md)] -# [Extension 4.x+](#tab/functionsv4/isolated-process) +# [Extension 4.x+](#tab/extensionv4/isolated-process) [!INCLUDE [functions-cosmosdb-output-attributes-v4](../../includes/functions-cosmosdb-output-attributes-v4.md)] |
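The `MyDocument` type and the `IReadOnlyList<T>` return example in this row are both code includes. Here's a minimal sketch of a document type that would satisfy the example; the property names are assumptions drawn from the isolated-worker Cosmos DB samples:

```csharp
namespace CosmosDBSamplesV2
{
    // Simple document shape for the isolated-model output example.
    public class MyDocument
    {
        public string Id { get; set; }
        public string Text { get; set; }
        public int Number { get; set; }
        public bool Boolean { get; set; }
    }
}
```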
azure-functions | Functions Bindings Cosmosdb V2 Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md | This article supports both programming models. The usage of the trigger depends on the extension package version and the C# modality used in your function app, which can be one of the following: -# [In-process](#tab/in-process) --An in-process class library is a compiled C# function runs in the same process as the Functions runtime. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) An isolated worker process class library compiled C# function runs in a process isolated from the runtime. +# [In-process model](#tab/in-process) ++An in-process class library is a compiled C# function runs in the same process as the Functions runtime. + The following examples depend on the extension version for the given C# mode. Here's the Python code: ::: zone pivot="programming-language-csharp" ## Attributes -Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-trigger). +Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#azure-cosmos-db-v2-trigger). # [Extension 4.x+](#tab/extensionv4/in-process) |
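As a reference point for the `CosmosDBTriggerAttribute` both models use, here's a rough in-process sketch against extension 4.x; the database, container, lease, and connection names are placeholders, and `ToDoItem` is the type used throughout these articles:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CosmosTriggerSample
{
    [FunctionName("CosmosTrigger")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "ToDoItems",
            containerName: "Items",
            Connection = "CosmosDBConnection",
            LeaseContainerName = "leases",
            CreateLeaseContainerIfNotExists = true)] IReadOnlyList<ToDoItem> input,
        ILogger log)
    {
        // Each invocation delivers a batch of changed documents from the change feed.
        if (input != null && input.Count > 0)
        {
            log.LogInformation($"Documents modified: {input.Count}");
        }
    }
}
```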
azure-functions | Functions Bindings Cosmosdb V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md | This set of articles explains how to work with [Azure Cosmos DB](../cosmos-db/se The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. +# [In-process model](#tab/in-process) -# [Isolated process](#tab/isolated-process) +Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). -Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). +In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. You can install this version of the extension in your function app by registerin The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: -# [In-process](#tab/in-process) --An in-process class library is a compiled C# function runs in the same process as the Functions runtime. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) An isolated worker process class library compiled C# function runs in a process isolated from the runtime. +# [In-process model](#tab/in-process) ++An in-process class library is a compiled C# function runs in the same process as the Functions runtime. + Choose a version to see binding type details for the mode and version. |
azure-functions | Functions Bindings Cosmosdb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb.md | The Azure Cosmos DB Trigger uses the [Azure Cosmos DB Change Feed](../cosmos-db/ # [C#](#tab/csharp) -The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are inserts or updates in the specified database and collection. +The following example shows an [in-process C# function](functions-dotnet-class-library.md) that is invoked when there are inserts or updates in the specified database and collection. ```cs using Microsoft.Azure.Documents; namespace CosmosDBSamplesV1 } ``` -# [C# Script](#tab/csharp-script) --The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified. --Here's the binding data in the *function.json* file: --```json -{ - "type": "cosmosDBTrigger", - "name": "documents", - "direction": "in", - "leaseCollectionName": "leases", - "connectionStringSetting": "<connection-app-setting>", - "databaseName": "Tasks", - "collectionName": "Items", - "createLeaseCollectionIfNotExists": true -} -``` --Here's the C# script code: --```cs - #r "Microsoft.Azure.Documents.Client" - - using System; - using Microsoft.Azure.Documents; - using System.Collections.Generic; - -- public static void Run(IReadOnlyList<Document> documents, TraceWriter log) - { - log.Info("Documents modified " + documents.Count); - log.Info("First document Id " + documents[0].Id); - } -``` - # [JavaScript](#tab/javascript) The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified. Here's the JavaScript code: # [C#](#tab/csharp) -In [C# class libraries](functions-dotnet-class-library.md), use the [CosmosDBTrigger](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) attribute. +For [in-process C# class libraries](functions-dotnet-class-library.md), use the [CosmosDBTrigger](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) attribute. The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [Trigger - configuration](#triggerconfiguration). Here's a `CosmosDBTrigger` attribute example in a method signature: The attribute's constructor takes the database name and collection name. For inf For a complete example, see [Trigger - C# example](#trigger). -# [C# Script](#tab/csharp-script) --Attributes are not supported by C# Script. - # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript. 
namespace CosmosDBSamplesV1 } ``` -# [C# Script](#tab/csharp-script) --This section contains the following examples: --* [Queue trigger, look up ID from string](#queue-trigger-look-up-id-from-string-c-script) -* [Queue trigger, get multiple docs, using SqlQuery](#queue-trigger-get-multiple-docs-using-sqlquery-c-script) -* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c-script) -* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-c-script) -* [HTTP trigger, get multiple docs, using SqlQuery](#http-trigger-get-multiple-docs-using-sqlquery-c-script) -* [HTTP trigger, get multiple docs, using DocumentClient](#http-trigger-get-multiple-docs-using-documentclient-c-script) --The HTTP trigger examples refer to a simple `ToDoItem` type: --```cs -namespace CosmosDBSamplesV1 -{ - public class ToDoItem - { - public string Id { get; set; } - public string Description { get; set; } - } -} -``` --<a id="queue-trigger-look-up-id-from-string-c-script"></a> --### Queue trigger, look up ID from string --The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value. --Here's the binding data in the *function.json* file: --```json -{ - "name": "inputDocument", - "type": "documentDB", - "databaseName": "MyDatabase", - "collectionName": "MyCollection", - "id" : "{queueTrigger}", - "partitionKey": "{partition key value}", - "connection": "MyAccount_COSMOSDB", - "direction": "in" -} -``` --The [configuration](#inputconfiguration) section explains these properties. --Here's the C# script code: --```cs - using System; -- // Change input document contents using Azure Cosmos DB input binding - public static void Run(string myQueueItem, dynamic inputDocument) - { - inputDocument.text = "This has changed."; - } -``` --<a id="queue-trigger-get-multiple-docs-using-sqlquery-c-script"></a> --### Queue trigger, get multiple docs, using SqlQuery --The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters. --The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department. --Here's the binding data in the *function.json* file: --```json -{ - "name": "documents", - "type": "documentdb", - "direction": "in", - "databaseName": "MyDb", - "collectionName": "MyCollection", - "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}", - "connection": "CosmosDBConnection" -} -``` --The [configuration](#inputconfiguration) section explains these properties. --Here's the C# script code: --```csharp - public static void Run(QueuePayload myQueueItem, IEnumerable<dynamic> documents) - { - foreach (var doc in documents) - { - // operate on each document - } - } -- public class QueuePayload - { - public string departmentId { get; set; } - } -``` --<a id="http-trigger-look-up-id-from-query-string-c-script"></a> --### HTTP trigger, look up ID from query string --The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. 
The function is triggered by an HTTP request that uses a query string to specify the ID to look up. That ID is used to retrieve a `ToDoItem` document from the specified database and collection. --Here's the *function.json* file: --```json -{ - "bindings": [ - { - "authLevel": "anonymous", - "name": "req", - "type": "httpTrigger", - "direction": "in", - "methods": [ - "get", - "post" - ] - }, - { - "name": "$return", - "type": "http", - "direction": "out" - }, - { - "type": "documentDB", - "name": "toDoItem", - "databaseName": "ToDoItems", - "collectionName": "Items", - "connection": "CosmosDBConnection", - "direction": "in", - "Id": "{Query.id}" - } - ], - "disabled": true -} -``` --Here's the C# script code: --```cs -using System.Net; --public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, TraceWriter log) -{ - log.Info("C# HTTP trigger function processed a request."); -- if (toDoItem == null) - { - log.Info($"ToDo item not found"); - } - else - { - log.Info($"Found ToDo item, Description={toDoItem.Description}"); - } - return req.CreateResponse(HttpStatusCode.OK); -} -``` --<a id="http-trigger-look-up-id-from-route-data-c-script"></a> --### HTTP trigger, look up ID from route data --The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID to look up. That ID is used to retrieve a `ToDoItem` document from the specified database and collection. --Here's the *function.json* file: --```json -{ - "bindings": [ - { - "authLevel": "anonymous", - "name": "req", - "type": "httpTrigger", - "direction": "in", - "methods": [ - "get", - "post" - ], - "route":"todoitems/{id}" - }, - { - "name": "$return", - "type": "http", - "direction": "out" - }, - { - "type": "documentDB", - "name": "toDoItem", - "databaseName": "ToDoItems", - "collectionName": "Items", - "connection": "CosmosDBConnection", - "direction": "in", - "Id": "{id}" - } - ], - "disabled": false -} -``` --Here's the C# script code: --```cs -using System.Net; --public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, TraceWriter log) -{ - log.Info("C# HTTP trigger function processed a request."); -- if (toDoItem == null) - { - log.Info($"ToDo item not found"); - } - else - { - log.Info($"Found ToDo item, Description={toDoItem.Description}"); - } - return req.CreateResponse(HttpStatusCode.OK); -} -``` --<a id="http-trigger-get-multiple-docs-using-sqlquery-c-script"></a> --### HTTP trigger, get multiple docs, using SqlQuery --The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The query is specified in the `SqlQuery` attribute property. 
--Here's the *function.json* file: --```json -{ - "bindings": [ - { - "authLevel": "anonymous", - "name": "req", - "type": "httpTrigger", - "direction": "in", - "methods": [ - "get", - "post" - ] - }, - { - "name": "$return", - "type": "http", - "direction": "out" - }, - { - "type": "documentDB", - "name": "toDoItems", - "databaseName": "ToDoItems", - "collectionName": "Items", - "connection": "CosmosDBConnection", - "direction": "in", - "sqlQuery": "SELECT top 2 * FROM c order by c._ts desc" - } - ], - "disabled": false -} -``` --Here's the C# script code: --```cs -using System.Net; --public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<ToDoItem> toDoItems, TraceWriter log) -{ - log.Info("C# HTTP trigger function processed a request."); -- foreach (ToDoItem toDoItem in toDoItems) - { - log.Info(toDoItem.Description); - } - return req.CreateResponse(HttpStatusCode.OK); -} -``` --<a id="http-trigger-get-multiple-docs-using-documentclient-c-script"></a> --### HTTP trigger, get multiple docs, using DocumentClient --The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations. --Here's the *function.json* file: --```json -{ - "bindings": [ - { - "authLevel": "anonymous", - "name": "req", - "type": "httpTrigger", - "direction": "in", - "methods": [ - "get", - "post" - ] - }, - { - "name": "$return", - "type": "http", - "direction": "out" - }, - { - "type": "documentDB", - "name": "client", - "databaseName": "ToDoItems", - "collectionName": "Items", - "connection": "CosmosDBConnection", - "direction": "inout" - } - ], - "disabled": false -} -``` --Here's the C# script code: --```cs -#r "Microsoft.Azure.Documents.Client" --using System.Net; -using Microsoft.Azure.Documents.Client; -using Microsoft.Azure.Documents.Linq; --public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, DocumentClient client, TraceWriter log) -{ - log.Info("C# HTTP trigger function processed a request."); -- Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items"); - string searchterm = req.GetQueryNameValuePairs() - .FirstOrDefault(q => string.Compare(q.Key, "searchterm", true) == 0) - .Value; -- if (searchterm == null) - { - return req.CreateResponse(HttpStatusCode.NotFound); - } -- log.Info($"Searching for word: {searchterm} using Uri: {collectionUri.ToString()}"); - IDocumentQuery<ToDoItem> query = client.CreateDocumentQuery<ToDoItem>(collectionUri) - .Where(p => p.Description.Contains(searchterm)) - .AsDocumentQuery(); -- while (query.HasMoreResults) - { - foreach (ToDoItem result in await query.ExecuteNextAsync()) - { - log.Info(result.Description); - } - } - return req.CreateResponse(HttpStatusCode.OK); -} -``` - # [JavaScript](#tab/javascript) This section contains the following examples: Here's the JavaScript code: # [C#](#tab/csharp) -In [C# class libraries](functions-dotnet-class-library.md), use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute. +In [in-process C# class libraries](functions-dotnet-class-library.md), use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute. 
The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [the following configuration section](#inputconfiguration). -# [C# Script](#tab/csharp-script) --Attributes are not supported by C# Script. - # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript. The following table explains the binding configuration properties that you set i When the function exits successfully, any changes made to the input document via named input parameters are automatically persisted. -# [C# Script](#tab/csharp-script) --When the function exits successfully, any changes made to the input document via named input parameters are automatically persisted. - # [JavaScript](#tab/javascript) Updates are not made automatically upon function exit. Instead, use `context.bindings.<documentName>In` and `context.bindings.<documentName>Out` to make updates. See the [input example](#input). namespace CosmosDBSamplesV1 } ``` -# [C# Script](#tab/csharp-script) --This section contains the following examples: --* Queue trigger, write one doc -* Queue trigger, write docs using `IAsyncCollector` --### Queue trigger, write one doc --The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format: --```json -{ - "name": "John Henry", - "employeeId": "123456", - "address": "A town nearby" -} -``` --The function creates Azure Cosmos DB documents in the following format for each record: --```json -{ - "id": "John Henry-123456", - "name": "John Henry", - "employeeId": "123456", - "address": "A town nearby" -} -``` --Here's the binding data in the *function.json* file: --```json -{ - "name": "employeeDocument", - "type": "documentDB", - "databaseName": "MyDatabase", - "collectionName": "MyCollection", - "createIfNotExists": true, - "connection": "MyAccount_COSMOSDB", - "direction": "out" -} -``` --The [configuration](#outputconfiguration) section explains these properties. --Here's the C# script code: --```cs - #r "Newtonsoft.Json" -- using Microsoft.Azure.WebJobs.Host; - using Newtonsoft.Json.Linq; -- public static void Run(string myQueueItem, out object employeeDocument, TraceWriter log) - { - log.Info($"C# Queue trigger function processed: {myQueueItem}"); -- dynamic employee = JObject.Parse(myQueueItem); -- employeeDocument = new { - id = employee.name + "-" + employee.employeeId, - name = employee.name, - employeeId = employee.employeeId, - address = employee.address - }; - } -``` --### Queue trigger, write docs using IAsyncCollector --To create multiple documents, you can bind to `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the supported types. 
--This example refers to a simple `ToDoItem` type: --```cs -namespace CosmosDBSamplesV1 -{ - public class ToDoItem - { - public string Id { get; set; } - public string Description { get; set; } - } -} -``` --Here's the function.json file: --```json -{ - "bindings": [ - { - "name": "toDoItemsIn", - "type": "queueTrigger", - "direction": "in", - "queueName": "todoqueueforwritemulti", - "connection": "AzureWebJobsStorage" - }, - { - "type": "documentDB", - "name": "toDoItemsOut", - "databaseName": "ToDoItems", - "collectionName": "Items", - "connection": "CosmosDBConnection", - "direction": "out" - } - ], - "disabled": false -} -``` --Here's the C# script code: --```cs -using System; --public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> toDoItemsOut, TraceWriter log) -{ - log.Info($"C# Queue trigger function processed {toDoItemsIn?.Length} items"); -- foreach (ToDoItem toDoItem in toDoItemsIn) - { - log.Info($"Description={toDoItem.Description}"); - await toDoItemsOut.AddAsync(toDoItem); - } -} -``` - # [JavaScript](#tab/javascript) The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format: Here's the JavaScript code: # [C#](#tab/csharp) -In [C# class libraries](functions-dotnet-class-library.md), use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute. +In [in-process C# class libraries](functions-dotnet-class-library.md), use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute. The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [Output - configuration](#outputconfiguration). Here's a `DocumentDB` attribute example in a method signature: The attribute's constructor takes the database name and collection name. For inf For a complete example, see [Output](#output). -# [C# Script](#tab/csharp-script) --Attributes are not supported by C# Script. - # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript. |
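The v1 `DocumentDB` attribute usage this row refers to isn't expanded in the diff. As a sketch, an output binding in a v1 in-process method signature looks roughly like this, with placeholder database, collection, and connection-setting names:

```csharp
using System;
using Microsoft.Azure.WebJobs;

public static class QueueToDocDbSample
{
    [FunctionName("QueueToDocDB")]
    public static void Run(
        [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
        [DocumentDB("MyDatabase", "MyCollection", ConnectionStringSetting = "MyAccount_COSMOSDB")] out dynamic document)
    {
        // Each queue message becomes a new document in MyCollection.
        document = new { Text = myQueueItem, id = Guid.NewGuid().ToString() };
    }
}
```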
azure-functions | Functions Bindings Error Pages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md | This behavior means that the maximum retry count is a best effort. In some rare ::: zone pivot="programming-language-csharp" -# [In-process](#tab/in-process/fixed-delay) --Retries require NuGet package [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) >= 3.0.23 --```csharp -[FunctionName("EventHubTrigger")] -[FixedDelayRetry(5, "00:00:10")] -public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubConnection")] EventData[] events, ILogger log) -{ -// ... -} -``` --|Property | Description | -||-| -|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| -|DelayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.| --# [Isolated process](#tab/isolated-process/fixed-delay) +# [Isolated worker model](#tab/isolated-process/fixed-delay) Function-level retries are supported with the following NuGet packages: Function-level retries are supported with the following NuGet packages: |MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| |DelayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.| --# [C# script](#tab/csharp-script/fixed-delay) --Here's the retry policy in the *function.json* file: --```json -{ - "disabled": false, - "bindings": [ - { - .... - } - ], - "retry": { - "strategy": "fixedDelay", - "maxRetryCount": 4, - "delayInterval": "00:00:10" - } -} -``` --|*function.json* property | Description | -||-| -|strategy|Use `fixedDelay`.| -|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| -|delayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.| --# [In-process](#tab/in-process/exponential-backoff) +# [In-process model](#tab/in-process/fixed-delay) Retries require NuGet package [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) >= 3.0.23 ```csharp [FunctionName("EventHubTrigger")]-[ExponentialBackoffRetry(5, "00:00:04", "00:15:00")] +[FixedDelayRetry(5, "00:00:10")] public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubConnection")] EventData[] events, ILogger log) { // ... public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon |Property | Description | ||-| |MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|-|MinimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.| -|MaximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.| +|DelayInterval|The delay that's used between retries. 
Specify it as a string with the format `HH:mm:ss`.| -# [Isolated process](#tab/isolated-process/exponential-backoff) +# [Isolated worker model](#tab/isolated-process/exponential-backoff) Function-level retries are supported with the following NuGet packages: Function-level retries are supported with the following NuGet packages: :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDBFunction.cs" id="docsnippet_exponential_backoff_retry_example" ::: -# [C# script](#tab/csharp-script/exponential-backoff) +# [In-process model](#tab/in-process/exponential-backoff) -Here's the retry policy in the *function.json* file: +Retries require NuGet package [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) >= 3.0.23 -```json +```csharp +[FunctionName("EventHubTrigger")] +[ExponentialBackoffRetry(5, "00:00:04", "00:15:00")] +public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubConnection")] EventData[] events, ILogger log) {- "disabled": false, - "bindings": [ - { - .... - } - ], - "retry": { - "strategy": "exponentialBackoff", - "maxRetryCount": 5, - "minimumInterval": "00:00:10", - "maximumInterval": "00:15:00" - } +// ... } ``` -|*function.json* property | Description | +|Property | Description | ||-|-|strategy|Use `exponentialBackoff`.| -|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| -|minimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.| -|maximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.| +|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| +|MinimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.| +|MaximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.| ::: zone-end |
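The isolated worker model retry snippets in this row are code includes. As a sketch, the fixed-delay policy is declared the same way as in-process, just with the worker's `Function` attribute; the hub and connection names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;

public static class RetrySample
{
    // Retry up to 5 times, waiting 10 seconds between attempts.
    [Function("EventHubTrigger")]
    [FixedDelayRetry(5, "00:00:10")]
    public static void Run(
        [EventHubTrigger("myHub", Connection = "EventHubConnection")] string[] events,
        FunctionContext context)
    {
        // Processing that may transiently fail goes here.
    }
}
```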
azure-functions | Functions Bindings Event Grid Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md | The type of the output parameter used with an Event Grid output binding depends * [In-process class library](functions-dotnet-class-library.md): compiled C# function that runs in the same process as the Functions runtime. * [Isolated worker process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a worker process isolated from the runtime. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following example shows how the custom type is used in both the trigger and an Event Grid output binding: +++# [In-process model](#tab/in-process) The following example shows a C# function that publishes a `CloudEvent` using version 3.x of the extension: When you use the `Connection` property, the `topicEndpointUri` must be specified ``` When deployed, you must add this same information to application settings for the function app. For more information, see [Identity-based authentication](#identity-based-authentication). -# [Isolated process](#tab/isolated-process) --The following example shows how the custom type is used in both the trigger and an Event Grid output binding: -- ::: zone-end Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces The attribute's constructor takes the name of an application setting that contains the name of the custom topic, and the name of an application setting that contains the topic key. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -The following table explains the parameters for the `EventGridAttribute`. +The following table explains the parameters for the `EventGridOutputAttribute`. |Parameter | Description|-|||-| +||| |**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. | |**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |-|**Connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). | +|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). | -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) -The following table explains the parameters for the `EventGridOutputAttribute`. +The following table explains the parameters for the `EventGridAttribute`. |Parameter | Description|-|||-| +||| |**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. | |**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |-|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). | +|**Connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. 
For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). | |
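The isolated-model output example this row references is an include. A rough sketch of the output side, applying `EventGridOutput` to the return value with the setting names from the parameter table above; `MyEventType` is the custom event type these articles define (sketched under the trigger row below), and the timer schedule is a placeholder:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;

public static class EventGridNotifier
{
    // The returned object is serialized and published to the custom topic
    // resolved from the MyTopicEndpointUri / MyTopicKeySetting app settings.
    [Function("EventGridNotifier")]
    [EventGridOutput(TopicEndpointUri = "MyTopicEndpointUri", TopicKeySetting = "MyTopicKeySetting")]
    public static MyEventType Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        return new MyEventType
        {
            Id = Guid.NewGuid().ToString(),
            Subject = "sample/subject",
            EventType = "sample.event",
            EventTime = DateTime.UtcNow
        };
    }
}
```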
azure-functions | Functions Bindings Event Grid Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md | The type of the input parameter used with an Event Grid trigger depends on these [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++When running your C# function in an isolated worker process, you need to define a custom type for event properties. The following example defines a `MyEventType` class. +++The following example shows how the custom type is used in both the trigger and an Event Grid output binding: +++# [In-process model](#tab/in-process) The following example shows a Functions version 4.x function that uses a `CloudEvent` binding parameter: namespace Company.Function } } ```-# [Isolated process](#tab/isolated-process) --When running your C# function in an isolated worker process, you need to define a custom type for event properties. The following example defines a `MyEventType` class. ---The following example shows how the custom type is used in both the trigger and an Event Grid output binding: -- ::: zone-end def main(event: func.EventGridEvent): Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-grid-trigger). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++Here's an `EventGridTrigger` attribute in a method signature: +++# [In-process model](#tab/in-process) Here's an `EventGridTrigger` attribute in a method signature: Here's an `EventGridTrigger` attribute in a method signature: public static void EventGridTest([EventGridTrigger] JObject eventGridEvent, ILogger log) { ```-# [Isolated process](#tab/isolated-process) --Here's an `EventGridTrigger` attribute in a method signature: -- ::: zone-end |
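The `MyEventType` definition mentioned in this row lives in an include file. A minimal sketch consistent with the Event Grid event schema properties the examples rely on; the exact property set is an assumption:

```csharp
using System;
using System.Collections.Generic;

// Custom event shape for the isolated-model Event Grid trigger and output examples.
public class MyEventType
{
    public string Id { get; set; }
    public string Topic { get; set; }
    public string Subject { get; set; }
    public string EventType { get; set; }
    public DateTime EventTime { get; set; }
    public IDictionary<string, object> Data { get; set; }
}
```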
azure-functions | Functions Bindings Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md | This reference shows how to connect to Azure Event Grid using Azure Functions tr The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. +# [In-process model](#tab/in-process) -# [Isolated process](#tab/isolated-process) +Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). -Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). +In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. The Event Grid output binding is only available for Functions 2.x and higher. Ev The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: -# [In-process](#tab/in-process) --An in-process class library is a compiled C# function runs in the same process as the Functions runtime. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) An isolated worker process class library compiled C# function runs in a process isolated from the runtime. +# [In-process model](#tab/in-process) ++An in-process class library is a compiled C# function runs in the same process as the Functions runtime. + Choose a version to see binding type details for the mode and version. |
azure-functions | Functions Bindings Event Hubs Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md | This article supports both programming models. ::: zone pivot="programming-language-csharp" -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following example shows a [C# function](dotnet-isolated-process-guide.md) that writes a message string to an event hub, using the method return value as the output: +++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that writes a message to an event hub, using the method return value as the output: public static async Task Run( } } ```-# [Isolated process](#tab/isolated-process) --The following example shows a [C# function](dotnet-isolated-process-guide.md) that writes a message string to an event hub, using the method return value as the output: -- ::: zone-end In the [Java functions runtime library](/java/api/overview/azure/functions/runti Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to configure the binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-hubs-output). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -Use the [EventHubAttribute] to define an output binding to an event hub, which supports the following properties. +Use the [EventHubOutputAttribute] to define an output binding to an event hub, which supports the following properties. | Parameters | Description| ||-| |**EventHubName** | The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. | |**Connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).| -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) -Use the [EventHubOutputAttribute] to define an output binding to an event hub, which supports the following properties. +Use the [EventHubAttribute] to define an output binding to an event hub, which supports the following properties. | Parameters | Description| ||-| |
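The isolated-worker example above is referenced by include, so it doesn't render in this digest. Here's a minimal sketch of the return-value pattern it describes, assuming a hub named `myeventhub` and a connection app setting named `EventHubConnectionAppSetting` (both placeholder names):

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class EventHubOutputFunction
{
    // The string returned by the method is sent to the event hub as one event.
    [Function("HttpToEventHub")]
    [EventHubOutput("myeventhub", Connection = "EventHubConnectionAppSetting")]
    public string Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        return $"Message generated at {DateTime.UtcNow:O}";
    }
}
```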
azure-functions | Functions Bindings Http Webhook Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-output.md | The default return value for an HTTP-triggered function is: Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#http-output). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) A return value attribute isn't required. To learn more, see [Usage](#usage). -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) A return value attribute isn't required. To learn more, see [Usage](#usage). To send an HTTP response, use the language-standard response patterns. ::: zone pivot="programming-language-csharp" The response type depends on the C# mode: -# [In-process](#tab/in-process) --The HTTP triggered function returns a type of [IActionResult] or `Task<IActionResult>`. --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) The HTTP triggered function returns an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object or a `Task<HttpResponseData>`. If the app uses [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration), it could also use [IActionResult], `Task<IActionResult>`, [HttpResponse], or `Task<HttpResponse>`. [IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult [HttpResponse]: /dotnet/api/microsoft.aspnetcore.http.httpresponse +# [In-process model](#tab/in-process) ++The HTTP triggered function returns a type of [IActionResult] or `Task<IActionResult>`. + ::: zone-end |
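For readers comparing the two tabs, here's a minimal sketch of the `HttpResponseData` return pattern that the isolated tab names; the function name and route are illustrative:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class HttpOutputFunction
{
    [Function("HttpOutputExample")]
    public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        // Build the response explicitly instead of relying on an output binding attribute.
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
        response.WriteString("Hello from the isolated worker model.");
        return response;
    }
}
```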
azure-functions | Functions Bindings Http Webhook Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md | This article supports both programming models. The code in this article defaults to .NET Core syntax, used in Functions version 2.x and higher. For information on the 1.x syntax, see the [1.x functions templates](https://github.com/Azure/azure-functions-templates/tree/v1.x/Functions.Templates/Templates). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following example shows an HTTP trigger that returns a "hello world" response as an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object: +++The following example shows an HTTP trigger that returns a "hello, world" response as an [IActionResult], using [ASP.NET Core integration in .NET Isolated]: ++```csharp +[Function("HttpFunction")] +public IActionResult Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req) +{ + return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!"); +} +``` ++[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult ++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that looks for a `name` parameter either in the query string or the body of the HTTP request. Notice that the return value is used for the output binding, but a return value attribute isn't required. public static async Task<IActionResult> Run( } ``` -# [Isolated process](#tab/isolated-process) --The following example shows an HTTP trigger that returns a "hello world" response as an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object: ---The following example shows an HTTP trigger that returns a "hello, world" response as an [IActionResult], using [ASP.NET Core integration in .NET Isolated]: --```csharp -[Function("HttpFunction")] -public IActionResult Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req) -{ - return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!"); -} -``` --[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult - ::: zone-end def main(req: func.HttpRequest) -> func.HttpResponse: Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#http-trigger). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -In [in-process functions](functions-dotnet-class-library.md), the `HttpTriggerAttribute` supports the following parameters: +In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `HttpTriggerAttribute` supports the following parameters: | Parameters | Description| ||-| | **AuthLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). | | **Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). | | **Route** | Defines the route template, controlling to which request URLs your function responds. 
The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |-| **WebHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](#webhook-type).| -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) -In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `HttpTriggerAttribute` supports the following parameters: +In [in-process functions](functions-dotnet-class-library.md), the `HttpTriggerAttribute` supports the following parameters: | Parameters | Description| ||-| | **AuthLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). | | **Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). | | **Route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |+| **WebHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](#webhook-type).| The [HttpTrigger](/java/api/com.microsoft.azure.functions.annotation.httptrigger ### Payload -# [In-process](#tab/in-process) --The trigger input type is declared as either `HttpRequest` or a custom type. If you choose `HttpRequest`, you get full access to the request object. For a custom type, the runtime tries to parse the JSON request body to set the object properties. --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) The trigger input type is declared as one of the following types: namespace AspNetIntegration } ``` +# [In-process model](#tab/in-process) ++The trigger input type is declared as either `HttpRequest` or a custom type. If you choose `HttpRequest`, you get full access to the request object. For a custom type, the runtime tries to parse the JSON request body to set the object properties. + ::: zone-end You can customize this route using the optional `route` property on the HTTP tri ::: zone pivot="programming-language-csharp" -# [In-process](#tab/in-process) --The following C# function code accepts two parameters `category` and `id` in the route and writes a response using both parameters. --```csharp -[FunctionName("Function1")] -public static IActionResult Run( -[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "products/{category:alpha}/{id:int?}")] HttpRequest req, -string category, int? id, ILogger log) -{ - log.LogInformation("C# HTTP trigger function processed a request."); -- var message = String.Format($"Category: {category}, ID: {id}"); - return (ActionResult)new OkObjectResult(message); -} -``` -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) The following function code accepts two parameters `category` and `id` in the route and writes a response using both parameters. 
FunctionContext executionContext) } ``` +# [In-process model](#tab/in-process) ++The following C# function code accepts two parameters `category` and `id` in the route and writes a response using both parameters. ++```csharp +[FunctionName("Function1")] +public static IActionResult Run( +[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "products/{category:alpha}/{id:int?}")] HttpRequest req, +string category, int? id, ILogger log) +{ + log.LogInformation("C# HTTP trigger function processed a request."); ++ var message = String.Format($"Category: {category}, ID: {id}"); + return (ActionResult)new OkObjectResult(message); +} +``` ::: zone-end You can also read this information from binding data. This capability is only av ::: zone pivot="programming-language-csharp" Information regarding authenticated clients is available as a [ClaimsPrincipal], which is available as part of the request context as shown in the following example: -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code). ++# [In-process model](#tab/in-process) ```csharp using System.Net; public static void Run(JObject input, ClaimsPrincipal principal, ILogger log) return; } ```-# [Isolated process](#tab/isolated-process) --The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code). - ::: zone-end |
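The isolated tab above states that the authenticated user is available via HTTP headers but gives no code. As a hedged sketch, this reads the `x-ms-client-principal-name` header that App Service authentication forwards; the function name is illustrative:

```csharp
using System.Linq;
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class WhoAmIFunction
{
    [Function("WhoAmI")]
    public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        // App Service authentication forwards the caller's identity in request headers.
        var name = req.Headers.TryGetValues("x-ms-client-principal-name", out var values)
            ? values.First()
            : "anonymous";

        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString($"Authenticated as: {name}");
        return response;
    }
}
```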
azure-functions | Functions Bindings Http Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md | Azure Functions may be invoked via HTTP requests to build serverless APIs and re The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) -Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). +Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). |
azure-functions | Functions Bindings Kafka Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-output.md | The output binding allows an Azure Functions app to write messages to a Kafka to The usage of the binding depends on the C# modality used in your function app, which can be one of the following: -# [In-process](#tab/in-process) --An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function runs in the same process as the Functions runtime. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) A compiled C# function in an [isolated worker process class library](dotnet-isolated-process-guide.md) runs in a process isolated from the runtime. +# [In-process model](#tab/in-process) ++An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime. + The attributes you use depend on the specific event provider. |
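Since the reordered tabs in this row carry no code, here's a sketch of an isolated-worker Kafka output binding under stated assumptions: `BrokerList` and `topic` are placeholder values, and the method-level attribute binding the return value is assumed from the extension's samples, so verify it before relying on it:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class KafkaOutputFunction
{
    // The returned string is produced to the configured Kafka topic.
    [Function("KafkaOutput")]
    [KafkaOutput("BrokerList", "topic")]
    public string Run([HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        return $"Kafka message generated at {DateTime.UtcNow:O}";
    }
}
```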
azure-functions | Functions Bindings Kafka Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md | You can use the Apache Kafka trigger in Azure Functions to run your function cod The usage of the trigger depends on the C# modality used in your function app, which can be one of the following modes: -# [In-process](#tab/in-process) --An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function runs in the same process as the Functions runtime. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) A compiled C# function in an [isolated worker process class library](dotnet-isolated-process-guide.md) runs in a process isolated from the runtime. +# [In-process model](#tab/in-process) ++An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime. + The attributes you use depend on the specific event provider. The following table explains the binding configuration properties that you set i ::: zone pivot="programming-language-csharp" -# [In-process](#tab/in-process) --Kafka events are passed to the function as `KafkaEventData<string>` objects or arrays. Strings and string arrays that are JSON payloads are also supported. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Kafka events are currently supported as strings and string arrays that are JSON payloads. +# [In-process model](#tab/in-process) ++Kafka events are passed to the function as `KafkaEventData<string>` objects or arrays. Strings and string arrays that are JSON payloads are also supported. + ::: zone-end |
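To make the "strings and string arrays that are JSON payloads" statement concrete, here's a sketch of an isolated-worker Kafka trigger; broker, topic, and consumer group values are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class KafkaTriggerFunction
{
    [Function("KafkaTrigger")]
    public void Run(
        [KafkaTrigger("BrokerList", "topic", ConsumerGroup = "$Default")] string eventData,
        FunctionContext context)
    {
        // In the isolated model the event arrives as a JSON string payload.
        context.GetLogger("KafkaTrigger").LogInformation("Kafka event: {eventData}", eventData);
    }
}
```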
azure-functions | Functions Bindings Kafka | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md | The Kafka extension for Azure Functions lets you write values out to [Apache Kaf The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) --Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). --Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kafka). --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Kafka). -<!-- -# [C# script](#tab/csharp-script) -Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. +# [In-process model](#tab/in-process) ++Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). ++Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kafka). -The Kafka extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets version 2.x or later, you should already have this bundle installed. To learn more, see [extension bundle]. > |
azure-functions | Functions Bindings Rabbitmq Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md | For information on setup and configuration details, see the [overview](functions [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that sends a RabbitMQ message when triggered by a TimerTrigger every 5 minutes using the method return value as the output: namespace Company.Function } ``` -# [Isolated process](#tab/isolated-process) ----# [C# Script](#tab/csharp-script) --The following example shows a RabbitMQ output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads in the message from an HTTP trigger and outputs it to the RabbitMQ queue. --Here's the binding data in the *function.json* file: --```json -{ - "bindings": [ - { - "type": "httpTrigger", - "direction": "in", - "authLevel": "function", - "name": "input", - "methods": [ - "get", - "post" - ] - }, - { - "type": "rabbitMQ", - "name": "outputMessage", - "queueName": "outputQueue", - "connectionStringSetting": "rabbitMQConnectionAppSetting", - "direction": "out" - } - ] -} -``` --Here's the C# script code: --```C# -using System; -using Microsoft.Extensions.Logging; --public static void Run(string input, out string outputMessage, ILogger log) -{ - log.LogInformation(input); - outputMessage = input; -} -``` ::: zone-end def main(req: func.HttpRequest, outputMessage: func.Out[str]) -> func.HttpRespon ## Attributes -Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file. +Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a [function.json configuration file](#configuration). The attribute's constructor takes the following parameters: The attribute's constructor takes the following parameters: |**ConnectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead through an app setting. For example, when you have set `ConnectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* and in your function app you need a setting like `"RabbitMQConnection" : "< ActualConnectionstring >"`.| |**Port**|Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`. | -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute. 
++Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library: ++++# [In-process model](#tab/in-process) In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQAttribute](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/RabbitMQAttribute.cs). ILogger log) } ``` -# [Isolated process](#tab/isolated-process) --In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute. --Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library: ----# [C# script](#tab/csharp-script) --C# script uses a function.json file for configuration instead of attributes. --The following table explains the binding configuration properties for C# script that you set in the *function.json* file. --|function.json property | Description| -||-| -|**type** | Must be set to `RabbitMQ`.| -|**direction** | Must be set to `out`.| -|**name** | The name of the variable that represents the queue in function code. | -|**queueName**| See the **QueueName** attribute above.| -|**hostName**|See the **HostName** attribute above.| -|**userNameSetting**|See the **UserNameSetting** attribute above.| -|**passwordSetting**|See the **PasswordSetting** attribute above.| -|**connectionStringSetting**|See the **ConnectionStringSetting** attribute above.| -|**port**|See the **Port** attribute above.| - ::: zone-end See the [Example section](#example) for complete examples. ::: zone pivot="programming-language-csharp" The parameter type supported by the RabbitMQ trigger depends on the Functions runtime version, the extension package version, and the C# modality used. -# [In-process](#tab/in-process) --Use the following parameter types for the output binding: --* `byte[]` - If the parameter value is null when the function exits, Functions doesn't create a message. -* `string` - If the parameter value is null when the function exits, Functions doesn't create a message. -* `POCO` - The message is formatted as a C# object. --When working with C# functions: --* Async functions need a return value or `IAsyncCollector` instead of an `out` parameter. --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) The RabbitMQ bindings currently support only string and serializable object types when running in an isolated worker process. -# [C# script](#tab/csharp-script) +# [In-process model](#tab/in-process) Use the following parameter types for the output binding: * `byte[]` - If the parameter value is null when the function exits, Functions doesn't create a message. * `string` - If the parameter value is null when the function exits, Functions doesn't create a message.-* `POCO` - If the parameter value isn't formatted as a C# object, an error will be received. For a complete example, see C# Script [example](#example). +* `POCO` - The message is formatted as a C# object. -When working with C# Script functions: +When working with C# functions: * Async functions need a return value or `IAsyncCollector` instead of an `out` parameter. |
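The isolated-worker example in this row is an include, so it isn't visible here. A minimal sketch of the pattern, reusing the `outputQueue` and `rabbitMQConnectionAppSetting` names from the removed C# script sample; the input queue name is hypothetical:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class RabbitMQOutputFunction
{
    // The returned string is published to outputQueue.
    [Function("RabbitMQOutput")]
    [RabbitMQOutput(QueueName = "outputQueue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")]
    public string Run(
        [RabbitMQTrigger("inputQueue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] string inputMessage,
        FunctionContext context)
    {
        context.GetLogger("RabbitMQOutput").LogInformation("Forwarding: {message}", inputMessage);
        return inputMessage;
    }
}
```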
azure-functions | Functions Bindings Rabbitmq Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md | For information on setup and configuration details, see the [overview](functions [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) +++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that reads and logs the RabbitMQ message as a [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq-dotnet-client/api/RabbitMQ.Client.Events.BasicDeliverEventArgs.html): namespace Company.Function Like with JSON objects, an error will occur if the message isn't properly formatted as a C# object. If it is, it's then bound to the variable pocObj, which can be used for whatever it's needed for. -# [Isolated process](#tab/isolated-process) ---# [C# Script](#tab/csharp-script) --The following example shows a RabbitMQ trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads and logs the RabbitMQ message. --Here's the binding data in the *function.json* file: --```json -{ - "bindings": [ - { - "name": "myQueueItem", - "type": "rabbitMQTrigger", - "direction": "in", - "queueName": "queue", - "connectionStringSetting": "rabbitMQConnectionAppSetting" - } - ] -} -``` --Here's the C# script code: --```C# -using System; --public static void Run(string myQueueItem, ILogger log) -{ - log.LogInformation($"C# Script RabbitMQ trigger function processed: {myQueueItem}"); -} -``` ::: zone-end def main(myQueueItem) -> None: ## Attributes -Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file. +Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a [function.json configuration file](#configuration). The attribute's constructor takes the following parameters: The attribute's constructor takes the following parameters: |**ConnectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead of through an app setting. For example, when you have set `ConnectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* and in your function app you need a setting like `"RabbitMQConnection" : "< ActualConnectionstring >"`.| |**Port**|Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`. | -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute. 
++Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library: +++# [In-process model](#tab/in-process) In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute. public static void RabbitMQTest([RabbitMQTrigger("queue")] string message, ILogg } ``` -# [Isolated process](#tab/isolated-process) --In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute. --Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library: ---# [C# script](#tab/csharp-script) --C# script uses a function.json file for configuration instead of attributes. --The following table explains the binding configuration properties for C# script that you set in the *function.json* file. --|function.json property | Description| -||-| -|**type** | Must be set to `RabbitMQTrigger`.| -|**direction** | Must be set to "in".| -|**name** | The name of the variable that represents the queue in function code. | -|**queueName**| See the **QueueName** attribute above.| -|**hostName**|See the **HostName** attribute above.| -|**userNameSetting**|See the **UserNameSetting** attribute above.| -|**passwordSetting**|See the **PasswordSetting** attribute above.| -|**connectionStringSetting**|See the **ConnectionStringSetting** attribute above.| -|**port**|See the **Port** attribute above.| - ::: zone-end See the [Example section](#example) for complete examples. ::: zone pivot="programming-language-csharp" The parameter type supported by the RabbitMQ trigger depends on the C# modality used. -# [In-process](#tab/in-process) --The default message type is [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq-dotnet-client/api/RabbitMQ.Client.Events.BasicDeliverEventArgs.html), and the `Body` property of the RabbitMQ Event can be read as the types listed below: --* `An object serializable as JSON` - The message is delivered as a valid JSON string. -* `string` -* `byte[]` -* `POCO` - The message is formatted as a C# object. For complete code, see C# [example](#example). --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) The RabbitMQ bindings currently support only string and serializable object types when running in an isolated process. -# [C# script](#tab/csharp-script) +# [In-process model](#tab/in-process) The default message type is [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq-dotnet-client/api/RabbitMQ.Client.Events.BasicDeliverEventArgs.html), and the `Body` property of the RabbitMQ Event can be read as the types listed below: * `An object serializable as JSON` - The message is delivered as a valid JSON string. * `string` * `byte[]`-* `POCO` - The message is formatted as a C# object. For a complete example, see C# Script [example](#example). +* `POCO` - The message is formatted as a C# object. For complete code, see C# [example](#example). |
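Likewise for the trigger row above, here's a minimal isolated-worker sketch using the `queue` and `rabbitMQConnectionAppSetting` names that appear in this row's own samples:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class RabbitMQTriggerFunction
{
    [Function("RabbitMQTrigger")]
    public void Run(
        [RabbitMQTrigger("queue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] string message,
        FunctionContext context)
    {
        // The isolated model currently delivers the message as a string.
        context.GetLogger("RabbitMQTrigger").LogInformation("RabbitMQ message: {message}", message);
    }
}
```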
azure-functions | Functions Bindings Rabbitmq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq.md | Before working with the RabbitMQ extension, you must [set up your RabbitMQ endpo The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) --Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). --Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.RabbitMQ). --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Rabbitmq). -# [C# script](#tab/csharp-script) +# [In-process model](#tab/in-process) -Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. +Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). -You can install this version of the extension in your function app by registering the [extension bundle], version 2.x, or a later version. +Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.RabbitMQ). |
azure-functions | Functions Bindings Return Value | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-return-value.md | Set the `name` property in *function.json* to `$return`. If there are multiple o How return values are used depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++See [Output bindings in the .NET worker guide](./dotnet-isolated-process-guide.md#output-bindings) for details and examples. ++# [In-process model](#tab/in-process) In a C# class library, apply the output binding attribute to the method return value. In C# and C# script, alternative ways to send data to an output binding are `out` parameters and [collector objects](functions-reference-csharp.md#writing-multiple-output-values). public static Task<string> Run([QueueTrigger("inputqueue")]WorkItem input, ILogg } ``` -# [Isolated process](#tab/isolated-process) --See [Output bindings in the .NET worker guide](./dotnet-isolated-process-guide.md#output-bindings) for details and examples. - ::: zone-end |
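The isolated tab only links out to the worker guide, so for quick reference here's a sketch of how a return value is bound in that model: an output binding attribute sits on the method and binds whatever the function returns. The queue names and the `AzureWebJobsStorage` setting are illustrative:

```csharp
using Microsoft.Azure.Functions.Worker;

public class ReturnValueFunction
{
    // In the isolated model, the output attribute on the method binds the return value,
    // playing the role that "$return" plays in function.json.
    [Function("QueueForwarder")]
    [QueueOutput("output-queue", Connection = "AzureWebJobsStorage")]
    public string Run([QueueTrigger("input-queue", Connection = "AzureWebJobsStorage")] string item)
    {
        return item.ToUpperInvariant();
    }
}
```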
azure-functions | Functions Bindings Sendgrid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md | This article explains how to send email by using [SendGrid](https://sendgrid.com The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) --Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -# [C# script](#tab/csharp-script) +# [In-process model](#tab/in-process) -Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. +Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). Add the extension to your project by installing the [NuGet package](https://www. Functions 1.x doesn't support running in an isolated worker process. -# [Functions v2.x+](#tab/functionsv2/csharp-script) --This version of the extension should already be available to your function app with [extension bundle], version 2.x. --# [Functions 1.x](#tab/functionsv1/csharp-script) --You can add the extension to your project by explicitly installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SendGrid), version 2.x. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions). - ::: zone-end You can add the extension to your project by explicitly installing the [NuGet pa ::: zone pivot="programming-language-csharp" [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++We don't currently have an example for using the SendGrid binding in a function app running in an isolated worker process. ++# [In-process model](#tab/in-process) The following examples shows a [C# function](functions-dotnet-class-library.md) that uses a Service Bus queue trigger and a SendGrid output binding. public class OutgoingEmail You can omit setting the attribute's `ApiKey` property if you have your API key in an app setting named "AzureWebJobsSendGridApiKey". -# [Isolated process](#tab/isolated-process) --We don't currently have an example for using the SendGrid binding in a function app running in an isolated worker process. --# [C# Script](#tab/csharp-script) --The following example shows a SendGrid output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. 
--Here's the binding data in the *function.json* file: --```json -{ - "bindings": [ - { - "type": "queueTrigger", - "name": "mymsg", - "queueName": "myqueue", - "connection": "AzureWebJobsStorage", - "direction": "in" - }, - { - "type": "sendGrid", - "name": "$return", - "direction": "out", - "apiKey": "SendGridAPIKeyAsAppSetting", - "from": "{FromEmail}", - "to": "{ToEmail}" - } - ] -} -``` --The [configuration](#configuration) section explains these properties. --Here's the C# script code: --```csharp -#r "SendGrid" --using System; -using SendGrid.Helpers.Mail; -using Microsoft.Azure.WebJobs.Host; --public static SendGridMessage Run(Message mymsg, ILogger log) -{ - SendGridMessage message = new SendGridMessage() - { - Subject = $"{mymsg.Subject}" - }; - - message.AddContent("text/plain", $"{mymsg.Content}"); -- return message; -} -public class Message -{ - public string ToEmail { get; set; } - public string FromEmail { get; set; } - public string Subject { get; set; } - public string Content { get; set; } -} -``` ::: zone-end public class HttpTriggerSendGrid { Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -In [in-process](functions-dotnet-class-library.md) function apps, use the [SendGridAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.SendGrid/SendGridAttribute.cs), which supports the following parameters. +In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `SendGridOutputAttribute` supports the following parameters: | Attribute/annotation property | Description | |-|-| In [in-process](functions-dotnet-class-library.md) function apps, use the [SendG | **Subject** | (Optional) The subject of the email. | | **Text** | (Optional) The email content. | -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) -In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `SendGridOutputAttribute` supports the following parameters: +In [in-process](functions-dotnet-class-library.md) function apps, use the [SendGridAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.SendGrid/SendGridAttribute.cs), which supports the following parameters. | Attribute/annotation property | Description | |-|-| In [isolated worker process](dotnet-isolated-process-guide.md) function apps, th | **Subject** | (Optional) The subject of the email. | | **Text** | (Optional) The email content. | -# [C# Script](#tab/csharp-script) --The following table explains the trigger configuration properties that you set in the *function.json* file: --| *function.json* property | Description | -|--|| -| **type** | Must be set to `sendGrid`.| -| **direction** | Must be set to `out`.| -| **name** | The variable name used in function code for the request or request body. This value is `$return` when there is only one return value. | -| **apiKey** | The name of an app setting that contains your API key. If not set, the default app setting name is *AzureWebJobsSendGridApiKey*.| -| **to**| (Optional) The recipient's email address. | -| **from**| (Optional) The sender's email address. | -| **subject**| (Optional) The subject of the email. | -| **text**| (Optional) The email content. | - ::: zone-end |
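The isolated tab in this row says no sample is currently available. Until one lands upstream, here's a heavily hedged sketch built only from the `SendGridOutputAttribute` parameter table above; the addresses are placeholders, and the assumption that the string return value supplies the message text is ours, so verify it against the extension before relying on it:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class SendEmailFunction
{
    // Assumption: with To/From/Subject set on the attribute and Text left unset,
    // the returned string becomes the message text. ApiKey names an app setting.
    [Function("SendEmail")]
    [SendGridOutput(ApiKey = "AzureWebJobsSendGridApiKey",
        To = "recipient@contoso.com",
        From = "sender@contoso.com",
        Subject = "Order received")]
    public string Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        return "Thank you for your order.";
    }
}
```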
azure-functions | Functions Bindings Service Bus Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md | This article supports both programming models. [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue: ++++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that sends a Service Bus queue message: public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log) return input.Text; } ```-# [Isolated process](#tab/isolated-process) --The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue: --- ::: zone-end def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse: Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#service-bus-output). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++In [C# class libraries](dotnet-isolated-process-guide.md), use the [ServiceBusOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.ServiceBus/src/ServiceBusOutputAttribute.cs) to define the queue or topic written to by the output. ++The following table explains the properties you can set using the attribute: ++| Property |Description| +| | | +|**EntityType**|Sets the entity type as either `Queue` for sending messages to a queue or `Topic` when sending messages to a topic. | +|**QueueOrTopicName**|Name of the topic or queue to send messages to. Use `EntityType` to set the destination type.| +|**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).| ++# [In-process model](#tab/in-process) In [C# class libraries](functions-dotnet-class-library.md), use the [ServiceBusAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusAttribute.cs). For a complete example, see [Example](#example). You can use the `ServiceBusAccount` attribute to specify the Service Bus account to use at class, method, or parameter level. For more information, see [Attributes](functions-bindings-service-bus-trigger.md#attributes) in the trigger reference. -# [Isolated process](#tab/isolated-process) --In [C# class libraries](dotnet-isolated-process-guide.md), use the [ServiceBusOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.ServiceBus/src/ServiceBusOutputAttribute.cs) to define the queue or topic written to by the output. 
--The following table explains the properties you can set using the attribute: --| Property |Description| -| | | -|**EntityType**|Sets the entity type as either `Queue` for sending messages to a queue or `Topic` when sending messages to a topic. | -|**QueueOrTopicName**|Name of the topic or queue to send messages to. Use `EntityType` to set the destination type.| -|**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).| - ::: zone-end |
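Here's a minimal sketch of the `ServiceBusOutput` return-value pattern described by the property table above; the queue name `outputQueue` and the `ServiceBusConnection` setting are placeholder names:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class ServiceBusOutputFunction
{
    // The returned string becomes the body of the outgoing queue message.
    [Function("ServiceBusOutput")]
    [ServiceBusOutput("outputQueue", Connection = "ServiceBusConnection")]
    public string Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        return req.ReadAsString() ?? string.Empty;
    }
}
```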
azure-functions | Functions Bindings Service Bus Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md | This article supports both programming models. [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue: ++++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that reads [message metadata](#message-metadata) and logs a Service Bus queue message: public static void Run( log.LogInformation($"MessageId={messageId}"); } ```-# [Isolated process](#tab/isolated-process) --The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue: --- ::: zone-end def main(msg: azf.ServiceBusMessage) -> str: Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#service-bus-trigger). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) The following table explains the properties you can set using this trigger attribute: The following table explains the properties you can set using this trigger attri |**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.| |**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.| |**Connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|-|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.| |**IsBatched**| Messages are delivered in batches. Requires an array or collection type. | |**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. 
`false` otherwise, which is the default value.|-|**AutoComplete**|`true` Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed. | -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) The following table explains the properties you can set using this trigger attribute: The following table explains the properties you can set using this trigger attri |**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.| |**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.| |**Connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|+|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.| |**IsBatched**| Messages are delivered in batches. Requires an array or collection type. | |**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|+|**AutoComplete**|`true` Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed. | [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] |
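For the trigger properties tabulated above, here's a minimal isolated-worker sketch; `myqueue` and `ServiceBusConnection` are placeholder names:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ServiceBusTriggerFunction
{
    [Function("ServiceBusQueueTrigger")]
    public void Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
        FunctionContext context)
    {
        context.GetLogger("ServiceBusQueueTrigger").LogInformation("Message body: {message}", message);
    }
}
```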
azure-functions | Functions Bindings Service Bus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md | Azure Functions integrates with [Azure Service Bus](https://azure.microsoft.com/ The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x or later._ +Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +Add the extension to your project installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.servicebus). -Add the extension to your project installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus). +# [In-process model](#tab/in-process) -# [Isolated process](#tab/isolated-process) +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x or later._ -Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). +Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). -Add the extension to your project installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.servicebus). +Add the extension to your project installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus). Functions 1.x apps automatically have a reference to the extension. The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: +# [Isolated worker model](#tab/isolated-process) ++An isolated worker process class library compiled C# function runs in a process isolated from the runtime. + # [In-process class library](#tab/in-process) An in-process class library is a compiled C# function runs in the same process as the Functions runtime. -# [Isolated process](#tab/isolated-process) --An isolated worker process class library compiled C# function runs in a process isolated from the runtime. - Choose a version to see binding type details for the mode and version. |
azure-functions | Functions Bindings Signalr Service Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md | For information on setup and configuration details, see the [overview](functions [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) --The following example shows a [C# function](functions-dotnet-class-library.md) that acquires SignalR connection information using the input binding and returns it over HTTP. --```cs -[FunctionName("negotiate")] -public static SignalRConnectionInfo Negotiate( - [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, - [SignalRConnectionInfo(HubName = "chat")]SignalRConnectionInfo connectionInfo) -{ - return connectionInfo; -} -``` --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) The following example shows a [C# function](dotnet-isolated-process-guide.md) that acquires SignalR connection information using the input binding and returns it over HTTP. :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/SignalR/SignalRNegotiationFunctions.cs" id="snippet_negotiate"::: -# [C# Script](#tab/csharp-script) --The following example shows a SignalR connection info input binding in a *function.json* file and a [C# Script function](functions-reference-csharp.md) that uses the binding to return the connection information. --Here's binding data in the *function.json* file: --Example function.json: +# [In-process model](#tab/in-process) -```json -{ - "type": "signalRConnectionInfo", - "name": "connectionInfo", - "hubName": "chat", - "connectionStringSetting": "<name of setting containing SignalR Service connection string>", - "direction": "in" -} -``` --Here's the C# Script code: +The following example shows a [C# function](functions-dotnet-class-library.md) that acquires SignalR connection information using the input binding and returns it over HTTP. ```cs-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" -using Microsoft.Azure.WebJobs.Extensions.SignalRService; --public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo connectionInfo) +[FunctionName("negotiate")] +public static SignalRConnectionInfo Negotiate( + [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, + [SignalRConnectionInfo(HubName = "chat")]SignalRConnectionInfo connectionInfo) { return connectionInfo; } App Service authentication sets HTTP headers named `x-ms-client-principal-id` an ::: zone pivot="programming-language-csharp" -# [In-process](#tab/in-process) --You can set the `UserId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`. 
--```cs -[FunctionName("negotiate")] -public static SignalRConnectionInfo Negotiate( - [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, - [SignalRConnectionInfo - (HubName = "chat", UserId = "{headers.x-ms-client-principal-id}")] - SignalRConnectionInfo connectionInfo) -{ - // connectionInfo contains an access key token with a name identifier claim set to the authenticated user - return connectionInfo; -} -``` --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) ```cs [Function("Negotiate")] public static string Negotiate([HttpTrigger(AuthorizationLevel.Anonymous)] HttpR } ``` -# [C# Script](#tab/csharp-script) --You can set the `userId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`. --Example function.json: +# [In-process model](#tab/in-process) -```json -{ - "type": "signalRConnectionInfo", - "name": "connectionInfo", - "hubName": "chat", - "userId": "{headers.x-ms-client-principal-id}", - "connectionStringSetting": "<name of setting containing SignalR Service connection string>", - "direction": "in" -} -``` --Here's the C# Script code: +You can set the `UserId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`. ```cs-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" -using Microsoft.Azure.WebJobs.Extensions.SignalRService; --public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo connectionInfo) +[FunctionName("negotiate")] +public static SignalRConnectionInfo Negotiate( + [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, + [SignalRConnectionInfo + (HubName = "chat", UserId = "{headers.x-ms-client-principal-id}")] + SignalRConnectionInfo connectionInfo) {- // connectionInfo contains an access key token with a name identifier - // claim set to the authenticated user + // connectionInfo contains an access key token with a name identifier claim set to the authenticated user return connectionInfo; } ```+ ::: zone-end public SignalRConnectionInfo negotiate( Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -The following table explains the properties of the `SignalRConnectionInfo` attribute: +The following table explains the properties of the `SignalRConnectionInfoInput` attribute: | Attribute property |Description| ||-| The following table explains the properties of the `SignalRConnectionInfo` attri |**IdToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **ClaimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. | |**ClaimTypeList**| Optional. A list of claim types, which filter the claims in **IdToken** . 
| -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) -The following table explains the properties of the `SignalRConnectionInfoInput` attribute: +The following table explains the properties of the `SignalRConnectionInfo` attribute: | Attribute property |Description| ||-| The following table explains the properties of the `SignalRConnectionInfoInput` |**IdToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **ClaimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. | |**ClaimTypeList**| Optional. A list of claim types, which filter the claims in **IdToken** . | -# [C# Script](#tab/csharp-script) --The following table explains the binding configuration properties that you set in the *function.json* file. --|function.json property | Description| -||--| -|**type**| Must be set to `signalRConnectionInfo`.| -|**direction**| Must be set to `in`.| -|**name**| Variable name used in function code for connection info object. | -|**hubName**| Required. The hub name. | -|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. | -|**userId**| Optional. The user identifier of a SignalR connection. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. | -|**idToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **claimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. | -|**claimTypeList**| Optional. A list of claim types, which filter the claims in **idToken** . | - ::: zone-end |
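For comparison with the in-process negotiate function shown above, a minimal isolated worker model version might look like the following sketch. It binds the serialized connection information to a `string` and returns it over HTTP; the hub name `chat` is illustrative.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class NegotiateFunction
{
    // Returns the SignalR connection details (URL and access token) to the caller.
    [Function("negotiate")]
    public string Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req,
        [SignalRConnectionInfoInput(HubName = "chat")] string connectionInfo)
    {
        return connectionInfo;
    }
}
```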
azure-functions | Functions Bindings Signalr Service Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md | For information on setup and configuration details, see the [overview](functions ### Broadcast to all clients -# [In-process](#tab/in-process) --The following example shows a function that sends a message using the output binding to all connected clients. The *target* is the name of the method to be invoked on each client. The *Arguments* property is an array of zero or more objects to be passed to the client method. --```cs -[FunctionName("SendMessage")] -public static Task SendMessage( - [HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message, - [SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages) -{ - return signalRMessages.AddAsync( - new SignalRMessage - { - Target = "newMessage", - Arguments = new [] { message } - }); -} -``` --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) The following example shows a function that sends a message using the output binding to all connected clients. The *newMessage* is the name of the method to be invoked on each client. :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/SignalR/SignalROutputBindingFunctions2.cs" id="snippet_broadcast_to_all"::: -# [C# Script](#tab/csharp-script) --Here's binding data in the *function.json* file: --Example function.json: --```json -{ - "type": "signalR", - "name": "signalRMessages", - "hubName": "<hub_name>", - "connectionStringSetting": "<name of setting containing SignalR Service connection string>", - "direction": "out" -} -``` +# [In-process model](#tab/in-process) -Here's the C# Script code: +The following example shows a function that sends a message using the output binding to all connected clients. The *target* is the name of the method to be invoked on each client. The *Arguments* property is an array of zero or more objects to be passed to the client method. ```cs-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" -using Microsoft.Azure.WebJobs.Extensions.SignalRService; --public static Task Run( - object message, - IAsyncCollector<SignalRMessage> signalRMessages) +[FunctionName("SendMessage")] +public static Task SendMessage( + [HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message, + [SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages) { return signalRMessages.AddAsync( new SignalRMessage public SignalRMessage sendMessage( You can send a message only to connections that have been authenticated to a user by setting the *user ID* in the SignalR message. 
-# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) +++# [In-process model](#tab/in-process) ```cs [FunctionName("SendMessage")] public static Task SendMessage( } ``` -# [Isolated process](#tab/isolated-process) ---# [C# Script](#tab/csharp-script) --Example function.json: --```json -{ - "type": "signalR", - "name": "signalRMessages", - "hubName": "<hub_name>", - "connectionStringSetting": "<name of setting containing SignalR Service connection string>", - "direction": "out" -} -``` --Here's the C# script code: --```cs -#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" -using Microsoft.Azure.WebJobs.Extensions.SignalRService; --public static Task Run( - object message, - IAsyncCollector<SignalRMessage> signalRMessages) -{ - return signalRMessages.AddAsync( - new SignalRMessage - { - // the message will only be sent to this user ID - UserId = "userId1", - Target = "newMessage", - Arguments = new [] { message } - }); -} -``` - ::: zone-end public SignalRMessage sendMessage( You can send a message only to connections that have been added to a group by setting the *group name* in the SignalR message. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) +++# [In-process model](#tab/in-process) ```cs [FunctionName("SendMessage")] public static Task SendMessage( }); } ```-# [Isolated process](#tab/isolated-process) ---# [C# Script](#tab/csharp-script) --Example function.json: --```json -{ - "type": "signalR", - "name": "signalRMessages", - "hubName": "<hub_name>", - "connectionStringSetting": "<name of setting containing SignalR Service connection string>", - "direction": "out" -} -``` --Here's the C# Script code: --```cs -#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" -using Microsoft.Azure.WebJobs.Extensions.SignalRService; --public static Task Run( - object message, - IAsyncCollector<SignalRMessage> signalRMessages) -{ - return signalRMessages.AddAsync( - new SignalRMessage - { - // the message will be sent to the group with this name - GroupName = "myGroup", - Target = "newMessage", - Arguments = new [] { message } - }); -} -``` - ::: zone-end public SignalRMessage sendMessage( SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage groups. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++Specify `SignalRGroupActionType` to add or remove a member. The following example removes a user from a group. +++# [In-process model](#tab/in-process) Specify `GroupAction` to add or remove a member. The following example adds a user to a group. public static Task AddToGroup( } ``` -# [Isolated process](#tab/isolated-process) --Specify `SignalRGroupActionType` to add or remove a member. The following example removes a user from a group. ---# [C# Script](#tab/csharp-script) --The following example adds a user to a group. 
--Example *function.json* --```json -{ - "type": "signalR", - "name": "signalRGroupActions", - "connectionStringSetting": "<name of setting containing SignalR Service connection string>", - "hubName": "chat", - "direction": "out" -} -``` --*Run.csx* --```cs -#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" -using Microsoft.Azure.WebJobs.Extensions.SignalRService; --public static Task Run( - HttpRequest req, - ClaimsPrincipal claimsPrincipal, - IAsyncCollector<SignalRGroupAction> signalRGroupActions) -{ - var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier); - return signalRGroupActions.AddAsync( - new SignalRGroupAction - { - UserId = userIdClaim.Value, - GroupName = "myGroup", - Action = GroupAction.Add - }); -} -``` --The following example removes a user from a group. --Example *function.json* --```json -{ - "type": "signalR", - "name": "signalRGroupActions", - "connectionStringSetting": "<name of setting containing SignalR Service connection string>", - "hubName": "chat", - "direction": "out" -} -``` --*Run.csx* --```cs -#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" -using Microsoft.Azure.WebJobs.Extensions.SignalRService; --public static Task Run( - HttpRequest req, - ClaimsPrincipal claimsPrincipal, - IAsyncCollector<SignalRGroupAction> signalRGroupActions) -{ - var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier); - return signalRGroupActions.AddAsync( - new SignalRGroupAction - { - UserId = userIdClaim.Value, - GroupName = "myGroup", - Action = GroupAction.Remove - }); -} -``` - > [!NOTE] public SignalRGroupAction removeFromGroup( ## Attributes -Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file. +Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a [function.json configuration file](#configuration). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -The following table explains the properties of the `SignalR` output attribute. +The following table explains the properties of the `SignalROutput` attribute. | Attribute property |Description| ||-| |**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.| |**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |+# [In-process model](#tab/in-process) --# [Isolated process](#tab/isolated-process) --The following table explains the properties of the `SignalROutput` attribute. +The following table explains the properties of the `SignalR` output attribute. | Attribute property |Description| ||-| |**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.| |**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. | -# [C# Script](#tab/csharp-script) --The following table explains the binding configuration properties that you set in the *function.json* file. 
--|function.json property | Description| -||-| -|**type**| Must be set to `signalR`.| -|**direction**|Must be set to `out`.| -|**name**| Variable name used in function code for connection info object. | -|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.| -|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. | |
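A minimal isolated worker model broadcast, sketched under the assumption that the worker SignalR extension exposes a `SignalRMessageAction` type with a settable `Arguments` property; the hub name `chat` and the hard-coded message are illustrative:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class BroadcastFunction
{
    [Function("SendMessage")]
    [SignalROutput(HubName = "chat")]
    public SignalRMessageAction Broadcast(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        // Invokes the client-side method "newMessage" on every connected client.
        return new SignalRMessageAction("newMessage")
        {
            Arguments = new object[] { "Hello from Azure Functions" }
        };
    }
}
```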
azure-functions | Functions Bindings Signalr Service Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md | For information on setup and configuration details, see the [overview](functions [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following sample shows a C# function that receives a message event from clients and logs the message content. ++++# [In-process model](#tab/in-process) SignalR Service trigger binding for C# has two programming models. Class based model and traditional model. Class based model provides a consistent SignalR server-side programming experience. Traditional model provides more flexibility and is similar to other function bindings. public static async Task Run([SignalRTrigger("SignalRTest", "messages", "SendMes } ``` --# [Isolated process](#tab/isolated-process) --The following sample shows a C# function that receives a message event from clients and logs the message content. ----# [C# Script](#tab/csharp-script) --Here's example binding data in the *function.json* file: --```json -{ - "type": "signalRTrigger", - "name": "invocation", - "hubName": "SignalRTest", - "category": "messages", - "event": "SendMessage", - "parameterNames": [ - "message" - ], - "direction": "in" -} -``` --And, here's the code: --```cs -#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" -using System; -using Microsoft.Azure.WebJobs.Extensions.SignalRService; -using Microsoft.Extensions.Logging; --public static void Run(InvocationContext invocation, string message, ILogger logger) -{ - logger.LogInformation($"Receive {message} from {invocationContext.ConnectionId}."); -} -``` - ::: zone-end def main(invocation) -> None: ## Attributes -Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `SignalRTrigger` attribute to define the function. C# script instead uses a function.json configuration file. +Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `SignalRTrigger` attribute to define the function. C# script instead uses a [function.json configuration file](#configuration). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) The following table explains the properties of the `SignalRTrigger` attribute. The following table explains the properties of the `SignalRTrigger` attribute. |**ParameterNames**| (Optional) A list of names that binds to the parameters. | |**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. | -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) The following table explains the properties of the `SignalRTrigger` attribute. The following table explains the properties of the `SignalRTrigger` attribute. |**ParameterNames**| (Optional) A list of names that binds to the parameters. | |**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. | -# [C# script](#tab/csharp-script) --C# script uses a function.json file for configuration instead of attributes. 
--The following table explains the binding configuration properties for C# script that you set in the *function.json* file. --|function.json property |Description| -||--| -|**type**| Must be set to `SignalRTrigger`.| -|**direction**| Must be set to `in`.| -|**name**| Variable name used in function code for trigger invocation context object. | -|**hubName**| This value must be set to the name of the SignalR hub for the function to be triggered.| -|**category**| This value must be set as the category of messages for the function to be triggered. The category can be one of the following values: <ul><li>**connections**: Including *connected* and *disconnected* events</li><li>**messages**: Including all other events except those in *connections* category</li></ul> | -|**event**| This value must be set as the event of messages for the function to be triggered. For *messages* category, event is the *target* in [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding) that clients send. For *connections* category, only *connected* and *disconnected* is used. | -|**parameterNames**| (Optional) A list of names that binds to the parameters. | -|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. | - ::: zone-end |
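As a concrete illustration of the traditional in-process model this row describes, a trigger that receives a client's `SendMessage` invocation might look like the following sketch; the hub name `SignalRTest` and the single `message` parameter are illustrative:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;
using Microsoft.Extensions.Logging;

public static class OnClientMessage
{
    // Fires when a client invokes SendMessage(...) on the "SignalRTest" hub.
    [FunctionName("OnClientMessage")]
    public static void Run(
        [SignalRTrigger("SignalRTest", "messages", "SendMessage", parameterNames: new string[] { "message" })]
        InvocationContext invocationContext,
        string message,
        ILogger logger)
    {
        logger.LogInformation($"Received {message} from {invocationContext.ConnectionId}.");
    }
}
```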
azure-functions | Functions Bindings Signalr Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service.md | This set of articles explains how to authenticate and send real-time messages to The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) --Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). --Add the extension to your project by installing this [NuGet package]. --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SignalRService/). -# [C# script](#tab/csharp-script) +# [In-process model](#tab/in-process) -Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. +Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). -You can install this version of the extension in your function app by registering the [extension bundle], version 2.x, or a later version. +Add the extension to your project by installing this [NuGet package]. |
azure-functions | Functions Bindings Storage Blob Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md | This article supports both programming models. [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] 

+# [Isolated process](#tab/isolated-process)
+
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
+

# [In-process](#tab/in-process)

The following example is a [C# function](functions-dotnet-class-library.md) that uses a queue trigger and an input blob binding. The queue message contains the name of the blob, and the function logs the size of the blob.

public static void Run(
}
```

-# [Isolated process](#tab/isolated-process)
--The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
--

::: zone-end

def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:

Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-input).

+# [Isolated process](#tab/isolated-process)
+
+An isolated worker process function defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
+
+|Parameter | Description|
+||-|
+|**BlobPath** | The path to the blob.|
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
+
# [In-process](#tab/in-process)

In [C# class libraries](functions-dotnet-class-library.md), use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute), which takes the following parameters:

public static void Run(

[!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)]

-# [Isolated process](#tab/isolated-process)
--isolated worker process defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
--|Parameter | Description|
-||-|
-|**BlobPath** | The path to the blob.|
-|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-

[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]

See the [Example section](#example) for complete examples. The binding types supported by Blob input depend on the extension package version and the C# modality used in your function app.

-# [In-process](#tab/in-process)

-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.

- # [Isolated process](#tab/isolated-process)

[!INCLUDE [functions-bindings-storage-blob-input-dotnet-isolated-types](../../includes/functions-bindings-storage-blob-input-dotnet-isolated-types.md)]

+# [In-process](#tab/in-process)
+
+See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
+

Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage). |
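To illustrate the `BlobInputAttribute` parameters in the table above, here's a minimal isolated worker sketch that reads a blob named by a queue message; the queue and container names are placeholder assumptions:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class BlobInputFunction
{
    // The queue message carries the blob name; {queueTrigger} injects it into the blob path.
    [Function("LogBlobSize")]
    public void Run(
        [QueueTrigger("blob-name-queue")] string blobName,
        [BlobInput("test-samples-input/{queueTrigger}")] string blobContent,
        FunctionContext context)
    {
        context.GetLogger("LogBlobSize")
            .LogInformation("Blob {Name} is {Length} characters long.", blobName, blobContent.Length);
    }
}
```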
azure-functions | Functions Bindings Storage Blob Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md | This article supports both programming models. [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file. +++# [In-process model](#tab/in-process) The following example is a [C# function](functions-dotnet-class-library.md) that runs in-process and uses a blob trigger and two output blob bindings. The function is triggered by the creation of an image blob in the *sample-images* container. It creates small and medium size copies of the image blob. public class ResizeImages } ``` -# [Isolated process](#tab/isolated-process) --The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file. -- ::: zone-end def main(queuemsg: func.QueueMessage, inputblob: bytes, outputblob: func.Out[byt Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-output). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The `BlobOutputAttribute` constructor takes the following parameters: ++|Parameter | Description| +||-| +|**BlobPath** | The path to the blob.| +|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).| ++# [In-process model](#tab/in-process) The [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute's constructor takes the following parameters: public static void Run( [!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)] -# [Isolated process](#tab/isolated-process) --The `BlobOutputAttribute` constructor takes the following parameters: --|Parameter | Description| -||-| -|**BlobPath** | The path to the blob.| -|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).| - [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] See the [Example section](#example) for complete examples. The binding types supported by blob output depend on the extension package version and the C# modality used in your function app. 
-# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) +See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. |
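A minimal isolated worker sketch of the `BlobOutputAttribute` usage this row describes; the container names mirror the article's sample layout but are otherwise illustrative:

```csharp
using Microsoft.Azure.Functions.Worker;

public class BlobCopyFunction
{
    // Triggered by new blobs; the return value is written to the output path.
    [Function("CopyTextBlob")]
    [BlobOutput("test-samples-output/{name}-output.txt")]
    public string Run(
        [BlobTrigger("test-samples-trigger/{name}")] string input)
    {
        return input;
    }
}
```

Returning the value and letting the binding write it keeps the function free of any storage SDK code.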
azure-functions | Functions Bindings Storage Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md | This article supports both programming models. [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file. +++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that writes a log when a blob is added or updated in the `samples-workitems` container. The string `{name}` in the blob trigger path `samples-workitems/{name}` creates For more information about the `BlobTrigger` attribute, see [Attributes](#attributes). -# [Isolated process](#tab/isolated-process) --The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file. -- ::: zone-end The attribute's constructor takes the following parameters: |**Access** | Indicates whether you will be reading or writing.| |**Source** | Sets the source of the triggering event. Use `BlobTriggerSource.EventGrid` for an [Event Grid-based blob trigger](functions-event-grid-blob-trigger.md), which provides much lower latency. The default is `BlobTriggerSource.LogsAndContainerScan`, which uses the standard polling mechanism to detect changes in the container. | -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++Here's an `BlobTrigger` attribute in a method signature: +++# [In-process model](#tab/in-process) In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes a path string that indicates the container to watch and optionally a [blob name pattern](#blob-name-patterns). Here's an example: public static void Run( [!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)] -# [Isolated process](#tab/isolated-process) --Here's an `BlobTrigger` attribute in a method signature: -- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] Metadata is available through the `$TriggerMetadata` parameter. The binding types supported by Blob trigger depend on the extension package version and the C# modality used in your function app. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) -See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) +See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. |
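For reference, a minimal isolated worker blob trigger along the lines described above; the `samples-workitems` container comes from the in-process example, while the function name is illustrative:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class BlobTriggerFunction
{
    [Function("OnWorkItemBlob")]
    public void Run(
        [BlobTrigger("samples-workitems/{name}")] string content,
        string name,
        FunctionContext context)
    {
        // {name} from the trigger path is bound to the 'name' parameter.
        context.GetLogger("OnWorkItemBlob")
            .LogInformation("Blob {Name} added or updated, {Length} characters.", name, content.Length);
    }
}
```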
azure-functions | Functions Bindings Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md | Azure Functions integrates with [Azure Storage](../storage/index.yml) via [trigg The extension NuGet package you install depends on the C# mode you're using in your function app: 

-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)

-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).

-In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+# [In-process model](#tab/in-process)

-# [Isolated process](#tab/isolated-process)
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).

-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].


Functions 1.x apps automatically have a reference to the extension.


The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:

-# [In-process](#tab/in-process)
--An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
- 
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)

An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime.

+# [In-process model](#tab/in-process)
++An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see binding type details for the mode and version. |
azure-functions | Functions Bindings Storage Queue Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md | This article supports both programming models. [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) +++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that creates a queue message for each HTTP request received. public static class QueueFunctions } ``` -# [Isolated process](#tab/isolated-process) -- ::: zone-end def main(req: func.HttpRequest, msg: func.Out[typing.List[str]]) -> func.HttpRes The attribute that defines an output binding in C# libraries depends on the mode in which the C# class library runs. -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++When running in an isolated worker process, you use the [QueueOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Storage.Queues/src/QueueOutputAttribute.cs), which takes the name of the queue, as shown in the following example: +++Only returned variables are supported when running in an isolated worker process. Output parameters can't be used. ++# [In-process model](#tab/in-process) In [C# class libraries](functions-dotnet-class-library.md), use the [QueueAttribute](/dotnet/api/microsoft.azure.webjobs.queueattribute). C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#queue-output). public static string Run([HttpTrigger] dynamic input, ILogger log) You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see Trigger - attributes. -# [Isolated process](#tab/isolated-process) --When running in an isolated worker process, you use the [QueueOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Storage.Queues/src/QueueOutputAttribute.cs), which takes the name of the queue, as shown in the following example: ---Only returned variables are supported when running in an isolated worker process. Output parameters can't be used. - ::: zone-end See the [Example section](#example) for complete examples. ::: zone pivot="programming-language-csharp" The usage of the Queue output binding depends on the extension package version and the C# modality used in your function app, which can be one of the following: -# [In-process](#tab/in-process) --An in-process class library is a compiled C# function runs in the same process as the Functions runtime. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) An isolated worker process class library compiled C# function runs in a process isolated from the runtime. +# [In-process model](#tab/in-process) ++An in-process class library is a compiled C# function runs in the same process as the Functions runtime. + Choose a version to see usage details for the mode and version. |
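A minimal sketch of the return-value pattern the isolated worker model requires for queue output; the `outqueue` name is an illustrative placeholder:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class QueueOutputFunction
{
    // The returned string is written as a single message to 'outqueue'.
    [Function("EnqueueMessage")]
    [QueueOutput("outqueue")]
    public string Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        return $"Request received at {DateTime.UtcNow:O}";
    }
}
```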
azure-functions | Functions Bindings Storage Queue Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md | Use the queue trigger to start a function when a new item is received on a queue [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following example shows a [C# function](dotnet-isolated-process-guide.md) that polls the `input-queue` queue and writes several messages to an output queue each time a queue item is processed. +++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that polls the `myqueue-items` queue and writes a log each time a queue item is processed. public static class QueueFunctions } ``` -# [Isolated process](#tab/isolated-process) --The following example shows a [C# function](dotnet-isolated-process-guide.md) that polls the `input-queue` queue and writes several messages to an output queue each time a queue item is processed. -- ::: zone-end def main(msg: func.QueueMessage): Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#queue-trigger). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++In [C# class libraries](dotnet-isolated-process-guide.md), the attribute's constructor takes the name of the queue to monitor, as shown in the following example: +++This example also demonstrates setting the [connection string setting](#connections) in the attribute itself. ++# [In-process model](#tab/in-process) In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue to monitor, as shown in the following example: public static void Run( } ``` -# [Isolated process](#tab/isolated-process) --In [C# class libraries](dotnet-isolated-process-guide.md), the attribute's constructor takes the name of the queue to monitor, as shown in the following example: ---This example also demonstrates setting the [connection string setting](#connections) in the attribute itself. - ::: zone-end See the [Example section](#example) for complete examples. The usage of the Queue trigger depends on the extension package version, and the C# modality used in your function app, which can be one of the following: +# [Isolated worker model](#tab/isolated-process) ++An isolated worker process class library compiled C# function runs in a process isolated from the runtime. + # [In-process class library](#tab/in-process) An in-process class library is a compiled C# function runs in the same process as the Functions runtime. -# [Isolated process](#tab/isolated-process) --An isolated worker process class library compiled C# function runs in a process isolated from the runtime. - Choose a version to see usage details for the mode and version. |
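For reference, a minimal isolated worker queue trigger; the queue name and connection setting are illustrative:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class QueueTriggerFunction
{
    [Function("ProcessQueueMessage")]
    public void Run(
        [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string message,
        FunctionContext context)
    {
        context.GetLogger("ProcessQueueMessage")
            .LogInformation("Dequeued message: {Message}", message);
    }
}
```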
azure-functions | Functions Bindings Storage Queue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md | Azure Functions can run as new Azure Queue storage messages are created and can The extension NuGet package you install depends on the C# mode you're using in your function app: 

-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)

-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).

-In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+# [In-process model](#tab/in-process)

-# [Isolated process](#tab/isolated-process)
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).

-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].


Functions 1.x apps automatically have a reference to the extension.


The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:

-# [In-process](#tab/in-process)
--An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
- 
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)

An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime.

+# [In-process model](#tab/in-process)
++An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see binding type details for the mode and version. |
azure-functions | Functions Bindings Storage Table Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md | For information on setup and configuration details, see the [overview](./functio The usage of the binding depends on the extension package version and the C# modality used in your function app, which can be one of the following: -# [In-process](#tab/in-process) --An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function runs in the same process as the Functions runtime. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime. +# [In-process model](#tab/in-process) ++An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function runs in the same process as the Functions runtime. + Choose a version to see examples for the mode and version. With this simple binding, you can't programmatically handle a case in which no r Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#table-input). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttribute` supports the following properties: ++| Attribute property |Description| +||| +| **TableName** | The name of the table.| +| **PartitionKey** |Optional. The partition key of the table entity to read. | +|**RowKey** | Optional. The row key of the table entity to read. | +| **Take** | Optional. The maximum number of entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`.| +|**Filter** | Optional. An OData filter expression for entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`. | +|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). | ++# [In-process model](#tab/in-process) In [C# class libraries](functions-dotnet-class-library.md), the `TableAttribute` supports the following properties: public static void Run( [!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)] -# [Isolated process](#tab/isolated-process) --In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttribute` supports the following properties: --| Attribute property |Description| -||| -| **TableName** | The name of the table.| -| **PartitionKey** |Optional. The partition key of the table entity to read. | -|**RowKey** | Optional. The row key of the table entity to read. | -| **Take** | Optional. The maximum number of entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`.| -|**Filter** | Optional. An OData filter expression for entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`. | -|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). 
| - ::: zone-end The following table explains the binding configuration properties that you set i The usage of the binding depends on the extension package version, and the C# modality used in your function app, which can be one of the following: -# [In-process](#tab/in-process) --An in-process class library is a compiled C# function that runs in the same process as the Functions runtime. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) An isolated worker process class library compiled C# function runs in a process isolated from the runtime. +# [In-process model](#tab/in-process) ++An in-process class library is a compiled C# function that runs in the same process as the Functions runtime. + Choose a version to see usage details for the mode and version. |
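To make the `TableInputAttribute` properties above concrete, a minimal isolated worker sketch that reads one row by partition and row key; the table name, keys, and POCO shape are illustrative assumptions:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

public class TableInputFunction
{
    public class ProductEntity
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Name { get; set; }
    }

    // Reads the single entity with the given partition key and row key.
    [Function("ReadProduct")]
    public void Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req,
        [TableInput("Products", "swag", "001")] ProductEntity product,
        FunctionContext context)
    {
        context.GetLogger("ReadProduct").LogInformation("Found product: {Name}", product?.Name);
    }
}
```

Omitting `RowKey` and supplying `Take` or `Filter` instead would bind multiple entities to an `IEnumerable<T>`, per the table above.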
azure-functions | Functions Bindings Storage Table Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md | For information on setup and configuration details, see the [overview](./functio [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) --The following example shows a [C# function](functions-dotnet-class-library.md) that uses an HTTP trigger to write a single table row. --```csharp -public class TableStorage -{ - public class MyPoco - { - public string PartitionKey { get; set; } - public string RowKey { get; set; } - public string Text { get; set; } - } -- [FunctionName("TableOutput")] - [return: Table("MyTable")] - public static MyPoco TableOutput([HttpTrigger] dynamic input, ILogger log) - { - log.LogInformation($"C# http trigger function processed: {input.Text}"); - return new MyPoco { PartitionKey = "Http", RowKey = Guid.NewGuid().ToString(), Text = input.Text }; - } -} -``` ---# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) The following `MyTableData` class represents a row of data in the table: public static MyTableData Run( } ``` +# [In-process model](#tab/in-process) ++The following example shows a [C# function](functions-dotnet-class-library.md) that uses an HTTP trigger to write a single table row. ++```csharp +public class TableStorage +{ + public class MyPoco + { + public string PartitionKey { get; set; } + public string RowKey { get; set; } + public string Text { get; set; } + } ++ [FunctionName("TableOutput")] + [return: Table("MyTable")] + public static MyPoco TableOutput([HttpTrigger] dynamic input, ILogger log) + { + log.LogInformation($"C# http trigger function processed: {input.Text}"); + return new MyPoco { PartitionKey = "Http", RowKey = Guid.NewGuid().ToString(), Text = input.Text }; + } +} +``` ++ ::: zone-end def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse: Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#table-output). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttribute` supports the following properties: ++| Attribute property |Description| +||| +|**TableName** | The name of the table to which to write.| +|**PartitionKey** | The partition key of the table entity to write. | +|**RowKey** | The row key of the table entity to write. | +|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). 
| ++# [In-process model](#tab/in-process) In [C# class libraries](functions-dotnet-class-library.md), the `TableAttribute` supports the following properties: public static MyPoco TableOutput( [!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)] -# [Isolated process](#tab/isolated-process) --In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttribute` supports the following properties: --| Attribute property |Description| -||| -|**TableName** | The name of the table to which to write.| -|**PartitionKey** | The partition key of the table entity to write. | -|**RowKey** | The row key of the table entity to write. | -|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). | - ::: zone-end The following table explains the binding configuration properties that you set i The usage of the binding depends on the extension package version, and the C# modality used in your function app, which can be one of the following: -# [In-process](#tab/in-process) --An in-process class library is a compiled C# function runs in the same process as the Functions runtime. - -# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) An isolated worker process class library compiled C# function runs in a process isolated from the runtime. +# [In-process model](#tab/in-process) ++An in-process class library is a compiled C# function runs in the same process as the Functions runtime. + Choose a version to see usage details for the mode and version. |
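A minimal isolated worker sketch of the return-value pattern for table output; the POCO mirrors the `MyTableData` shape used in the article, and the queue name is illustrative:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;

public class TableOutputFunction
{
    public class MyTableData
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Text { get; set; }
    }

    // The returned entity is inserted as a new row in MyTable.
    [Function("WriteTableRow")]
    [TableOutput("MyTable")]
    public MyTableData Run([QueueTrigger("table-items")] string text)
    {
        return new MyTableData
        {
            PartitionKey = "queue",
            RowKey = Guid.NewGuid().ToString(),
            Text = text
        };
    }
}
```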
azure-functions | Functions Bindings Storage Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md | Azure Functions integrates with [Azure Tables](../cosmos-db/table/introduction.m The extension NuGet package you install depends on the C# mode you're using in your function app: 

-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)

-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).

-In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+# [In-process model](#tab/in-process)

-# [Isolated process](#tab/isolated-process)
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).

-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].


Functions 1.x apps automatically have a reference to the extension.


The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:

-# [In-process](#tab/in-process)
--An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
- 
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)

An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime.

+# [In-process model](#tab/in-process)
++An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see binding type details for the mode and version. |
azure-functions | Functions Bindings Timer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md | This example shows a C# function that executes each time the minutes have a valu [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) +++# [In-process model](#tab/in-process) ```csharp [FunctionName("TimerTriggerCSharp")] public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger } ``` -# [Isolated process](#tab/isolated-process) -- ::: zone-end Write-Host "PowerShell timer trigger function ran! TIME: $currentU ::: zone pivot="programming-language-csharp" ## Attributes -[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#timer-trigger). +[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function. C# script instead uses a [function.json configuration file](#configuration). -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) |Attribute property | Description| ||-| Write-Host "PowerShell timer trigger function ran! TIME: $currentU |**RunOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity. when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **RunOnStartup** should rarely if ever be set to `true`, especially in production. | |**UseMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. 
| -# [Isolated process](#tab/isolated-process) +# [In-process model](#tab/in-process) |Attribute property | Description| ||-| |
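The isolated worker model example in this change is pulled in from an include file, so it doesn't render in this digest. Here's a minimal sketch of the equivalent timer function for that model — not from the commit above — assuming the `Microsoft.Azure.Functions.Worker.Extensions.Timer` package; the function name is illustrative.

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class TimerTriggerCSharp
{
    // Minimal sketch (not from the commit): the NCRONTAB expression
    // "0 */5 * * * *" fires once every five minutes.
    [Function("TimerTriggerCSharp")]
    public void Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer,
        FunctionContext context)
    {
        var logger = context.GetLogger("TimerTriggerCSharp");
        logger.LogInformation("C# Timer trigger function executed at: {now}", DateTime.Now);
    }
}
```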
azure-functions | Functions Bindings Twilio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md | This article explains how to send text messages by using [Twilio](https://www.tw The extension NuGet package you install depends on the C# mode you're using in your function app: -# [In-process](#tab/in-process) --Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -# [C# script](#tab/csharp-script) +# [In-process model](#tab/in-process) -Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. +Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). There is currently no support for Twilio for an isolated worker process app. Functions 1.x doesn't support running in an isolated worker process. -# [Functions v2.x+](#tab/functionsv2/csharp-script) --This version of the extension should already be available to your function app with [extension bundle], version 2.x. --# [Functions 1.x](#tab/functionsv1/csharp-script) --You can add the extension to your project by explicitly installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Twilio), version 1.x. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions). - ::: zone-end Unless otherwise noted, these examples are specific to version 2.x and later ver ::: zone pivot="programming-language-csharp" [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The Twilio binding isn't currently supported for a function app running in an isolated worker process. ++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that sends a text message when triggered by a queue message. namespace TwilioQueueOutput This example uses the `TwilioSms` attribute with the method return value. An alternative is to use the attribute with an `out CreateMessageOptions` parameter or an `ICollector<CreateMessageOptions>` or `IAsyncCollector<CreateMessageOptions>` parameter. -# [Isolated process](#tab/isolated-process) --The Twilio binding isn't currently supported for a function app running in an isolated worker process. --# [C# Script](#tab/csharp-script) --The following example shows a Twilio output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses an `out` parameter to send a text message. 
--Here's binding data in the *function.json* file: --Example function.json: --```json -{ - "type": "twilioSms", - "name": "message", - "accountSidSetting": "TwilioAccountSid", - "authTokenSetting": "TwilioAuthToken", - "from": "+1425XXXXXXX", - "direction": "out", - "body": "Azure Functions Testing" -} -``` --Here's C# script code: --```cs -#r "Newtonsoft.Json" -#r "Twilio" -#r "Microsoft.Azure.WebJobs.Extensions.Twilio" --using System; -using Microsoft.Extensions.Logging; -using Newtonsoft.Json; -using Microsoft.Azure.WebJobs.Extensions.Twilio; -using Twilio.Rest.Api.V2010.Account; -using Twilio.Types; --public static void Run(string myQueueItem, out CreateMessageOptions message, ILogger log) -{ - log.LogInformation($"C# Queue trigger function processed: {myQueueItem}"); -- // In this example the queue item is a JSON string representing an order that contains the name of a - // customer and a mobile number to send text updates to. - dynamic order = JsonConvert.DeserializeObject(myQueueItem); - string msg = "Hello " + order.name + ", thank you for your order."; -- // You must initialize the CreateMessageOptions variable with the "To" phone number. - message = new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX")); -- // A dynamic message can be set instead of the body in the output binding. In this example, we use - // the order information to personalize a text message. - message.Body = msg; -} -``` --You can't use out parameters in asynchronous code. Here's an asynchronous C# script code example: --```cs -#r "Newtonsoft.Json" -#r "Twilio" -#r "Microsoft.Azure.WebJobs.Extensions.Twilio" --using System; -using Microsoft.Extensions.Logging; -using Newtonsoft.Json; -using Microsoft.Azure.WebJobs.Extensions.Twilio; -using Twilio.Rest.Api.V2010.Account; -using Twilio.Types; --public static async Task Run(string myQueueItem, IAsyncCollector<CreateMessageOptions> message, ILogger log) -{ - log.LogInformation($"C# Queue trigger function processed: {myQueueItem}"); -- // In this example the queue item is a JSON string representing an order that contains the name of a - // customer and a mobile number to send text updates to. - dynamic order = JsonConvert.DeserializeObject(myQueueItem); - string msg = "Hello " + order.name + ", thank you for your order."; -- // You must initialize the CreateMessageOptions variable with the "To" phone number. - CreateMessageOptions smsText = new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX")); -- // A dynamic message can be set instead of the body in the output binding. In this example, we use - // the order information to personalize a text message. - smsText.Body = msg; -- await message.AddAsync(smsText); -} -``` - ::: zone-end public class TwilioOutput { ::: zone pivot="programming-language-csharp" ## Attributes -Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file. +Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a [function.json configuration file](#configuration). ++# [Isolated worker model](#tab/isolated-process) ++The Twilio binding isn't currently supported for a function app running in an isolated worker process. 
-# [In-process](#tab/in-process) +# [In-process model](#tab/in-process) In [in-process](functions-dotnet-class-library.md) function apps, use the [TwilioSmsAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs), which supports the following parameters. In [in-process](functions-dotnet-class-library.md) function apps, use the [Twili | **Body**| This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function. | -# [Isolated process](#tab/isolated-process) --The Twilio binding isn't currently supported for a function app running in an isolated worker process. --# [C# Script](#tab/csharp-script) - ::: zone-end |
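To make the in-process pattern described above concrete — the `TwilioSms` attribute applied to the method return value — here's a minimal sketch, not from the commit above, assuming the `Microsoft.Azure.WebJobs.Extensions.Twilio` package; the app setting names, queue name, and phone numbers are placeholders.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

public static class SendOrderSms
{
    // Minimal sketch (not from the commit): the returned CreateMessageOptions
    // object is sent as a text message by the Twilio output binding.
    [FunctionName("SendOrderSms")]
    [return: TwilioSms(AccountSidSetting = "TwilioAccountSid", AuthTokenSetting = "TwilioAuthToken", From = "+1425XXXXXXX")]
    public static CreateMessageOptions Run(
        [QueueTrigger("order-queue")] string orderJson, ILogger log)
    {
        log.LogInformation("Queuing SMS for order: {order}", orderJson);
        return new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX"))
        {
            Body = "Thank you for your order."
        };
    }
}
```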
azure-functions | Functions Bindings Warmup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md | The following considerations apply when using a warmup trigger: [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] -# [In-process](#tab/in-process) +# [Isolated worker model](#tab/isolated-process) ++The following example shows a [C# function](dotnet-isolated-process-guide.md) that runs on each new instance when it's added to your app. +++# [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that runs on each new instance when it's added to your app. namespace WarmupSample { //Initialize shared dependencies here - log.LogInformation("Function App instance is warm 🌞🌞🌞"); + log.LogInformation("Function App instance is warm."); } } } ``` -# [Isolated process](#tab/isolated-process) --The following example shows a [C# function](dotnet-isolated-process-guide.md) that runs on each new instance when it's added to your app. ---# [C# Script](#tab/csharp-script) --The following example shows a warmup trigger in a *function.json* file and a [C# script function](functions-reference-csharp.md) that runs on each new instance when it's added to your app. --Here's the *function.json* file: --```json -{ - "bindings": [ - { - "type": "warmupTrigger", - "direction": "in", - "name": "warmupContext" - } - ] -} -``` --For more information, see [Attributes](#attributes). --```cs -public static void Run(WarmupContext warmupContext, ILogger log) -{ - log.LogInformation("Function App instance is warm 🌞🌞🌞"); -} -``` - ::: zone-end The following example shows a warmup trigger that runs when each new instance is ```java @FunctionName("Warmup") public void warmup( @WarmupTrigger Object warmupContext, ExecutionContext context) {- context.getLogger().info("Function App instance is warm 🌞🌞🌞"); + context.getLogger().info("Function App instance is warm."); } ``` Here's the JavaScript code: ```javascript module.exports = async function (context, warmupContext) {- context.log('Function App instance is warm 🌞🌞🌞'); + context.log('Function App instance is warm.'); }; ``` import azure.functions as func def main(warmupContext: func.Context) -> None:- logging.info('Function App instance is warm 🌞🌞🌞') + logging.info('Function App instance is warm.') ``` ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes -Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTrigger` attribute to define the function. C# script instead uses a *function.json* configuration file. +Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTrigger` attribute to define the function. C# script instead uses a [function.json configuration file](#configuration). -# [In-process](#tab/in-process) --Use the `WarmupTrigger` attribute to define the function. This attribute has no parameters. --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) Use the `WarmupTrigger` attribute to define the function. This attribute has no parameters. -# [C# script](#tab/csharp-script) --C# script uses a function.json file for configuration instead of attributes. 
+# [In-process model](#tab/in-process) -The following table explains the binding configuration properties for C# script that you set in the *function.json* file. --|function.json property |Description | -||-| -| **type** | Required - must be set to `warmupTrigger`. | -| **direction** | Required - must be set to `in`. | -| **name** | Required - the name of the binding parameter, which is usually `warmupContext`. | +Use the `WarmupTrigger` attribute to define the function. This attribute has no parameters. See the [Example section](#example) for complete examples. ::: zone pivot="programming-language-csharp" The following considerations apply to using a warmup function in C#: -# [In-process](#tab/in-process) --- Your function must be named `warmup` (case-insensitive) using the `FunctionName` attribute.-- A return value attribute isn't required.-- You must be using version `3.0.5` of the `Microsoft.Azure.WebJobs.Extensions` package, or a later version. -- You can pass a `WarmupContext` instance to the function.--# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) - Your function must be named `warmup` (case-insensitive) using the `Function` attribute. - A return value attribute isn't required. - Use the `Microsoft.Azure.Functions.Worker.Extensions.Warmup` package - You can pass an object instance to the function. -# [C# script](#tab/csharp-script) +# [In-process model](#tab/in-process) -Not supported for version 1.x of the Functions runtime. +- Your function must be named `warmup` (case-insensitive) using the `FunctionName` attribute. +- A return value attribute isn't required. +- You must be using version `3.0.5` of the `Microsoft.Azure.WebJobs.Extensions` package, or a later version. +- You can pass a `WarmupContext` instance to the function. |
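Because the isolated worker model example in this change comes from an include file that doesn't render in this digest, the following is a minimal sketch of a warmup function for that model — not from the commit above — assuming the `Microsoft.Azure.Functions.Worker.Extensions.Warmup` package.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class Warmup
{
    // Minimal sketch (not from the commit): runs once on each new instance.
    // The function must be named "warmup" (case-insensitive).
    [Function("Warmup")]
    public void Run([WarmupTrigger] object warmupContext, FunctionContext context)
    {
        // Initialize shared dependencies (caches, connections) here.
        context.GetLogger("Warmup").LogInformation("Function App instance is warm.");
    }
}
```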
azure-functions | Functions Develop Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md | Title: Develop Azure Functions by using Visual Studio Code description: Learn how to develop and test Azure Functions by using the Azure Functions extension for Visual Studio Code. ms.devlang: csharp, java, javascript, powershell, python-+ Last updated 09/01/2023 zone_pivot_groups: programming-languages-set-functions #Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects. You can connect your function to other Azure services by adding input and output ::: zone pivot="programming-language-csharp" For example, the way you define an output binding that writes data to a storage queue depends on your process model: -### [In-process](#tab/in-process) --Update the function method to add a binding parameter defined by using the `Queue` attribute. You can use an `ICollector<T>` type to represent a collection of messages. - ### [Isolated process](#tab/isolated-process) Update the function method to add a binding parameter defined by using the `QueueOutput` attribute. You can use a `MultiResponse` object to return multiple messages or multiple output streams. +### [In-process](#tab/in-process) ++Update the function method to add a binding parameter defined by using the `Queue` attribute. You can use an `ICollector<T>` type to represent a collection of messages. + ::: zone-end |
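As a concrete illustration of the isolated worker pattern this change describes, here's a minimal sketch — not from the commit above — of a `MultiResponse` type that pairs an HTTP response with a `QueueOutput` binding, assuming the `Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues` package; the type, queue, and connection names are illustrative.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

// Minimal sketch (not from the commit): returning a MultiResponse from a
// function writes Messages to the queue and sends HttpResponse to the caller.
public class MultiResponse
{
    [QueueOutput("outqueue", Connection = "AzureWebJobsStorage")]
    public string[] Messages { get; set; }

    public HttpResponseData HttpResponse { get; set; }
}
```

A function method would then declare `MultiResponse` as its return type so the runtime can fan the result out to both output bindings.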
azure-functions | Functions Develop Vs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md | As with triggers, input and output bindings are added to your function as bindin 1. Use the following command in the Package Manager Console to install a specific package: - # [In-process](#tab/in-process) + # [Isolated worker model](#tab/isolated-process) ```powershell- Install-Package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION> + Install-Package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION> ``` - # [Isolated process](#tab/isolated-process) + # [In-process model](#tab/in-process) ```powershell- Install-Package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION> + Install-Package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION> ``` The way you attach the debugger depends on your execution mode. When debugging a When you're done, you should [disable remote debugging](#disable-remote-debugging). -# [In-process](#tab/in-process) --To attach a remote debugger to a function app running in-process with the Functions host: --+ From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Attach debugger**. -- :::image type="content" source="media/functions-develop-vs/attach-to-process-in-process.png" alt-text="Screenshot of attaching the debugger from Visual Studio."::: --Visual Studio connects to your function app and enables remote debugging, if it's not already enabled. It also locates and attaches the debugger to the host process for the app. At this point, you can debug your function app as normal. --# [Isolated process](#tab/isolated-process) +# [Isolated worker model](#tab/isolated-process) To attach a remote debugger to a function app running in a process separate from the Functions host: To attach a remote debugger to a function app running in a process separate from 1. Check **Show process from all users** and then choose **dotnet.exe** and select **Attach**. When the operation completes, you're attached to your C# class library code running in an isolated worker process. At this point, you can debug your function app as normal. +# [In-process model](#tab/in-process) ++To attach a remote debugger to a function app running in-process with the Functions host: +++ From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Attach debugger**. ++ :::image type="content" source="media/functions-develop-vs/attach-to-process-in-process.png" alt-text="Screenshot of attaching the debugger from Visual Studio."::: ++Visual Studio connects to your function app and enables remote debugging, if it's not already enabled. It also locates and attaches the debugger to the host process for the app. At this point, you can debug your function app as normal. + ### Disable remote debugging |
azure-functions | Functions Reference Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md | The following assemblies are automatically added by the Azure Functions hosting The following assemblies may be referenced by simple-name, by runtime version: -# [v2.x+](#tab/functionsv2) +### [v2.x+](#tab/functionsv2) * `Newtonsoft.Json` * `Microsoft.WindowsAzure.Storage`<sup>*</sup> <sup>*</sup>Removed in version 4.x of the runtime. -# [v1.x](#tab/functionsv1) +### [v1.x](#tab/functionsv1) * `Newtonsoft.Json` * `Microsoft.WindowsAzure.Storage` The directory that contains the function script file is automatically watched fo The way that both binding extension packages and other NuGet packages are added to your function app depends on the [targeted version of the Functions runtime](functions-versions.md). -# [v2.x+](#tab/functionsv2) +### [v2.x+](#tab/functionsv2) By default, the [supported set of Functions extension NuGet packages](functions-triggers-bindings.md#supported-bindings) are made available to your C# script function app by using extension bundles. To learn more, see [Extension bundles](functions-bindings-register.md#extension-bundles). By default, Core Tools reads the function.json files and adds the required packa > [!NOTE] > For C# script (.csx), you must set `TargetFramework` to a value of `netstandard2.0`. Other target frameworks, such as `net6.0`, aren't supported. -# [v1.x](#tab/functionsv1) +### [v1.x](#tab/functionsv1) Version 1.x of the Functions runtime uses a *project.json* file to define dependencies. Here's an example *project.json* file: public static string GetEnvironmentVariable(string name) } ``` +## Retry policies ++Functions supports two built-in retry policies. For more information, see [Retry policies](functions-bindings-error-pages.md#retry-policies). ++### [Fixed delay](#tab/fixed-delay) ++Here's the retry policy in the *function.json* file: ++```json +{ + "disabled": false, + "bindings": [ + { + .... + } + ], + "retry": { + "strategy": "fixedDelay", + "maxRetryCount": 4, + "delayInterval": "00:00:10" + } +} +``` ++|*function.json* property | Description | +||-| +|strategy|Use `fixedDelay`.| +|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| +|delayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.| ++### [Exponential backoff](#tab/exponential-backoff) ++Here's the retry policy in the *function.json* file: ++```json +{ + "disabled": false, + "bindings": [ + { + .... + } + ], + "retry": { + "strategy": "exponentialBackoff", + "maxRetryCount": 5, + "minimumInterval": "00:00:10", + "maximumInterval": "00:15:00" + } +} +``` ++|*function.json* property | Description | +||-| +|strategy|Use `exponentialBackoff`.| +|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| +|minimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.| +|maximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.| +++ <a name="imperative-bindings"></a> ## Binding at runtime public static void Run(string myQueueItem, string myInputBlob, out string myOutp } ``` +### RabbitMQ trigger ++The following example shows a RabbitMQ trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. 
The function reads and logs the RabbitMQ message. ++Here's the binding data in the *function.json* file: ++```json +{ + "bindings": [ + { + "name": "myQueueItem", + "type": "rabbitMQTrigger", + "direction": "in", + "queueName": "queue", + "connectionStringSetting": "rabbitMQConnectionAppSetting" + } + ] +} +``` ++Here's the C# script code: ++```C# +using System; +using Microsoft.Extensions.Logging; ++public static void Run(string myQueueItem, ILogger log) +{ + log.LogInformation($"C# Script RabbitMQ trigger function processed: {myQueueItem}"); +} +``` + ### Queue trigger The following table explains the binding configuration properties for C# script that you set in the *function.json* file. public static async Task Run(TimerInfo myTimer, ILogger log, IAsyncCollector<str } ``` -### Cosmos DB trigger +### Azure Cosmos DB v2 trigger This section outlines support for the [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only. Here's the C# script code: } ``` -### Cosmos DB input +### Azure Cosmos DB v2 input This section outlines support for the [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only. public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, Docume } ``` -### Cosmos DB output +### Azure Cosmos DB v2 output This section outlines support for the [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only. public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> t } ``` +### Azure Cosmos DB v1 trigger ++The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified.
++Here's the binding data in the *function.json* file: ++```json +{ + "type": "cosmosDBTrigger", + "name": "documents", + "direction": "in", + "leaseCollectionName": "leases", + "connectionStringSetting": "<connection-app-setting>", + "databaseName": "Tasks", + "collectionName": "Items", + "createLeaseCollectionIfNotExists": true +} +``` ++Here's the C# script code: ++```cs + #r "Microsoft.Azure.Documents.Client" + + using System; + using Microsoft.Azure.Documents; + using System.Collections.Generic; + ++ public static void Run(IReadOnlyList<Document> documents, TraceWriter log) + { + log.Info("Documents modified " + documents.Count); + log.Info("First document Id " + documents[0].Id); + } +``` ++### Azure Cosmos DB v1 input ++This section contains the following examples: ++* [Queue trigger, look up ID from string](#queue-trigger-look-up-id-from-string-c-script) +* [Queue trigger, get multiple docs, using SqlQuery](#queue-trigger-get-multiple-docs-using-sqlquery-c-script) +* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c-script) +* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-c-script) +* [HTTP trigger, get multiple docs, using SqlQuery](#http-trigger-get-multiple-docs-using-sqlquery-c-script) +* [HTTP trigger, get multiple docs, using DocumentClient](#http-trigger-get-multiple-docs-using-documentclient-c-script) ++The HTTP trigger examples refer to a simple `ToDoItem` type: ++```cs +namespace CosmosDBSamplesV1 +{ + public class ToDoItem + { + public string Id { get; set; } + public string Description { get; set; } + } +} +``` ++<a id="queue-trigger-look-up-id-from-string-c-script"></a> ++#### Queue trigger, look up ID from string ++The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value. ++Here's the binding data in the *function.json* file: ++```json +{ + "name": "inputDocument", + "type": "documentDB", + "databaseName": "MyDatabase", + "collectionName": "MyCollection", + "id" : "{queueTrigger}", + "partitionKey": "{partition key value}", + "connection": "MyAccount_COSMOSDB", + "direction": "in" +} +``` ++Here's the C# script code: ++```cs + using System; ++ // Change input document contents using Azure Cosmos DB input binding + public static void Run(string myQueueItem, dynamic inputDocument) + { + inputDocument.text = "This has changed."; + } +``` ++<a id="queue-trigger-get-multiple-docs-using-sqlquery-c-script"></a> ++#### Queue trigger, get multiple docs, using SqlQuery ++The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters. ++The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department. 
++Here's the binding data in the *function.json* file: ++```json +{ + "name": "documents", + "type": "documentdb", + "direction": "in", + "databaseName": "MyDb", + "collectionName": "MyCollection", + "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}", + "connection": "CosmosDBConnection" +} +``` ++Here's the C# script code: ++```csharp + public static void Run(QueuePayload myQueueItem, IEnumerable<dynamic> documents) + { + foreach (var doc in documents) + { + // operate on each document + } + } ++ public class QueuePayload + { + public string departmentId { get; set; } + } +``` ++<a id="http-trigger-look-up-id-from-query-string-c-script"></a> ++#### HTTP trigger, look up ID from query string ++The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID to look up. That ID is used to retrieve a `ToDoItem` document from the specified database and collection. ++Here's the *function.json* file: ++```json +{ + "bindings": [ + { + "authLevel": "anonymous", + "name": "req", + "type": "httpTrigger", + "direction": "in", + "methods": [ + "get", + "post" + ] + }, + { + "name": "$return", + "type": "http", + "direction": "out" + }, + { + "type": "documentDB", + "name": "toDoItem", + "databaseName": "ToDoItems", + "collectionName": "Items", + "connection": "CosmosDBConnection", + "direction": "in", + "Id": "{Query.id}" + } + ], + "disabled": true +} +``` ++Here's the C# script code: ++```cs +using System.Net; ++public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, TraceWriter log) +{ + log.Info("C# HTTP trigger function processed a request."); ++ if (toDoItem == null) + { + log.Info($"ToDo item not found"); + } + else + { + log.Info($"Found ToDo item, Description={toDoItem.Description}"); + } + return req.CreateResponse(HttpStatusCode.OK); +} +``` ++<a id="http-trigger-look-up-id-from-route-data-c-script"></a> ++#### HTTP trigger, look up ID from route data ++The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID to look up. That ID is used to retrieve a `ToDoItem` document from the specified database and collection. 
++Here's the *function.json* file: ++```json +{ + "bindings": [ + { + "authLevel": "anonymous", + "name": "req", + "type": "httpTrigger", + "direction": "in", + "methods": [ + "get", + "post" + ], + "route":"todoitems/{id}" + }, + { + "name": "$return", + "type": "http", + "direction": "out" + }, + { + "type": "documentDB", + "name": "toDoItem", + "databaseName": "ToDoItems", + "collectionName": "Items", + "connection": "CosmosDBConnection", + "direction": "in", + "Id": "{id}" + } + ], + "disabled": false +} +``` ++Here's the C# script code: ++```cs +using System.Net; ++public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, TraceWriter log) +{ + log.Info("C# HTTP trigger function processed a request."); ++ if (toDoItem == null) + { + log.Info($"ToDo item not found"); + } + else + { + log.Info($"Found ToDo item, Description={toDoItem.Description}"); + } + return req.CreateResponse(HttpStatusCode.OK); +} +``` ++<a id="http-trigger-get-multiple-docs-using-sqlquery-c-script"></a> ++#### HTTP trigger, get multiple docs, using SqlQuery ++The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The query is specified in the `SqlQuery` attribute property. ++Here's the *function.json* file: ++```json +{ + "bindings": [ + { + "authLevel": "anonymous", + "name": "req", + "type": "httpTrigger", + "direction": "in", + "methods": [ + "get", + "post" + ] + }, + { + "name": "$return", + "type": "http", + "direction": "out" + }, + { + "type": "documentDB", + "name": "toDoItems", + "databaseName": "ToDoItems", + "collectionName": "Items", + "connection": "CosmosDBConnection", + "direction": "in", + "sqlQuery": "SELECT top 2 * FROM c order by c._ts desc" + } + ], + "disabled": false +} +``` ++Here's the C# script code: ++```cs +using System.Net; ++public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<ToDoItem> toDoItems, TraceWriter log) +{ + log.Info("C# HTTP trigger function processed a request."); ++ foreach (ToDoItem toDoItem in toDoItems) + { + log.Info(toDoItem.Description); + } + return req.CreateResponse(HttpStatusCode.OK); +} +``` ++<a id="http-trigger-get-multiple-docs-using-documentclient-c-script"></a> ++#### HTTP trigger, get multiple docs, using DocumentClient ++The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations. 
++Here's the *function.json* file: ++```json +{ + "bindings": [ + { + "authLevel": "anonymous", + "name": "req", + "type": "httpTrigger", + "direction": "in", + "methods": [ + "get", + "post" + ] + }, + { + "name": "$return", + "type": "http", + "direction": "out" + }, + { + "type": "documentDB", + "name": "client", + "databaseName": "ToDoItems", + "collectionName": "Items", + "connection": "CosmosDBConnection", + "direction": "inout" + } + ], + "disabled": false +} +``` ++Here's the C# script code: ++```cs +#r "Microsoft.Azure.Documents.Client" ++using System.Net; +using Microsoft.Azure.Documents.Client; +using Microsoft.Azure.Documents.Linq; ++public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, DocumentClient client, TraceWriter log) +{ + log.Info("C# HTTP trigger function processed a request."); ++ Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items"); + string searchterm = req.GetQueryNameValuePairs() + .FirstOrDefault(q => string.Compare(q.Key, "searchterm", true) == 0) + .Value; ++ if (searchterm == null) + { + return req.CreateResponse(HttpStatusCode.NotFound); + } ++ log.Info($"Searching for word: {searchterm} using Uri: {collectionUri.ToString()}"); + IDocumentQuery<ToDoItem> query = client.CreateDocumentQuery<ToDoItem>(collectionUri) + .Where(p => p.Description.Contains(searchterm)) + .AsDocumentQuery(); ++ while (query.HasMoreResults) + { + foreach (ToDoItem result in await query.ExecuteNextAsync()) + { + log.Info(result.Description); + } + } + return req.CreateResponse(HttpStatusCode.OK); +} +``` ++### Azure Cosmos DB v1 output ++This section contains the following examples: ++* Queue trigger, write one doc +* Queue trigger, write docs using `IAsyncCollector` ++#### Queue trigger, write one doc ++The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format: ++```json +{ + "name": "John Henry", + "employeeId": "123456", + "address": "A town nearby" +} +``` ++The function creates Azure Cosmos DB documents in the following format for each record: ++```json +{ + "id": "John Henry-123456", + "name": "John Henry", + "employeeId": "123456", + "address": "A town nearby" +} +``` ++Here's the binding data in the *function.json* file: ++```json +{ + "name": "employeeDocument", + "type": "documentDB", + "databaseName": "MyDatabase", + "collectionName": "MyCollection", + "createIfNotExists": true, + "connection": "MyAccount_COSMOSDB", + "direction": "out" +} +``` ++Here's the C# script code: ++```cs + #r "Newtonsoft.Json" ++ using Microsoft.Azure.WebJobs.Host; + using Newtonsoft.Json.Linq; ++ public static void Run(string myQueueItem, out object employeeDocument, TraceWriter log) + { + log.Info($"C# Queue trigger function processed: {myQueueItem}"); ++ dynamic employee = JObject.Parse(myQueueItem); ++ employeeDocument = new { + id = employee.name + "-" + employee.employeeId, + name = employee.name, + employeeId = employee.employeeId, + address = employee.address + }; + } +``` ++#### Queue trigger, write docs using IAsyncCollector ++To create multiple documents, you can bind to `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the supported types. 
++This example refers to a simple `ToDoItem` type: ++```cs +namespace CosmosDBSamplesV1 +{ + public class ToDoItem + { + public string Id { get; set; } + public string Description { get; set; } + } +} +``` ++Here's the function.json file: ++```json +{ + "bindings": [ + { + "name": "toDoItemsIn", + "type": "queueTrigger", + "direction": "in", + "queueName": "todoqueueforwritemulti", + "connection": "AzureWebJobsStorage" + }, + { + "type": "documentDB", + "name": "toDoItemsOut", + "databaseName": "ToDoItems", + "collectionName": "Items", + "connection": "CosmosDBConnection", + "direction": "out" + } + ], + "disabled": false +} +``` ++Here's the C# script code: ++```cs +using System; ++public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> toDoItemsOut, TraceWriter log) +{ + log.Info($"C# Queue trigger function processed {toDoItemsIn?.Length} items"); ++ foreach (ToDoItem toDoItem in toDoItemsIn) + { + log.Info($"Description={toDoItem.Description}"); + await toDoItemsOut.AddAsync(toDoItem); + } +} +``` ++### Azure SQL trigger ++More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx). +++The example refers to a `ToDoItem` class and a corresponding database table: ++++[Change tracking](./functions-bindings-azure-sql-trigger.md#set-up-change-tracking-required) is enabled on the database and on the table: ++```sql +ALTER DATABASE [SampleDatabase] +SET CHANGE_TRACKING = ON +(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); ++ALTER TABLE [dbo].[ToDo] +ENABLE CHANGE_TRACKING; +``` ++The SQL trigger binds to an `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects, each with two properties: +- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class. +- **Operation:** a value from the `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`. ++The following example shows a SQL trigger in a function.json file and a [C# script function](functions-reference-csharp.md) that is invoked when there are changes to the `ToDo` table: ++The following is binding data in the function.json file: ++```json +{ + "name": "todoChanges", + "type": "sqlTrigger", + "direction": "in", + "tableName": "dbo.ToDo", + "connectionStringSetting": "SqlConnectionString" +} +``` +The following is the C# script function: ++```csharp +#r "Newtonsoft.Json" ++using System.Net; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Logging; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; ++public static void Run(IReadOnlyList<SqlChange<ToDoItem>> todoChanges, ILogger log) +{ + log.LogInformation($"C# SQL trigger function processed a request."); ++ foreach (SqlChange<ToDoItem> change in todoChanges) + { + ToDoItem toDoItem = change.Item; + log.LogInformation($"Change operation: {change.Operation}"); + log.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}"); + } +} +``` ++### Azure SQL input ++More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
++This section contains the following examples: ++* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-csharpscript) +* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-csharpscript) ++The examples refer to a `ToDoItem` class and a corresponding database table: ++++<a id="http-trigger-look-up-id-from-query-string-csharpscript"></a> +#### HTTP trigger, get row by ID from query string ++The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query. ++> [!NOTE] +> The HTTP query string parameter is case-sensitive. +> ++Here's the binding data in the *function.json* file: ++```json +{ + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req", + "methods": [ + "get" + ] +}, +{ + "type": "http", + "direction": "out", + "name": "res" +}, +{ + "name": "todoItem", + "type": "sql", + "direction": "in", + "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id", + "commandType": "Text", + "parameters": "@Id = {Query.id}", + "connectionStringSetting": "SqlConnectionString" +} +``` ++Here's the C# script code: ++```cs +#r "Newtonsoft.Json" ++using System.Net; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; +using System.Collections.Generic; ++public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItem) +{ + return new OkObjectResult(todoItem); +} +``` +++<a id="http-trigger-delete-one-or-multiple-rows-csharpscript"></a> +#### HTTP trigger, delete rows ++The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding to execute a stored procedure with input from the HTTP request query parameter. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter. ++The stored procedure `dbo.DeleteToDo` must be created on the SQL database. +++Here's the binding data in the *function.json* file: ++```json +{ + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req", + "methods": [ + "get" + ] +}, +{ + "type": "http", + "direction": "out", + "name": "res" +}, +{ + "name": "todoItems", + "type": "sql", + "direction": "in", + "commandText": "DeleteToDo", + "commandType": "StoredProcedure", + "parameters": "@Id = {Query.id}", + "connectionStringSetting": "SqlConnectionString" +} +``` +++Here's the C# script code: ++```cs +#r "Newtonsoft.Json" ++using System.Net; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; +using System.Collections.Generic; ++public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItems) +{ + return new OkObjectResult(todoItems); +} +``` ++### Azure SQL output ++More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx). 
++This section contains the following examples: ++* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-csharpscript) +* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-csharpscript) ++The examples refer to a `ToDoItem` class and a corresponding database table: +++++<a id="http-trigger-write-records-to-table-csharpscript"></a> +#### HTTP trigger, write records to a table ++The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a table, using data provided in an HTTP POST request as a JSON body. ++The following is binding data in the function.json file: ++```json +{ + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req", + "methods": [ + "post" + ] +}, +{ + "type": "http", + "direction": "out", + "name": "res" +}, +{ + "name": "todoItem", + "type": "sql", + "direction": "out", + "commandText": "dbo.ToDo", + "connectionStringSetting": "SqlConnectionString" +} +``` ++The following is sample C# script code: ++```cs +#r "Newtonsoft.Json" ++using System.Net; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; ++public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem) +{ + log.LogInformation("C# HTTP trigger function processed a request."); ++ string requestBody = new StreamReader(req.Body).ReadToEnd(); + todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); ++ return new OkObjectResult(todoItem); +} +``` ++<a id="http-trigger-write-to-two-tables-csharpscript"></a> +#### HTTP trigger, write to two tables ++The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings. 
++The second table, `dbo.RequestLog`, corresponds to the following definition: ++```sql +CREATE TABLE dbo.RequestLog ( + Id int identity(1,1) primary key, + RequestTimeStamp datetime2 not null, + ItemCount int not null +) +``` ++The following is binding data in the function.json file: ++```json +{ + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req", + "methods": [ + "post" + ] +}, +{ + "type": "http", + "direction": "out", + "name": "res" +}, +{ + "name": "todoItem", + "type": "sql", + "direction": "out", + "commandText": "dbo.ToDo", + "connectionStringSetting": "SqlConnectionString" +}, +{ + "name": "requestLog", + "type": "sql", + "direction": "out", + "commandText": "dbo.RequestLog", + "connectionStringSetting": "SqlConnectionString" +} +``` ++The following is sample C# script code: ++```cs +#r "Newtonsoft.Json" ++using System.Net; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; ++public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem, out RequestLog requestLog) +{ + log.LogInformation("C# HTTP trigger function processed a request."); ++ string requestBody = new StreamReader(req.Body).ReadToEnd(); + todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); ++ requestLog = new RequestLog(); + requestLog.RequestTimeStamp = DateTime.Now; + requestLog.ItemCount = 1; ++ return new OkObjectResult(todoItem); +} ++public class RequestLog { + public DateTime RequestTimeStamp { get; set; } + public int ItemCount { get; set; } +} +``` ++### RabbitMQ output ++The following example shows a RabbitMQ output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads in the message from an HTTP trigger and outputs it to the RabbitMQ queue. ++Here's the binding data in the *function.json* file: ++```json +{ + "bindings": [ + { + "type": "httpTrigger", + "direction": "in", + "authLevel": "function", + "name": "input", + "methods": [ + "get", + "post" + ] + }, + { + "type": "rabbitMQ", + "name": "outputMessage", + "queueName": "outputQueue", + "connectionStringSetting": "rabbitMQConnectionAppSetting", + "direction": "out" + } + ] +} +``` ++Here's the C# script code: ++```C# +using System; +using Microsoft.Extensions.Logging; ++public static void Run(string input, out string outputMessage, ILogger log) +{ + log.LogInformation(input); + outputMessage = input; +} +``` +### SendGrid output ++The following example shows a SendGrid output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. 
++Here's the binding data in the *function.json* file: ++```json +{ + "bindings": [ + { + "type": "queueTrigger", + "name": "mymsg", + "queueName": "myqueue", + "connection": "AzureWebJobsStorage", + "direction": "in" + }, + { + "type": "sendGrid", + "name": "$return", + "direction": "out", + "apiKey": "SendGridAPIKeyAsAppSetting", + "from": "{FromEmail}", + "to": "{ToEmail}" + } + ] +} +``` ++Here's the C# script code: ++```csharp +#r "SendGrid" ++using System; +using SendGrid.Helpers.Mail; +using Microsoft.Azure.WebJobs.Host; +using Microsoft.Extensions.Logging; ++public static SendGridMessage Run(Message mymsg, ILogger log) +{ + SendGridMessage message = new SendGridMessage() + { + Subject = $"{mymsg.Subject}" + }; + + message.AddContent("text/plain", $"{mymsg.Content}"); ++ return message; +} +public class Message +{ + public string ToEmail { get; set; } + public string FromEmail { get; set; } + public string Subject { get; set; } + public string Content { get; set; } +} +``` ++### SignalR trigger ++Here's example binding data in the *function.json* file: ++```json +{ + "type": "signalRTrigger", + "name": "invocation", + "hubName": "SignalRTest", + "category": "messages", + "event": "SendMessage", + "parameterNames": [ + "message" + ], + "direction": "in" +} +``` ++And here's the code: ++```cs +#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" +using System; +using Microsoft.Azure.WebJobs.Extensions.SignalRService; +using Microsoft.Extensions.Logging; ++public static void Run(InvocationContext invocation, string message, ILogger logger) +{ + logger.LogInformation($"Received {message} from {invocation.ConnectionId}."); +} +``` ++### SignalR input ++The following example shows a SignalR connection info input binding in a *function.json* file and a [C# Script function](functions-reference-csharp.md) that uses the binding to return the connection information. ++Here's binding data in the *function.json* file: ++Example function.json: ++```json +{ + "type": "signalRConnectionInfo", + "name": "connectionInfo", + "hubName": "chat", + "connectionStringSetting": "<name of setting containing SignalR Service connection string>", + "direction": "in" +} +``` ++Here's the C# Script code: ++```cs +#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" +using Microsoft.Azure.WebJobs.Extensions.SignalRService; ++public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo connectionInfo) +{ + return connectionInfo; +} +``` ++You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-signalr-service-input.md#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
++Example function.json: ++```json +{ + "type": "signalRConnectionInfo", + "name": "connectionInfo", + "hubName": "chat", + "userId": "{headers.x-ms-client-principal-id}", + "connectionStringSetting": "<name of setting containing SignalR Service connection string>", + "direction": "in" +} +``` ++Here's the C# Script code: ++```cs +#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" +using Microsoft.Azure.WebJobs.Extensions.SignalRService; ++public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo connectionInfo) +{ + // connectionInfo contains an access key token with a name identifier + // claim set to the authenticated user + return connectionInfo; +} +``` ++### SignalR output ++Here's binding data in the *function.json* file: ++Example function.json: ++```json +{ + "type": "signalR", + "name": "signalRMessages", + "hubName": "<hub_name>", + "connectionStringSetting": "<name of setting containing SignalR Service connection string>", + "direction": "out" +} +``` ++Here's the C# Script code: ++```cs +#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" +using Microsoft.Azure.WebJobs.Extensions.SignalRService; ++public static Task Run( + object message, + IAsyncCollector<SignalRMessage> signalRMessages) +{ + return signalRMessages.AddAsync( + new SignalRMessage + { + Target = "newMessage", + Arguments = new [] { message } + }); +} +``` ++You can send a message only to connections that have been authenticated to a user by setting the *user ID* in the SignalR message. ++Example function.json: ++```json +{ + "type": "signalR", + "name": "signalRMessages", + "hubName": "<hub_name>", + "connectionStringSetting": "<name of setting containing SignalR Service connection string>", + "direction": "out" +} +``` ++Here's the C# script code: ++```cs +#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" +using Microsoft.Azure.WebJobs.Extensions.SignalRService; ++public static Task Run( + object message, + IAsyncCollector<SignalRMessage> signalRMessages) +{ + return signalRMessages.AddAsync( + new SignalRMessage + { + // the message will only be sent to this user ID + UserId = "userId1", + Target = "newMessage", + Arguments = new [] { message } + }); +} +``` ++You can send a message only to connections that have been added to a group by setting the *group name* in the SignalR message. ++Example function.json: ++```json +{ + "type": "signalR", + "name": "signalRMessages", + "hubName": "<hub_name>", + "connectionStringSetting": "<name of setting containing SignalR Service connection string>", + "direction": "out" +} +``` ++Here's the C# Script code: ++```cs +#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" +using Microsoft.Azure.WebJobs.Extensions.SignalRService; ++public static Task Run( + object message, + IAsyncCollector<SignalRMessage> signalRMessages) +{ + return signalRMessages.AddAsync( + new SignalRMessage + { + // the message will be sent to the group with this name + GroupName = "myGroup", + Target = "newMessage", + Arguments = new [] { message } + }); +} +``` ++SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage groups. ++The following example adds a user to a group. 
++Example *function.json* ++```json +{ + "type": "signalR", + "name": "signalRGroupActions", + "connectionStringSetting": "<name of setting containing SignalR Service connection string>", + "hubName": "chat", + "direction": "out" +} +``` ++*Run.csx* ++```cs +#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" +using Microsoft.Azure.WebJobs.Extensions.SignalRService; ++public static Task Run( + HttpRequest req, + ClaimsPrincipal claimsPrincipal, + IAsyncCollector<SignalRGroupAction> signalRGroupActions) +{ + var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier); + return signalRGroupActions.AddAsync( + new SignalRGroupAction + { + UserId = userIdClaim.Value, + GroupName = "myGroup", + Action = GroupAction.Add + }); +} +``` ++The following example removes a user from a group. ++Example *function.json* ++```json +{ + "type": "signalR", + "name": "signalRGroupActions", + "connectionStringSetting": "<name of setting containing SignalR Service connection string>", + "hubName": "chat", + "direction": "out" +} +``` ++*Run.csx* ++```cs +#r "Microsoft.Azure.WebJobs.Extensions.SignalRService" +using Microsoft.Azure.WebJobs.Extensions.SignalRService; ++public static Task Run( + HttpRequest req, + ClaimsPrincipal claimsPrincipal, + IAsyncCollector<SignalRGroupAction> signalRGroupActions) +{ + var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier); + return signalRGroupActions.AddAsync( + new SignalRGroupAction + { + UserId = userIdClaim.Value, + GroupName = "myGroup", + Action = GroupAction.Remove + }); +} +``` ++### Twilio output ++The following example shows a Twilio output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses an `out` parameter to send a text message. ++Here's binding data in the *function.json* file: ++Example function.json: ++```json +{ + "type": "twilioSms", + "name": "message", + "accountSidSetting": "TwilioAccountSid", + "authTokenSetting": "TwilioAuthToken", + "from": "+1425XXXXXXX", + "direction": "out", + "body": "Azure Functions Testing" +} +``` ++Here's C# script code: ++```cs +#r "Newtonsoft.Json" +#r "Twilio" +#r "Microsoft.Azure.WebJobs.Extensions.Twilio" ++using System; +using Microsoft.Extensions.Logging; +using Newtonsoft.Json; +using Microsoft.Azure.WebJobs.Extensions.Twilio; +using Twilio.Rest.Api.V2010.Account; +using Twilio.Types; ++public static void Run(string myQueueItem, out CreateMessageOptions message, ILogger log) +{ + log.LogInformation($"C# Queue trigger function processed: {myQueueItem}"); ++ // In this example the queue item is a JSON string representing an order that contains the name of a + // customer and a mobile number to send text updates to. + dynamic order = JsonConvert.DeserializeObject(myQueueItem); + string msg = "Hello " + order.name + ", thank you for your order."; ++ // You must initialize the CreateMessageOptions variable with the "To" phone number. + message = new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX")); ++ // A dynamic message can be set instead of the body in the output binding. In this example, we use + // the order information to personalize a text message. + message.Body = msg; +} +``` ++You can't use out parameters in asynchronous code. 
Here's an asynchronous C# script code example: ++```cs +#r "Newtonsoft.Json" +#r "Twilio" +#r "Microsoft.Azure.WebJobs.Extensions.Twilio" ++using System; +using Microsoft.Extensions.Logging; +using Newtonsoft.Json; +using Microsoft.Azure.WebJobs.Extensions.Twilio; +using Twilio.Rest.Api.V2010.Account; +using Twilio.Types; ++public static async Task Run(string myQueueItem, IAsyncCollector<CreateMessageOptions> message, ILogger log) +{ + log.LogInformation($"C# Queue trigger function processed: {myQueueItem}"); ++ // In this example the queue item is a JSON string representing an order that contains the name of a + // customer and a mobile number to send text updates to. + dynamic order = JsonConvert.DeserializeObject(myQueueItem); + string msg = "Hello " + order.name + ", thank you for your order."; ++ // You must initialize the CreateMessageOptions variable with the "To" phone number. + CreateMessageOptions smsText = new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX")); ++ // A dynamic message can be set instead of the body in the output binding. In this example, we use + // the order information to personalize a text message. + smsText.Body = msg; ++ await message.AddAsync(smsText); +} +``` ++### Warmup trigger ++The following example shows a warmup trigger in a *function.json* file and a [C# script function](functions-reference-csharp.md) that runs on each new instance when it's added to your app. ++Not supported for version 1.x of the Functions runtime. ++Here's the *function.json* file: ++```json +{ + "bindings": [ + { + "type": "warmupTrigger", + "direction": "in", + "name": "warmupContext" + } + ] +} +``` ++```cs +public static void Run(WarmupContext warmupContext, ILogger log) +{ + log.LogInformation("Function App instance is warm."); +} +``` ++ ## Next steps > [!div class="nextstepaction"] |
azure-functions | Functions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md | description: Learn the Azure Functions concepts and techniques that you need to ms.assetid: d8efe41a-bef8-4167-ba97-f3e016fcd39e Last updated 09/06/2023-+ zone_pivot_groups: programming-languages-set-functions You need to create a role assignment that provides access to Azure SignalR Servi An identity-based connection for an Azure service accepts the following common properties, where `<CONNECTION_NAME_PREFIX>` is the value of your `connection` property in the trigger or binding definition: | Property | Environment variable template | Description |-||||| +|||| | Token Credential | `<CONNECTION_NAME_PREFIX>__credential` | Defines how a token should be obtained for the connection. This setting should be set to `managedidentity` if your deployed Azure Function intends to use managed identity authentication. This value is only valid when a managed identity is available in the hosting environment. | | Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to `managedidentity`, this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. It's invalid to specify both a Resource ID and a client ID. If not specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set. | | Resource ID | `<CONNECTION_NAME_PREFIX>__managedIdentityResourceId` | When `credential` is set to `managedidentity`, this property can be set to specify the resource Identifier to be used when obtaining a token. The property accepts a resource identifier corresponding to the resource ID of the user-defined managed identity. It's invalid to specify both a resource ID and a client ID. If neither are specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set. |
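As a concrete (hypothetical) illustration of the `<CONNECTION_NAME_PREFIX>__` naming convention in the table above: for a binding whose `connection` property is set to `MyServiceConnection`, the identity-based connection settings would surface to the worker process as environment variables like the following sketch; the prefix and values are invented for illustration, and in Azure these would be app settings rather than values set in code.

```csharp
using System;

class IdentityConnectionSettingsSketch
{
    static void Main()
    {
        // Hypothetical settings for a connection prefix of "MyServiceConnection".
        // The "__" separator maps each setting to a nested configuration property.
        Environment.SetEnvironmentVariable("MyServiceConnection__credential", "managedidentity");
        Environment.SetEnvironmentVariable("MyServiceConnection__clientId", "<client-id-of-user-assigned-identity>");

        // The Functions host resolves these values when it obtains a token:
        Console.WriteLine(Environment.GetEnvironmentVariable("MyServiceConnection__credential"));
        Console.WriteLine(Environment.GetEnvironmentVariable("MyServiceConnection__clientId"));
    }
}
```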
azure-functions | Migrate Cosmos Db Version 3 Version 4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md | This article walks you through the process of migrating your function app to run Update your `.csproj` project file to use the latest extension version for your process model. The following `.csproj` file uses version 4 of the Azure Cosmos DB extension. -### [In-process](#tab/in-process) +### [Isolated worker model](#tab/isolated-process) ```xml <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net7.0</TargetFramework> <AzureFunctionsVersion>v4</AzureFunctionsVersion>+ <OutputType>Exe</OutputType> </PropertyGroup> <ItemGroup>- <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="4.3.0" /> - <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.4.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" /> </ItemGroup> <ItemGroup> <None Update="host.json"> Update your `.csproj` project file to use the latest extension version for your </Project> ``` -### [Isolated process](#tab/isolated-process) +### [In-process model](#tab/in-process) ```xml <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net7.0</TargetFramework> <AzureFunctionsVersion>v4</AzureFunctionsVersion>- <OutputType>Exe</OutputType> </PropertyGroup> <ItemGroup>- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" /> - <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.4.1" /> - <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" /> + <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="4.3.0" /> + <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" /> </ItemGroup> <ItemGroup> <None Update="host.json"> |
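The `.csproj` updates above apply to .NET projects that reference the extension packages directly. For apps that use extension bundles instead (common for non-.NET languages), the equivalent change is typically a *host.json* bundle-range update; the following is a sketch, under the assumption that the version 4 bundle range carries the version 4 Azure Cosmos DB extension:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.0.0, 5.0.0)"
  }
}
```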
azure-functions | Recover Python Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md | zone_pivot_groups: python-mode-functions # Troubleshoot Python errors in Azure Functions -This article provides information to help you troubleshoot errors with your Python functions in Azure Functions. This article supports both the v1 and v2 programming models. Choose the model you want to use from the selector at the top of the article. The v2 model is currently in preview. For more information on Python programming models, see the [Python developer guide](./functions-reference-python.md). +This article provides information to help you troubleshoot errors with your Python functions in Azure Functions. This article supports both the v1 and v2 programming models. Choose the model you want to use from the selector at the top of the article. > [!NOTE] > The Python v2 programming model is only supported in the 4.x functions runtime. For more information, see [Azure Functions runtime versions overview](./functions-versions.md). |
azure-functions | Supported Languages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/supported-languages.md | Title: Supported languages in Azure Functions description: Learn which languages are supported for developing your Functions in Azure, the support level of the various language versions, and potential end-of-life dates. + Last updated 08/27/2023 zone_pivot_groups: programming-languages-set-functions Starting with version 2.x, the runtime is designed to offer [language extensibil ## Next steps ::: zone pivot="programming-language-csharp" -### [Isolated process](#tab/isolated-process) +### [Isolated worker model](#tab/isolated-process) > [!div class="nextstepaction"] > [.NET isolated worker process reference](dotnet-isolated-process-guide.md). -### [In-process](#tab/in-process) +### [In-process model](#tab/in-process) > [!div class="nextstepaction"] > [In-process C# developer reference](functions-dotnet-class-library.md) |
azure-maps | Azure Maps Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md | After the application receives a SAS token, the Azure Maps SDK and/or applicatio ## Cross origin resource sharing (CORS) +[CORS] is an HTTP protocol that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as [same-origin policy] that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. Using the Azure Maps account resource, you can configure which origins are allowed to access the Azure Maps REST API from your applications. -Cross Origin Resource Sharing (CORS) is in preview. +> [!IMPORTANT] +> CORS is not an authorization mechanism. Any request made to a map account using REST API, when CORS is enabled, also needs a valid map account authentication scheme such as Shared Key, Azure AD, or SAS token. +> +> CORS is supported for all map account pricing tiers, data-plane endpoints, and locations. ### Prerequisites To prevent malicious code execution on the client, modern browsers block request - If you're unfamiliar with CORS, see [Cross-origin resource sharing (CORS)]; it lets an `Access-Control-Allow-Origin` header declare which origins are allowed to call endpoints of an Azure Maps account. The CORS protocol isn't specific to Azure Maps. ### Account CORS -[CORS] is an HTTP protocol that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as [same-origin policy] that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. Using the Azure Maps account resource, you can configure which origins are allowed to access the Azure Maps REST API from your applications. --> [!IMPORTANT] -> CORS is not an authorization mechanism. Any request made to a map account using REST API, when CORS is enabled, also needs a valid map account authentication scheme such as Shared Key, Azure AD, or SAS token. -> -> CORS is supported for all map account pricing tiers, data-plane endpoints, and locations. - ### CORS requests A CORS request from an origin domain may consist of two separate requests: |
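As a sketch of what that account-level configuration can look like, CORS rules live on the Maps account resource itself. The property shape below (a `cors` block with `corsRules` and `allowedOrigins`) reflects the account management API as we understand it, and the origin value is a placeholder:

```json
{
  "properties": {
    "cors": {
      "corsRules": [
        {
          "allowedOrigins": [
            "https://www.contoso.com"
          ]
        }
      ]
    }
  }
}
```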
azure-maps | How To Manage Pricing Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-pricing-tier.md | To change your pricing tier from Gen1 to Gen2 in the Azure Portal, navigate to t To change your pricing tier from Gen1 to Gen2 in the ARM template, update `pricingTier` to **G2** and `kind` to **Gen2**. For more info on using ARM templates, see [Create account with ARM template]. +<! + :::image type="content" source="./media/how-to-manage-pricing-tier/arm-template.png" border="true" alt-text="Screenshot of an ARM template that demonstrates updating pricingTier to G2 and kind to Gen2."::: -<! ```json "pricingTier": { "type": "string", To change your pricing tier from Gen1 to Gen2 in the ARM template, update `prici } } ```- :::code language="json" source="~/quickstart-templates/quickstarts/microsoft.maps/maps-create/azuredeploy.json" range="27-46"::: > + ## Next steps Learn how to see the API usage metrics for your Azure Maps account: |
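To illustrate the `pricingTier` and `kind` parameters quoted above, a matching parameter file for such a template would look like this sketch (parameter names follow the quoted template; everything else is standard parameter-file boilerplate):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "pricingTier": {
      "value": "G2"
    },
    "kind": {
      "value": "Gen2"
    }
  }
}
```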
azure-maps | How To Render Custom Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md | -This article describes how to use the [static image service] with image composition functionality. Image composition functionality supports the retrieval of static raster tile that contains custom data. +This article describes how to use the [Get Map Static Image] command with image composition functionality. Image composition functionality supports the retrieval of static raster tiles that contain custom data. The following are examples of custom data: The following are examples of custom data: - Geometry overlays > [!TIP]-> To show a simple map on a web page, it's often more cost effective to use the Azure Maps Web SDK, rather than to use the static image service. The web SDK uses map tiles; and unless the user pans and zooms the map, they will often generate only a fraction of a transaction per map load. The Azure Maps web SDK has options for disabling panning and zooming. Also, the Azure Maps web SDK provides a richer set of data visualization options than a static map web service does. +> To show a simple map on a web page, it's often more cost effective to use the Azure Maps Web SDK, rather than to use the static image service. The web SDK uses map tiles; and unless the user pans and zooms the map, they will often generate only a fraction of a transaction per map load. The Azure Maps Web SDK has options for disabling panning and zooming. Also, the Azure Maps Web SDK provides a richer set of data visualization options than a static map web service does. ## Prerequisites This article uses the [Postman] application, but you may use a different API dev > [!NOTE] > The procedure in this section requires an Azure Maps account in the Gen1 or Gen2 pricing tier.-The Azure Maps account Gen1 Standard S0 tier supports only a single instance of the `pins` parameter. It allows you to render up to five pushpins, specified in the URL request, with a custom image. +The Azure Maps account Gen1 S0 pricing tier only supports a single instance of the [pins] parameter. It allows you to render up to five pushpins, specified in the URL request, with a custom image. > > **Azure Maps Gen1 pricing tier retirement** > To get a static image with custom pins and labels: 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *GET Static Image*. +3. Enter a **Request name** for the request, such as *Get Map Static Image*. 4. Select the **GET** HTTP method. 5. Enter the following URL: ```HTTP- https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.98,%2040.77&pins=custom%7Cla15+50%7Cls12%7Clc003b61%7C%7C%27CentralPark%27-73.9657974+40.781971%7C%7Chttps%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FAzureMapsCodeSamples%2Fmaster%2FAzureMapsCodeSamples%2FCommon%2Fimages%2Ficons%2Fylw-pushpin.png + https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&layer=basic&style=main&zoom=12&center=-73.98,%2040.77&pins=custom%7Cla15+50%7Cls12%7Clc003b61%7C%7C%27CentralPark%27-73.9657974+40.781971%7C%7Chttps%3A%2F%2Fsamples.azuremaps.com%2Fimages%2Ficons%2Fylw-pushpin.png ``` 6. Select **Send**. To get a static image with custom pins and labels: > [!NOTE] > The procedure in this section requires an Azure Maps account Gen1 (S1) or Gen2 pricing tier. 
-You can modify the appearance of a polygon by using style modifiers with the [path parameter]. +You can modify the appearance of a polygon by using style modifiers with the [path] parameter. To render a polygon with color and opacity: To render a polygon with color and opacity: 4. Select the **GET** HTTP method. -5. Enter the following URL to the [Render service]: +5. Enter the following URL to the [Render] service: ```HTTP https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&style=main&layer=basic&sku=S1&zoom=14&height=500&Width=500&center=-74.040701, 40.698666&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.50||-74.03995513916016 40.70090237454063|-74.04082417488098 40.70028420372218|-74.04113531112671 40.70049568385827|-74.04298067092896 40.69899904076542|-74.04271245002747 40.69879568992435|-74.04367804527283 40.6980961582905|-74.04364585876465 40.698055487620714|-74.04368877410889 40.698022951066996|-74.04168248176573 40.696444909137|-74.03901100158691 40.69837271818651|-74.03824925422668 40.69837271818651|-74.03809905052185 40.69903971085914|-74.03771281242369 40.699340668780984|-74.03940796852112 40.70058515602143|-74.03948307037354 40.70052821920425|-74.03995513916016 40.70090237454063 To render a polygon with color and opacity: > [!NOTE] > The procedure in this section requires an Azure Maps account Gen1 (S1) or Gen2 pricing tier. -You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 makes the pins larger, and values smaller than 1 makes them smaller. For more information about style modifiers, see [static image service path parameters]. +You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 make the pins larger, and values smaller than 1 make them smaller. For more information about style modifiers, see the [Path] parameter of the [Get Map Static Image] command. To render a circle and pushpins with custom labels: To render a circle and pushpins with custom labels: 4. Select the **GET** HTTP method. -5. Enter the following URL to the [Render service]: +5. Enter the following URL to the [Render] service: ```HTTP https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key} Similarly, you can change, add, and remove other style modifiers. 
## Next steps > [!div class="nextstepaction"]-> [Render - Get Map Image] +> [Render - Get Map Static Image] [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account-[Render - Get Map Image]: /rest/api/maps/render/getmapimage -[path parameter]: /rest/api/maps/render/getmapimage#uri-parameters [Postman]: https://www.postman.com/-[Render service]: /rest/api/maps/render/get-map-image -[static image service path parameters]: /rest/api/maps/render/getmapimage#uri-parameters -[static image service]: /rest/api/maps/render/getmapimage [Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account++[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image +[Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md +[path]: /rest/api/maps/render-v2/get-map-static-image#uri-parameters +[pins]: /rest/api/maps/render-v2/get-map-static-image#uri-parameters +[Render]: /rest/api/maps/render-v2/get-map-static-image +[Render - Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image |
azure-monitor | Availability Test Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-test-migration.md | Title: Migrate from Azure Monitor Application Insights classic URL ping tests to description: How to migrate from Azure Monitor Application Insights classic availability URL ping tests to standard tests. Previously updated : 07/19/2023 Last updated : 09/27/2023 # Migrate availability tests -In this article, we guide you through the process of migrating from [classic URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) to the modern and efficient [standard tests](availability-standard-tests.md) . +In this article, we guide you through the process of migrating from [classic URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) to the modern and efficient [standard tests](availability-standard-tests.md). We simplify this process by providing clear step-by-step instructions to ensure a seamless transition and equip your applications with the most up-to-date monitoring capabilities. We simplify this process by providing clear step-by-step instructions to ensure The following steps walk you through the process of creating [standard tests](availability-standard-tests.md) that replicate the functionality of your [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). It allows you to more easily start using the advanced features of [standard tests](availability-standard-tests.md) using your previously created [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). -> [!NOTE] -> A cost is associated with running [standard tests](availability-standard-tests.md). Once you create a [standard test](availability-standard-tests.md), you will be charged for test executions. -> Refer to [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing) before starting this process. +> [!IMPORTANT] +> +> On 30 September 2026, the **[URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability)** will be retired. Before that date, you'll need to transition to **[standard tests](availability-standard-tests.md)**. +> +> - A cost is associated with running **[standard tests](availability-standard-tests.md)**. Once you create a **[standard test](availability-standard-tests.md)**, you will be charged for test executions. +> - Refer to **[Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing)** before starting this process. ### Prerequisites The following steps walk you through the process of creating [standard tests](av We recommend using these commands to migrate a URL ping test to a standard test and take advantage of the available capabilities. Remember, this migration is optional. - #### Do these steps work for both HTTP and HTTPS endpoints? Yes, these commands work for both HTTP and HTTPS endpoints, which are used in your URL ping tests. Yes, these commands work for both HTTP and HTTPS endpoints, which are used in yo * [Availability alerts](availability-alerts.md) * [Troubleshooting](troubleshoot-availability.md) * [Web tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)-* [Web test REST API](/rest/api/application-insights/web-tests) +* [Web test REST API](/rest/api/application-insights/web-tests) |
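For orientation, a standard test is a `Microsoft.Insights/webtests` resource with its kind set to `standard`. The trimmed sketch below illustrates the shape only; property names follow the Web tests ARM template reference linked above, all values (names, location ID, URL) are placeholders, and details such as the hidden-link tag associating the test with an Application Insights resource are omitted:

```json
{
  "type": "Microsoft.Insights/webtests",
  "apiVersion": "2022-06-15",
  "name": "my-standard-test",
  "location": "eastus",
  "kind": "standard",
  "properties": {
    "SyntheticMonitorId": "my-standard-test",
    "Name": "my-standard-test",
    "Enabled": true,
    "Frequency": 300,
    "Timeout": 120,
    "Kind": "standard",
    "Locations": [ { "Id": "us-va-ash-azr" } ],
    "Request": {
      "RequestUrl": "https://www.contoso.com",
      "HttpVerb": "GET",
      "ParseDependentRequests": false
    },
    "ValidationRules": {
      "ExpectedHttpStatusCode": 200,
      "SSLCheck": true
    }
  }
}
```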
azure-monitor | Opentelemetry Add Modify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md | You can't extend the Java Distro with community instrumentation libraries. To re Other OpenTelemetry Instrumentations are available [here](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and could be added using TraceHandler in ApplicationInsightsClient. ```javascript+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { metrics, trace, ProxyTracerProvider } = require("@opentelemetry/api");++ // Import the OpenTelemetry instrumentation registration function and Express instrumentation const { registerInstrumentations } = require( "@opentelemetry/instrumentation"); const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express'); - useAzureMonitor(); + // Get the OpenTelemetry tracer provider and meter provider const tracerProvider = (trace.getTracerProvider() as ProxyTracerProvider).getDelegate(); const meterProvider = metrics.getMeterProvider();++ // Enable Azure Monitor integration + useAzureMonitor(); + + // Register the Express instrumentation registerInstrumentations({- instrumentations: [ - new ExpressInstrumentation(), - ], - tracerProvider: tracerProvider, - meterProvider: meterProvider + // List of instrumentations to register + instrumentations: [ + new ExpressInstrumentation(), // Express instrumentation + ], + // OpenTelemetry tracer provider + tracerProvider: tracerProvider, + // OpenTelemetry meter provider + meterProvider: meterProvider });-``` + ``` ### [Python](#tab/python) public class Program { #### [Node.js](#tab/nodejs) ```javascript+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { metrics } = require("@opentelemetry/api"); + // Enable Azure Monitor integration useAzureMonitor();++ // Get the meter for the "testMeter" namespace const meter = metrics.getMeter("testMeter");++ // Create a histogram metric let histogram = meter.createHistogram("histogram");++ // Record values to the histogram metric with different tags histogram.record(1, { "testKey": "testValue" }); histogram.record(30, { "testKey": "testValue2" }); histogram.record(100, { "testKey2": "testValue" }); public class Program { #### [Node.js](#tab/nodejs) ```javascript+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { metrics } = require("@opentelemetry/api"); + // Enable Azure Monitor integration useAzureMonitor();++ // Get the meter for the "testMeter" namespace const meter = metrics.getMeter("testMeter");++ // Create a counter metric let counter = meter.createCounter("counter");++ // Add values to the counter metric with different tags counter.add(1, { "testKey": "testValue" }); counter.add(5, { "testKey2": "testValue" }); counter.add(3, { "testKey": "testValue2" }); public class Program { #### [Node.js](#tab/nodejs) ```typescript+ // Import the useAzureMonitor function and the metrics module from the @azure/monitor-opentelemetry and @opentelemetry/api packages, respectively. const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { metrics } = require("@opentelemetry/api"); + // Enable Azure Monitor integration. 
useAzureMonitor();- const meter = metrics.getMeter("testMeter"); ++ // Get the meter for the "testMeter" meter name. + const meter = metrics.getMeter("testMeter"); ++ // Create an observable gauge metric with the name "gauge". let gauge = meter.createObservableGauge("gauge");++ // Add a callback to the gauge metric. The callback will be invoked periodically to generate a new value for the gauge metric. gauge.addCallback((observableResult: ObservableResult) => {- let randomNumber = Math.floor(Math.random() * 100); - observableResult.observe(randomNumber, {"testKey": "testValue"}); + // Generate a random number between 0 and 99. + let randomNumber = Math.floor(Math.random() * 100); ++ // Set the value of the gauge metric to the random number. + observableResult.observe(randomNumber, {"testKey": "testValue"}); }); ``` You can use `opentelemetry-api` to update the status of a span and record except #### [Node.js](#tab/nodejs) ```javascript+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { trace } = require("@opentelemetry/api"); + // Enable Azure Monitor integration useAzureMonitor();++ // Get the tracer for the "testTracer" namespace const tracer = trace.getTracer("testTracer");++ // Start a span with the name "hello" let span = tracer.startSpan("hello");++ // Try to throw an error try{- throw new Error("Test Error"); + throw new Error("Test Error"); }++ // Catch the error and record it to the span catch(error){- span.recordException(error); + span.recordException(error); } ``` you can add your spans by using the OpenTelemetry API. #### [Node.js](#tab/nodejs) ```javascript+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { trace } = require("@opentelemetry/api"); + // Enable Azure Monitor integration useAzureMonitor();++ // Get the tracer for the "testTracer" namespace const tracer = trace.getTracer("testTracer");++ // Start a span with the name "hello" let span = tracer.startSpan("hello");++ // End the span span.end(); ``` - #### [Python](#tab/python) The OpenTelemetry API can be used to add your own spans, which appear in the `requests` and `dependencies` tables in Application Insights. If you want to add custom events or access the Application Insights API, replace You need to use the `applicationinsights` v3 Beta package to send custom telemetry using the Application Insights classic API. (https://www.npmjs.com/package/applicationinsights/v/beta) ```javascript+ // Import the TelemetryClient class from the Application Insights SDK for JavaScript. const { TelemetryClient } = require("applicationinsights"); + // Create a new TelemetryClient instance. const telemetryClient = new TelemetryClient(); ``` Then use the `TelemetryClient` to send custom telemetry: ##### Events ```javascript+ // Create an event telemetry object. let eventTelemetry = {- name: "testEvent" + name: "testEvent" };++ // Send the event telemetry object to Azure Monitor Application Insights. telemetryClient.trackEvent(eventTelemetry); ``` ##### Logs ```javascript+ // Create a trace telemetry object. let traceTelemetry = {- message: "testMessage", - severity: "Information" + message: "testMessage", + severity: "Information" };++ // Send the trace telemetry object to Azure Monitor Application Insights. telemetryClient.trackTrace(traceTelemetry); ``` ##### Exceptions ```javascript+ // Try to execute a block of code. try {- ... 
- } catch (error) { - let exceptionTelemetry = { - exception: error, - severity: "Critical" - }; - telemetryClient.trackException(exceptionTelemetry); + ... }++ // If an error occurs, catch it and send it to Azure Monitor Application Insights as an exception telemetry item. + catch (error) { + let exceptionTelemetry = { + exception: error, + severity: "Critical" + }; + telemetryClient.trackException(exceptionTelemetry); +} ``` #### [Python](#tab/python) Adding one or more span attributes populates the `customDimensions` field in the ##### [Node.js](#tab/nodejs) ```typescript- const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); - const { trace, ProxyTracerProvider } = require("@opentelemetry/api"); - const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base"); - const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node"); - const { SemanticAttributes } = require("@opentelemetry/semantic-conventions"); +// Import the necessary packages. +const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); +const { trace, ProxyTracerProvider } = require("@opentelemetry/api"); +const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base"); +const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node"); +const { SemanticAttributes } = require("@opentelemetry/semantic-conventions"); ++// Enable Azure Monitor integration. +useAzureMonitor(); ++// Get the NodeTracerProvider instance. +const tracerProvider = ((trace.getTracerProvider() as ProxyTracerProvider).getDelegate() as NodeTracerProvider); ++// Create a new SpanEnrichingProcessor class. +class SpanEnrichingProcessor implements SpanProcessor { + forceFlush(): Promise<void> { + return Promise.resolve(); + } - useAzureMonitor(); - const tracerProvider = ((trace.getTracerProvider() as ProxyTracerProvider).getDelegate() as NodeTracerProvider); + shutdown(): Promise<void> { + return Promise.resolve(); + } - class SpanEnrichingProcessor implements SpanProcessor{ - forceFlush(): Promise<void>{ - return Promise.resolve(); - } - shutdown(): Promise<void>{ - return Promise.resolve(); - } - onStart(_span: Span): void{} - onEnd(span: ReadableSpan){ - span.attributes["CustomDimension1"] = "value1"; - span.attributes["CustomDimension2"] = "value2"; - } - } + onStart(_span: Span): void {} - tracerProvider.addSpanProcessor(new SpanEnrichingProcessor()); + onEnd(span: ReadableSpan) { + // Add custom dimensions to the span. + span.attributes["CustomDimension1"] = "value1"; + span.attributes["CustomDimension2"] = "value2"; + } +} ++// Add the SpanEnrichingProcessor instance to the NodeTracerProvider instance. +tracerProvider.addSpanProcessor(new SpanEnrichingProcessor()); ``` ##### [Python](#tab/python) Use the add [custom property example](#add-a-custom-property-to-a-span), but rep ```typescript ...+ // Import the SemanticAttributes class from the @opentelemetry/semantic-conventions package. const { SemanticAttributes } = require("@opentelemetry/semantic-conventions"); - class SpanEnrichingProcessor implements SpanProcessor{ - ... + // Create a new SpanEnrichingProcessor class. + class SpanEnrichingProcessor implements SpanProcessor { - onEnd(span){ - span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>"; - } + onEnd(span) { + // Set the HTTP_CLIENT_IP attribute on the span to the IP address of the client. 
+ span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>"; + } } ``` Use the add [custom property example](#add-a-custom-property-to-a-span), but rep ```typescript ...+ // Import the SemanticAttributes class from the @opentelemetry/semantic-conventions package. import { SemanticAttributes } from "@opentelemetry/semantic-conventions"; - class SpanEnrichingProcessor implements SpanProcessor{ - ... + // Create a new SpanEnrichingProcessor class. + class SpanEnrichingProcessor implements SpanProcessor { - onEnd(span: ReadableSpan){ - span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>"; - } + onEnd(span: ReadableSpan) { + // Set the ENDUSER_ID attribute on the span to the ID of the user. + span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>"; + } } ``` Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching c #### [Node.js](#tab/nodejs) ```typescript+ // Import the useAzureMonitor function and the logs module from the @azure/monitor-opentelemetry and @opentelemetry/api-logs packages, respectively. const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { logs } = require("@opentelemetry/api-logs"); import { Logger } from "@opentelemetry/sdk-logs"; + // Enable Azure Monitor integration. useAzureMonitor();++ // Get the logger for the "testLogger" logger name. const logger = (logs.getLogger("testLogger") as Logger);++ // Create a new log record. const logRecord = {- body: "testEvent", - attributes: { - "testAttribute1": "testValue1", - "testAttribute2": "testValue2", - "testAttribute3": "testValue3" - } + body: "testEvent", + attributes: { + "testAttribute1": "testValue1", + "testAttribute2": "testValue2", + "testAttribute3": "testValue3" + } };++ // Emit the log record. logger.emit(logRecord); ``` See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http): ```typescript+ // Import the useAzureMonitor function and the ApplicationInsightsOptions class from the @azure/monitor-opentelemetry package. const { useAzureMonitor, ApplicationInsightsOptions } = require("@azure/monitor-opentelemetry");++ // Import the HttpInstrumentationConfig class from the @opentelemetry/instrumentation-http package. const { HttpInstrumentationConfig }= require("@opentelemetry/instrumentation-http");++ // Import the IncomingMessage and RequestOptions classes from the http and https packages, respectively. const { IncomingMessage } = require("http"); const { RequestOptions } = require("https"); + // Create a new HttpInstrumentationConfig object. const httpInstrumentationConfig: HttpInstrumentationConfig = {- enabled: true, - ignoreIncomingRequestHook: (request: IncomingMessage) => { - // Ignore OPTIONS incoming requests - if (request.method === 'OPTIONS') { - return true; - } - return false; - }, - ignoreOutgoingRequestHook: (options: RequestOptions) => { - // Ignore outgoing requests with /test path - if (options.path === '/test') { - return true; - } - return false; + enabled: true, + ignoreIncomingRequestHook: (request: IncomingMessage) => { + // Ignore OPTIONS incoming requests. + if (request.method === 'OPTIONS') { + return true; }+ return false; + }, + ignoreOutgoingRequestHook: (options: RequestOptions) => { + // Ignore outgoing requests with the /test path. 
+ if (options.path === '/test') { + return true; + } + return false; + } };++ // Create a new ApplicationInsightsOptions object. const config: ApplicationInsightsOptions = {- instrumentationOptions: { - http: { - httpInstrumentationConfig - }, - }, + instrumentationOptions: { + http: { + httpInstrumentationConfig + } + } };++ // Enable Azure Monitor integration using the useAzureMonitor function and the ApplicationInsightsOptions object. useAzureMonitor(config); ``` See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code: ```typescript+ // Import the SpanKind and TraceFlags classes from the @opentelemetry/api package. const { SpanKind, TraceFlags } = require("@opentelemetry/api"); + // Create a new SpanEnrichingProcessor class. class SpanEnrichingProcessor {- ... - onEnd(span) { - if(span.kind == SpanKind.INTERNAL){ - span.spanContext().traceFlags = TraceFlags.NONE; - } + onEnd(span) { + // If the span is an internal span, set the trace flags to NONE. + if(span.kind == SpanKind.INTERNAL){ + span.spanContext().traceFlags = TraceFlags.NONE; }+ } } ``` You can use `opentelemetry-api` to get the trace ID or span ID. Get the request trace ID and the span ID in your code: ```javascript- const { trace } = require("@opentelemetry/api"); + // Import the trace module from the OpenTelemetry API. + const { trace } = require("@opentelemetry/api"); - let spanId = trace.getActiveSpan().spanContext().spanId; - let traceId = trace.getActiveSpan().spanContext().traceId; + // Get the span ID and trace ID of the active span. + let spanId = trace.getActiveSpan().spanContext().spanId; + let traceId = trace.getActiveSpan().spanContext().traceId; ``` ### [Python](#tab/python) |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | Use one of the following two ways to configure the connection string: - Use configuration object: ```typescript- const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); + // Import the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions class from the @azure/monitor-opentelemetry package. + const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); ++ // Create a new AzureMonitorOpenTelemetryOptions object. const options: AzureMonitorOpenTelemetryOptions = {- azureMonitorExporterOptions: { - connectionString: "<your connection string>" - } + azureMonitorExporterOptions: { + connectionString: "<your connection string>" + } };- useAzureMonitor(options); + // Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object. + useAzureMonitor(options); ``` ### [Python](#tab/python) To set the cloud role instance, see [cloud role instance](java-standalone-config Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md). ```typescript-... +// Import the useAzureMonitor function, the AzureMonitorOpenTelemetryOptions class, the Resource class, and the SemanticResourceAttributes class from the @azure/monitor-opentelemetry, @opentelemetry/resources, and @opentelemetry/semantic-conventions packages, respectively. const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); const { Resource } = require("@opentelemetry/resources"); const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions");-// - -// Setting role name and role instance -// - ++// Create a new Resource object with the following custom resource attributes: +// +// * service_name: my-service +// * service_namespace: my-namespace +// * service_instance_id: my-instance const customResource = new Resource({- [SemanticResourceAttributes.SERVICE_NAME]: "my-service", - [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace", - [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance", + [SemanticResourceAttributes.SERVICE_NAME]: "my-service", + [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace", + [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance", });++// Create a new AzureMonitorOpenTelemetryOptions object and set the resource property to the customResource object. const options: AzureMonitorOpenTelemetryOptions = {- resource: customResource + resource: customResource };++// Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object. useAzureMonitor(options); ``` Starting from 3.4.0, rate-limited sampling is available and is now the default. The sampler expects a sample rate of between 0 and 1 inclusive. 
A rate of 0.1 means approximately 10% of your traces are sent. ```typescript+// Import the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions class from the @azure/monitor-opentelemetry package. const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); +// Create a new AzureMonitorOpenTelemetryOptions object and set the samplingRatio property to 0.1. const options: AzureMonitorOpenTelemetryOptions = {- samplingRatio: 0.1 + samplingRatio: 0.1 };++// Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object. useAzureMonitor(options); ``` For more information about Java, see the [Java supplemental documentation](java- We support the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/identity/identity#credential-classes). ```typescript+// Import the useAzureMonitor function, the AzureMonitorOpenTelemetryOptions class, and the ManagedIdentityCredential class from the @azure/monitor-opentelemetry and @azure/identity packages, respectively. const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); const { ManagedIdentityCredential } = require("@azure/identity"); +// Create a new ManagedIdentityCredential object. +const credential = new ManagedIdentityCredential(); ++// Create a new AzureMonitorOpenTelemetryOptions object and set the credential property to the credential object. const options: AzureMonitorOpenTelemetryOptions = {- credential: new ManagedIdentityCredential() + credential: credential };++// Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object. useAzureMonitor(options); ``` For example: ```typescript+// Import the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions class from the @azure/monitor-opentelemetry package. const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); +// Create a new AzureMonitorOpenTelemetryOptions object and set the azureMonitorExporterOptions property to an object with the following properties: +// +// * connectionString: The connection string for your Azure Monitor Application Insights resource. +// * storageDirectory: The directory where the Azure Monitor OpenTelemetry exporter will store telemetry data when it is offline. +// * disableOfflineStorage: A boolean value that specifies whether to disable offline storage. const options: AzureMonitorOpenTelemetryOptions = {- azureMonitorExporterOptions = { - connectionString: "<Your Connection String>", - storageDirectory: "C:\\SomeDirectory", - disableOfflineStorage: false - } + azureMonitorExporterOptions: { + connectionString: "<Your Connection String>", + storageDirectory: "C:\\SomeDirectory", + disableOfflineStorage: false + } };++// Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object. useAzureMonitor(options); ``` For more information about Java, see the [Java supplemental documentation](java- 2. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node). 
```typescript+ // Import the useAzureMonitor function, the AzureMonitorOpenTelemetryOptions class, the trace module, the ProxyTracerProvider class, the BatchSpanProcessor class, the NodeTracerProvider class, and the OTLPTraceExporter class from the @azure/monitor-opentelemetry, @opentelemetry/api, @opentelemetry/sdk-trace-base, @opentelemetry/sdk-trace-node, and @opentelemetry/exporter-trace-otlp-http packages, respectively. const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); const { trace, ProxyTracerProvider } = require("@opentelemetry/api"); const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base'); const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node'); const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http'); + // Enable Azure Monitor integration. useAzureMonitor();++ // Create a new OTLPTraceExporter object. const otlpExporter = new OTLPTraceExporter();++ // Get the NodeTracerProvider instance. const tracerProvider = ((trace.getTracerProvider() as ProxyTracerProvider).getDelegate() as NodeTracerProvider);++ // Add a BatchSpanProcessor to the NodeTracerProvider instance. tracerProvider.addSpanProcessor(new BatchSpanProcessor(otlpExporter)); ``` - #### [Python](#tab/python) 1. Install the [opentelemetry-exporter-otlp](https://pypi.org/project/opentelemetry-exporter-otlp/) package. |
azure-monitor | Container Insights V2 Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-v2-migration.md | To transition to ContainerLogV2, we recommend the following approach. The following table highlights the key differences between using ContainerLog and ContainerLogV2 schema. -| Feature Differences | ContainerLog | ContainerLogV2 | +| Feature differences | ContainerLog | ContainerLogV2 | | - | -- | - | | Onboarding | Only configurable through the ConfigMap | Configurable through both the ConfigMap and DCR | | Pricing | Only compatible with full-priced analytics logs | Supports the low cost basic logs tier in addition to analytics logs | |
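As the table notes, ContainerLogV2 is configurable through the DCR as well as the ConfigMap. A trimmed sketch of the relevant DCR fragment is below; the `enableContainerLogV2` setting name reflects the Container Insights extension settings as we understand them, and the surrounding DCR boilerplate (streams, destinations, data flows) is omitted:

```json
{
  "extensionName": "ContainerInsights",
  "extensionSettings": {
    "dataCollectionSettings": {
      "interval": "1m",
      "namespaceFilteringMode": "Off",
      "enableContainerLogV2": true
    }
  }
}
```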
azure-monitor | Cost Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md | The default pricing for Log Analytics is a pay-as-you-go model that's based on i ## Data size calculation -Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record. It doesn't matter whether the data is sent from an agent or added during the ingestion process. This calculation includes any custom columns added by the [logs ingestion API](logs-ingestion-api-overview.md), [transformations](../essentials/data-collection-transformations.md) or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace. +Data volume is measured as the size of the data sent to be stored, in units of GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record. It doesn't matter whether the data is sent from an agent or added during the ingestion process. This calculation includes any custom columns added by the [logs ingestion API](logs-ingestion-api-overview.md), [transformations](../essentials/data-collection-transformations.md) or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace. >[!NOTE] >The billable data volume calculation is generally substantially smaller than the size of the entire incoming JSON-packaged event. On average, across all event types, the billed size is around 25 percent less than the incoming data size. It can be up to 50 percent for small events. The percentage includes the effect of the standard columns excluded from billing. It's essential to understand this calculation of billed data size when you estimate costs and compare other pricing models. Azure Commitment Discounts, such as discounts received from [Microsoft Enterpris ## Dedicated clusters -An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features, such as [customer-managed keys](customer-managed-keys.md), and use the same commitment-tier pricing model as workspaces, although they must have a commitment level of at least 100 GB per day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There's no pay-as-you-go option for clusters. +An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features, such as [customer-managed keys](customer-managed-keys.md), and use the same commitment-tier pricing model as workspaces, although they must have a commitment level of at least 500 GB per day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There's no pay-as-you-go option for clusters. The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. 
When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level by using the configured commitment tier level. This query isn't an exact replication of how usage is calculated, but it provide - See [Set daily cap on Log Analytics workspace](daily-cap.md) to control your costs by configuring a maximum volume that might be ingested in a workspace each day. - See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges. + |
azure-monitor | Vminsights Log Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md | Every record in VMBoundPort is identified by the following fields: |Ip | Port IP address (can be wildcard IP, *0.0.0.0*) | |Port |The Port number | |Protocol | The protocol. For example, *tcp* or *udp* (only *tcp* is currently supported).|- + The identity of a port is derived from the above five fields and is stored in the PortId property. This property can be used to quickly find records for a specific port across time. #### Metrics let remoteMachines = remote | summarize by RemoteMachine; ``` ## Performance records-Records with a type of *InsightsMetrics* have performance data from the guest operating system of the virtual machine. These records have the properties in the following table: +Records with a type of *InsightsMetrics* have performance data from the guest operating system of the virtual machine. These records are collected at 60-second intervals and have the properties in the following table: + | Property | Description | The performance counters currently collected into the *InsightsMetrics* table ar | LogicalDisk | BytesPerSecond | Logical Disk Bytes Per Second | BytesPerSecond | mountId - Mount ID of the device | +++ ## Next steps * If you're new to writing log queries in Azure Monitor, review [how to use Log Analytics](../logs/log-analytics-tutorial.md) in the Azure portal to write log queries. * Learn about [writing search queries](../logs/get-started-queries.md).++ |
azure-netapp-files | Access Smb Volume From Windows Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/access-smb-volume-from-windows-client.md | -# Access SMB volumes from Azure Active Directory joined Windows virtual machines +# Access SMB volumes from Azure Active Directory-joined Windows virtual machines You can use Azure Active Directory (Azure AD) with the Hybrid Authentication Management module to authenticate credentials in your hybrid cloud. This solution enables Azure AD to become the trusted source for both cloud and on-premises authentication, circumventing the need for clients connecting to Azure NetApp Files to join the on-premises AD domain. >[!NOTE] >-This process does not eliminate the need for Active Directory Domain Services (AD DS) as Azure NetApp Files requires connectivity to AD DS. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](understand-guidelines-active-directory-domain-service-site.md). +Using Azure AD for authenticating [hybrid user identities](../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure NetApp Files SMB shares. This means your end users can access Azure NetApp Files SMB shares without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. Cloud-only identities aren't currently supported. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](understand-guidelines-active-directory-domain-service-site.md). :::image type="content" source="../media/azure-netapp-files/diagram-windows-joined-active-directory.png" alt-text="Diagram of SMB volume joined to Azure Active Directory." lightbox="../media/azure-netapp-files/diagram-windows-joined-active-directory.png"::: The configuration process takes you through five processes: * Add the CIFS SPN to the computer account * Register a new Azure AD application * Sync CIFS password from AD DS to the Azure AD application registration -* Configure the Azure AD joined VM to use Kerberos authentication +* Configure the Azure AD-joined VM to use Kerberos authentication * Mount the Azure NetApp Files SMB volumes ### Add the CIFS SPN to the computer account The configuration process takes you through five processes: * `$servicePrincipalName`: The SPN details from mounting the Azure NetApp Files volume. Use the CIFS/FQDN format. For example: `CIFS/NETBIOS-1234.CONTOSO.COM` * `$targetApplicationID`: Application (client) ID of the Azure AD application. * `$domainCred`: use `Get-Credential` (should be an AD DS domain administrator)- * `$cloudCred`: use `Get-Credential` (should be an AD DS domain administrator) + * `$cloudCred`: use `Get-Credential` (should be an Azure AD global administrator) ```powershell $servicePrincipalName = "CIFS/NETBIOS-1234.CONTOSO.COM" The configuration process takes you through five processes: Import-AzureADKerberosOnPremServicePrincipal -Domain $domain -DomainCredential $domainCred -CloudCredential $cloudCred -ServicePrincipalName $servicePrincipalName -ApplicationId $targetApplicationId ``` -### Configure the Azure AD joined VM to use Kerberos authentication +### Configure the Azure AD-joined VM to use Kerberos authentication -1. Log in to the Azure AD joined VM using hybrid credentials with administrative rights (for example: user@mydirectory.onmicrosoft.com). +1. 
Log in to the Azure AD-joined VM using hybrid credentials with administrative rights (for example: user@mydirectory.onmicrosoft.com). 1. Configure the VM: 1. Navigate to **Edit group policy** > **Computer Configuration** > **Administrative Templates** > **System** > **Kerberos**. 1. Enable **Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon**. The configuration process takes you through five processes: ### Mount the Azure NetApp Files SMB volumes -1. Log into to the Azure AD joined VM using a hybrid identity account synced from AD DS. +1. Log in to the Azure AD-joined VM using a hybrid identity account synced from AD DS. 2. Mount the Azure NetApp Files SMB volume using the info provided in the Azure portal. For more information, see [Mount SMB volumes for Windows VMs](mount-volumes-vms-smb.md). 3. Confirm the mounted volume is using Kerberos authentication and not NTLM authentication. Open a command prompt, issue the `klist` command; observe the output in the cloud TGT (krbtgt) and CIFS server ticket information. |
azure-netapp-files | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md | Azure NetApp Files customer-managed keys are supported in the following regions: * UAE Central * UAE North * UK South-* US Gov Virginia (public preview) +* US Gov Virginia * West Europe * West US * West US 2 |
azure-portal | Azure Portal Add Remove Sort Favorites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-add-remove-sort-favorites.md | Title: Manage favorites in Azure portal -description: Learn how to add or remove services from the favorites list. Previously updated : 02/17/2022+description: Learn how to add or remove services from the Favorites list. Last updated : 09/27/2023 # Manage favorites -Add or remove items from your **Favorites** list in the Azure portal so that you can quickly go to the services you use most often. We've already added some common services to your **Favorites** list, but you may want to customize it. You're the only one who sees the changes you make to **Favorites**. +The **Favorites** list in the Azure portal lets you quickly go to the services you use most often. We've already added some common services to your **Favorites** list, but you may want to customize it by adding or removing items. You're the only one who sees the changes you make to **Favorites**. ++You can view your **Favorites** list in the Azure portal menu, or from the **Favorites** section within **All services**. ## Add a favorite service -Items that are listed under **Favorites** are selected from **All services**. Hover over a service name to display information and resources related to the service. A filled star icon ![Filled star icon](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-graystar.png) next to the service name indicates that the item appears on the **Favorites** list. Select the star icon to add a service to the **Favorites** list. +Items that are listed under **Favorites** are selected from **All services**. Within **All services**, you can hover over a service name to display information and resources related to the service. A filled star icon ![Filled star icon](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-graystar.png) next to the service name indicates that the item appears in the **Favorites** list. If the star icon isn't filled in for a service, select the star icon to add it to your **Favorites** list. In this example, we'll add **Cost Management + Billing** to the **Favorites** list. In this example, we'll add **Cost Management + Billing** to the **Favorites** li :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-new-all-services.png" alt-text="Screenshot showing All services in the Azure portal menu."::: -1. Enter the word "cost" in the search field. Services that have "cost" in the title or that have "cost" as a keyword are shown. +1. Enter the word "cost" in the **Filter services** field near the top of the **All services** page. Services that have "cost" in the title or that have "cost" as a keyword are shown. :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-find-service.png" alt-text="Screenshot showing a search in All services in the Azure portal."::: In this example, we'll add **Cost Management + Billing** to the **Favorites** li ## Remove an item from Favorites -You can now remove an item directly from the **Favorites** list. +You can remove items directly from the **Favorites** list. -1. In the **Favorites** section of the portal menu, hover over the name of the service you want to remove. +1. In the **Favorites** section of the portal menu, or within the **Favorites** section of **All services**, hover over the name of the service you want to remove. 
:::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-remove.png" alt-text="Screenshot showing how to remove a service from Favorites in the Azure portal."::: |
azure-portal | Azure Portal Dashboards Create Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards-create-programmatically.md | Title: Programmatically create Azure Dashboards description: Use a dashboard in the Azure portal as a template to programmatically create Azure Dashboards. Includes JSON reference. -+ Last updated 09/05/2023 |
azure-resource-manager | Bicep Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md | Title: Bicep config file description: Describes the configuration file for your Bicep deployments Previously updated : 09/11/2023 Last updated : 09/27/2023 # Configure your Bicep environment You can enable preview features by adding them under `experimentalFeaturesEnabled` in your `bicepconfig.json` file (a hedged sample follows the feature list below). The sample enables `userDefinedTypes` and `extensibility`. The available experimental features include: -- **assertions**: Should be enabled in tandem with the `testFramework` experimental feature flag for expected functionality. Allows you to author boolean assertions using the `assert` keyword, comparing the actual value of a parameter, variable, or resource name to an expected value. Assert statements can only be written directly within the Bicep file whose resources they reference.+- **assertions**: Should be enabled in tandem with the `testFramework` experimental feature flag for expected functionality. Allows you to author boolean assertions using the `assert` keyword, comparing the actual value of a parameter, variable, or resource name to an expected value. Assert statements can only be written directly within the Bicep file whose resources they reference. For more information, see [Bicep Experimental Test Framework](https://github.com/Azure/bicep/issues/11967). - **compileTimeImports**: Allows you to use symbols defined in another template. See [Import user-defined data types](./bicep-import.md#import-user-defined-data-types-preview). - **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). - **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file. - **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245). - **symbolicNameCodegen**: Allows the ARM template layer to use a new schema to represent resources as an object dictionary rather than an array of objects. This feature improves the semantic equivalence of the Bicep and ARM templates, resulting in more reliable code generation. Enabling this feature has no effect on the Bicep layer's functionality.-- **testFramework**: Should be enabled in tandem with the `assertions` experimental feature flag for expected functionality. Allows you to author client-side, offline unit-test blocks that reference Bicep files and mock deployment parameters in a separate `test.bicep` file using the new `test` keyword. Test blocks can be run with the command *bicep test <filepath_to_file_with_test_blocks>*, which runs all `assert` statements in the Bicep files referenced by the test blocks.+- **testFramework**: Should be enabled in tandem with the `assertions` experimental feature flag for expected functionality. Allows you to author client-side, offline unit-test blocks that reference Bicep files and mock deployment parameters in a separate `test.bicep` file using the new `test` keyword. Test blocks can be run with the command *bicep test <filepath_to_file_with_test_blocks>*, which runs all `assert` statements in the Bicep files referenced by the test blocks. 
For more information, see [Bicep Experimental Test Framework](https://github.com/Azure/bicep/issues/11967). - **userDefinedFunctions**: Allows you to define your own custom functions. See [User-defined functions in Bicep](./user-defined-functions.md). ## Next steps |
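A minimal sketch of the `bicepconfig.json` sample described above, assuming your Bicep CLI version accepts these feature names under `experimentalFeaturesEnabled` (the set of recognized flags changes between releases, so verify against your installed CLI's feature list):

```json
{
  "experimentalFeaturesEnabled": {
    "userDefinedTypes": true,
    "extensibility": true
  }
}
```

Enabling the paired `assertions` and `testFramework` flags works the same way: add both keys with a value of `true`.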
azure-resource-manager | Delete Resource Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md | Title: Delete resource group and resources description: Describes how to delete resource groups and resources. It describes how Azure Resource Manager orders the deletion of resources when deleting a resource group. It describes the response codes and how Resource Manager handles them to determine if the deletion succeeded. Previously updated : 04/10/2023 Last updated : 09/27/2023 content_well_notification: - AI-contribution To delete a resource group, you need access to the delete action for the **Micro For a list of operations, see [Azure resource provider operations](../../role-based-access-control/resource-provider-operations.md). For a list of built-in roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). -If you have the required access, but the delete request fails, it may be because there's a [lock on the resources or resource group](lock-resources.md). Even if you didn't manually lock a resource group, it may have been [automatically locked by a related service](lock-resources.md#managed-applications-and-locks). Or, the deletion can fail if the resources are connected to resources in other resource groups that aren't being deleted. For example, you can't delete a virtual network with subnets that are still in use by a virtual machine. +If you have the required access, but the delete request fails, it may be because there's a [lock on the resources or resource group](lock-resources.md). Even if you didn't manually lock a resource group, [a related service may have automatically locked it](lock-resources.md#managed-applications-and-locks). Or, the deletion can fail if the resources are connected to resources in other resource groups that aren't being deleted. For example, you can't delete a virtual network with subnets that are still in use by a virtual machine. -## Accidental deletion +## Can I recover a deleted resource group? -If you accidentally delete a resource group or resource, in some situations it might be possible to recover it. +No, you can't recover a deleted resource group. However, you might be able to restore some recently deleted resources. -Some resource types support *soft delete*. You might have to configure soft delete before you can use it. For more information about enabling soft delete, see the documentation for [Azure Key Vault](../../key-vault/general/soft-delete-overview.md), [Azure Backup](../../backup/backup-azure-delete-vault.md), and [Azure Storage](../../storage/blobs/soft-delete-container-overview.md). +Some resource types support *soft delete*. You might have to configure soft delete before you can use it. For information about enabling soft delete, see: -You can also [open an Azure support case](../../azure-portal/supportability/how-to-create-azure-support-request.md). Provide as much detail as you can about the deleted resources, including their resource IDs, types, and resource names, and request that the support engineer check if the resources can be restored. 
+* [Azure Key Vault soft-delete overview](../../key-vault/general/soft-delete-overview.md) +* [Azure Storage - Soft delete for containers](../../storage/blobs/soft-delete-container-overview.md) +* [Azure Storage - Soft delete for blobs](../../storage/blobs/soft-delete-blob-overview.md) +* [Soft delete for Azure Backup](../../backup/backup-azure-security-feature-cloud.md) +* [Soft delete for SQL server in Azure VM and SAP HANA in Azure VM workloads](../../backup/soft-delete-sql-saphana-in-azure-vm.md) +* [Soft delete for virtual machines](../../backup/soft-delete-virtual-machines.md) ++To restore deleted resources, see: ++* [Recover deleted Azure AI services resources](../../ai-services/manage-resources.md) +* [Microsoft Entra - Recover from deletions](../../active-directory/architecture/recover-from-deletions.md) ++You can also [open an Azure support case](../../azure-portal/supportability/how-to-create-azure-support-request.md). Provide as much detail as you can about the deleted resources, including their resource IDs, types, and resource names. Request that the support engineer check if the resources can be restored. > [!NOTE] > Recovery of deleted resources is not possible under all circumstances. A support engineer will investigate your scenario and advise you whether it's possible. |
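For readers who want the concrete command, here's a minimal Azure CLI sketch of the resource group deletion this article covers; the group name `exampleGroup` is a placeholder, and the command fails if a lock or cross-group dependency described above is present:

```azurecli
# Delete the resource group and every resource it contains.
# --yes skips the confirmation prompt; --no-wait returns immediately
# while the deletion continues in the background.
az group delete --name exampleGroup --yes --no-wait
```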
azure-resource-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md | Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 02/28/2023 Last updated : 09/27/2023 # What is Azure Resource Manager? All capabilities that are available in the portal are also available through Pow If you're new to Azure Resource Manager, there are some terms you might not be familiar with. * **resource** - A manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources. Resource groups, subscriptions, management groups, and tags are also examples of resources.-* **resource group** - A container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. See [Resource groups](#resource-groups). +* **resource group** - A container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. See [What is a resource group?](#resource-groups). * **resource provider** - A service that supplies Azure resources. For example, a common resource provider is `Microsoft.Compute`, which supplies the virtual machine resource. `Microsoft.Storage` is another common resource provider. See [Resource providers and types](resource-providers-and-types.md). * **declarative syntax** - Syntax that lets you state "Here's what I intend to create" without having to write the sequence of programming commands to create it. ARM templates and Bicep files are examples of declarative syntax. In those files, you define the properties for the infrastructure to deploy to Azure. * **ARM template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, management group, or tenant. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../templates/overview.md). For information about managing identities and access, see [Azure Active Director You can deploy templates to tenants, management groups, subscriptions, or resource groups. -## Resource groups +## <a name="resource-groups"></a>What is a resource group? ++A resource group is a container that enables you to manage related resources for an Azure solution. By using the resource group, you can coordinate changes to the related resources. For example, you can deploy an update to the resource group and have confidence that the resources are updated in a coordinated operation. Or, when you're finished with the solution, you can delete the resource group and know that all of the resources are deleted. There are some important factors to consider when defining your resource group: There are some important factors to consider when defining your resource group: To ensure state consistency for the resource group, all [control plane operations](./control-plane-and-data-plane.md) are routed through the resource group's location. 
When selecting a resource group location, we recommend that you select a location close to where your control operations originate. Typically, this location is the one closest to your current location. This routing requirement only applies to control plane operations for the resource group. It doesn't affect requests that are sent to your applications. - If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them. + If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions still function as expected, but you can't update them. For more information about building reliable applications, see [Designing reliable Azure applications](/azure/architecture/checklist/resiliency-per-service). There are some important factors to consider when defining your resource group: The Azure Resource Manager service is designed for resiliency and continuous availability. Resource Manager and control plane operations (requests sent to `management.azure.com`) in the REST API are: -* Distributed across regions. Azure Resource Manager has a separate instance in each region of Azure, meaning that a failure of the Azure Resource Manager instance in one region won't impact the availability of Azure Resource Manager or other Azure services in another region. Although Azure Resource Manager is distributed across regions, some services are regional. This distinction means that while the initial handling of the control plane operation is resilient, the request may be susceptible to regional outages when forwarded to the service. +* Distributed across regions. Azure Resource Manager has a separate instance in each region of Azure, meaning that a failure of the Azure Resource Manager instance in one region doesn't affect the availability of Azure Resource Manager or other Azure services in another region. Although Azure Resource Manager is distributed across regions, some services are regional. This distinction means that while the initial handling of the control plane operation is resilient, the request may be susceptible to regional outages when forwarded to the service. * Distributed across Availability Zones (and regions) in locations that have multiple Availability Zones. This distribution ensures that when a region loses one or more zones, Azure Resource Manager can either fail over to another zone or to another region to continue to provide control plane capability for the resources. |
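To ground the *declarative syntax* term defined earlier in this overview, here's a minimal, hedged Bicep sketch; the resource name and API version are illustrative placeholders, so substitute values valid in your environment:

```bicep
// Declares the desired end state; Resource Manager works out the
// create-or-update steps needed to reach it.
param location string = resourceGroup().location

resource exampleStorage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'examplestg001' // placeholder; storage account names must be globally unique
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```

Deploying this file repeatedly to the same resource group converges on the same state, which is the consistency property the overview attributes to declarative templates.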
azure-video-indexer | Accounts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md | |
azure-video-indexer | Add Contributor Role On The Media Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/add-contributor-role-on-the-media-service.md | |
azure-video-indexer | Audio Effects Detection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection-overview.md | -# Audio effects detection +# Audio effects detection + Audio effects detection is an Azure AI Video Indexer feature that detects insights on various acoustic events and classifies them into acoustic categories. Audio effects detection can detect and classify categories such as laughter, crowd reactions, alarms, and sirens. |
azure-video-indexer | Audio Effects Detection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection.md | |
azure-video-indexer | Clapperboard Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/clapperboard-metadata.md | In the following example, the board contains the following fields: #### View the insight + To see the instances on the website, select **Insights** and scroll to **Clapper boards**. You can hover over each clapper board, or unfold **Show/Hide clapper board info** and see the metadata: > [!div class="mx-imgBorder"] |
azure-video-indexer | Concepts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md | -# Azure AI Video Indexer terminology & concepts +# Azure AI Video Indexer terminology & concepts + This article gives a brief overview of Azure AI Video Indexer terminology and concepts. Also, review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context) |
azure-video-indexer | Connect Classic Account To Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md | -# Connect an existing classic paid Azure AI Video Indexer account to ARM-based account +# Connect an existing classic paid Azure AI Video Indexer account to ARM-based account + This article shows how to connect an existing classic paid Azure AI Video Indexer account to an Azure Resource Manager (ARM)-based account (recommended). To create a new ARM-based account, see [create a new account](create-account-portal.md). To understand the Azure AI Video Indexer account types, review [account types](accounts-overview.md). |
azure-video-indexer | Connect To Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md | |
azure-video-indexer | Considerations When Use At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md | |
azure-video-indexer | Create Account Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md | |
azure-video-indexer | Customize Brands Model Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-overview.md | |
azure-video-indexer | Customize Brands Model With Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-api.md | |
azure-video-indexer | Customize Brands Model With Website | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-website.md | |
azure-video-indexer | Customize Content Models Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-content-models-overview.md | |
azure-video-indexer | Customize Language Model Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-overview.md | -# Customize a Language model with Azure AI Video Indexer +# Customize a Language model with Azure AI Video Indexer + Azure AI Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Langu |