Updates from: 03/09/2022 02:10:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app-with-api.md
A computer that's running either:
# [Visual Studio](#tab/visual-studio)
-* [Visual Studio 2019 16.8 or later](https://visualstudio.microsoft.com/downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=inline+link&utm_content=download+vs2019) with the **ASP.NET and web development** workload
-* [.NET 5.0 SDK](https://dotnet.microsoft.com/download/dotnet)
+* [Visual Studio 2022 17.0 or later](https://visualstudio.microsoft.com/downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=inline+link&utm_content=download+vs2019) with the **ASP.NET and web development** workload
+* [.NET 6.0 SDK](https://dotnet.microsoft.com/download/dotnet)
# [Visual Studio Code](#tab/visual-studio-code)
* [Visual Studio Code](https://code.visualstudio.com/download)
* [C# for Visual Studio Code (latest version)](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
-* [.NET 5.0 SDK](https://dotnet.microsoft.com/download/dotnet)
+* [.NET 6.0 SDK](https://dotnet.microsoft.com/download/dotnet)
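As a quick sanity check of the .NET 6.0 SDK prerequisite above (a sketch, not part of the article), the output of `dotnet --list-sdks` can be parsed to confirm a 6.x SDK is installed; the sample output below is illustrative:

```python
# Hypothetical helper: parse `dotnet --list-sdks` output and report
# whether any installed SDK matches the required major version.
def has_sdk(list_sdks_output: str, major: int) -> bool:
    for line in list_sdks_output.splitlines():
        version = line.split(" ", 1)[0]        # e.g. "6.0.100"
        if version.split(".")[0] == str(major):
            return True
    return False

# Illustrative sample of `dotnet --list-sdks` output:
sample = "5.0.408 [/usr/share/dotnet/sdk]\n6.0.100 [/usr/share/dotnet/sdk]"
print(has_sdk(sample, 6))  # True
```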
active-directory-domain-services Create Resource Forest Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-resource-forest-powershell.md
Previously updated : 07/27/2020 Last updated : 03/07/2022
To complete this article, you need the following resources and privileges:
* Install and configure Azure AD PowerShell.
* If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Azure AD](/powershell/azure/active-directory/install-adv2).
* Make sure that you sign in to your Azure AD tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-* You need *global administrator* privileges in your Azure AD tenant to enable Azure AD DS.
-* You need *Contributor* privileges in your Azure subscription to create the required Azure AD DS resources.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need the [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#contributor) Azure role to create the required Azure AD DS resources.
## Sign in to the Azure portal
active-directory-domain-services Deploy Azure App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-azure-app-proxy.md
Previously updated : 07/09/2020 Last updated : 03/07/2022
active-directory-domain-services Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/feature-availability.md
+
+ Title: Azure Active Directory Domain Services (Azure AD DS) feature availability in Azure Government
+description: Learn which Azure AD DS features are available in Azure Government.
+ Last updated : 03/07/2022
+# Azure Active Directory Domain Services feature availability
+
+<!-- Jeremy said there are additional features that don't fit nicely in this list that we need to add later -->
+
+The following table lists Azure Active Directory Domain Services (Azure AD DS) feature availability in Azure Government.
++
+| Feature | Availability |
+|---|:---:|
+| Configure LDAPS | &#x2705; |
+| Create trusts | &#x2705; |
+| Create replica sets | &#x2705; |
+| Configure and scope user and group sync | &#x2705; |
+| Configure password hash sync | &#x2705; |
+| Configure password and account lockout policies | &#x2705; |
+| Manage Group Policy | &#x2705; |
+| Manage DNS | &#x2705; |
+| Email notifications | &#x2705; |
+| Configure Kerberos constrained delegation | &#x2705; |
+| Auditing and Azure Monitor Workbooks templates | &#x2705; |
+| Domain join Windows VMs | &#x2705; |
+| Domain join Linux VMs | &#x2705; |
+| Deploy Azure AD Application Proxy | &#x2705; |
+| Enable profile sync for SharePoint | &#x2705; |
+
+## Next steps
+
+- [FAQs](faqs.yml)
+- [Service updates](https://azure.microsoft.com/updates/?product=active-directory-ds)
+- [Pricing](https://azure.microsoft.com/pricing/details/active-directory-ds/)
active-directory-domain-services Migrate From Classic Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/migrate-from-classic-vnet.md
Previously updated : 08/11/2021 Last updated : 03/07/2022
To prepare the managed domain for migration, complete the following steps:
1. Create a variable to hold the credentials for use by the migration script using the [Get-Credential][get-credential] cmdlet.
- The user account you specify needs *global administrator* privileges in your Azure AD tenant to enable Azure AD DS and then *Contributor* privileges in your Azure subscription to create the required Azure AD DS resources.
+ The user account you specify needs the [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS and the [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#contributor) Azure role to create the required Azure AD DS resources.
When prompted, enter an appropriate user account and password:
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md
Previously updated : 03/08/2021 Last updated : 03/07/2022
To complete this article, you need the following resources and privileges:
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant.
* If needed, complete the tutorial to [create and configure an Azure Active Directory Domain Services managed domain][tutorial-create-instance].
-* You need *global administrator* privileges in your Azure AD tenant to change the Azure AD DS synchronization scope.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to change the Azure AD DS synchronization scope.
## Scoped synchronization overview
active-directory-domain-services Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scoped-synchronization.md
Previously updated : 01/20/2021 Last updated : 03/07/2022
To complete this article, you need the following resources and privileges:
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant.
* If needed, complete the tutorial to [create and configure an Azure Active Directory Domain Services managed domain][tutorial-create-instance].
-* You need *global administrator* privileges in your Azure AD tenant to change the Azure AD DS synchronization scope.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to change the Azure AD DS synchronization scope.
## Scoped synchronization overview
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
Previously updated : 07/09/2020 Last updated : 03/04/2022
To complete this article, you need the following resources:
* Install and configure Azure AD PowerShell.
* If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Azure AD](/powershell/azure/active-directory/install-adv2).
* Make sure that you sign in to your Azure AD tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-* You need *global administrator* privileges in your Azure AD tenant to enable Azure AD DS.
-* You need *Contributor* privileges in your Azure subscription to create the required Azure AD DS resources.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need the Domain Services Contributor Azure role to create the required Azure AD DS resources.
## DNS naming requirements
active-directory-domain-services Tutorial Configure Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-ldaps.md
Previously updated : 03/23/2021 Last updated : 03/07/2022 #Customer intent: As an identity administrator, I want to secure access to an Azure Active Directory Domain Services managed domain using secure lightweight directory access protocol (LDAPS)
To complete this tutorial, you need the following resources and privileges:
* If needed, [create and configure an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance].
* The *LDP.exe* tool installed on your computer.
* If needed, [install the Remote Server Administration Tools (RSAT)][rsat] for *Active Directory Domain Services and LDAP*.
-* You need global administrator privileges in your Azure AD tenant to enable secure LDAP.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable secure LDAP.
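Once secure LDAP is enabled as this tutorial describes, clients must validate the certificate when connecting on port 636. As an illustrative sketch (not part of the article), this is the kind of TLS context an LDAPS client would use; Python's `ssl` defaults already enforce chain and hostname validation:

```python
import ssl

# Sketch: build a TLS context like the one an LDAPS client (port 636)
# should use, so the managed domain's certificate is actually verified.
def ldaps_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()  # verifies the cert chain by default
    ctx.check_hostname = True           # reject certs for the wrong hostname
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

ctx = ldaps_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

A real client would then wrap a socket to the managed domain's external LDAPS endpoint with this context.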
## Sign in to the Azure portal
active-directory-domain-services Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-networking.md
Previously updated : 07/06/2020 Last updated : 03/07/2022 #Customer intent: As an identity administrator, I want to create and configure a virtual network subnet or network peering for application workloads in an Azure Active Directory Domain Services managed domain
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need *global administrator* privileges in your Azure AD tenant to configure Azure AD DS.
-* You need *Contributor* privileges in your Azure subscription to create the required Azure AD DS resources.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need the Domain Services Contributor Azure role to create the required Azure AD DS resources.
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant.
* If needed, the first tutorial [creates and configures an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance].
active-directory-domain-services Tutorial Create Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-forest-trust.md
Previously updated : 10/19/2021 Last updated : 03/07/2022 #Customer intent: As an identity administrator, I want to create a one-way outbound forest from an Azure Active Directory Domain Services resource forest to an on-premises Active Directory Domain Services forest to provide authentication and resource access between forests.
To complete this tutorial, you need the following resources and privileges:
## Sign in to the Azure portal
-In this tutorial, you create and configure the outbound forest trust from Azure AD DS using the Azure portal. To get started, first sign in to the [Azure portal](https://portal.azure.com). Global administrator permissions are required to modify an Azure AD DS instance.
+In this tutorial, you create and configure the outbound forest trust from Azure AD DS using the Azure portal. To get started, first sign in to the [Azure portal](https://portal.azure.com). You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to modify an Azure AD DS instance.
## Networking considerations
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
Previously updated : 06/01/2021 Last updated : 03/04/2022 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain and define advanced configuration options so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need *global administrator* privileges in your Azure AD tenant to enable Azure AD DS.
-* You need *Contributor* privileges in your Azure subscription to create the required Azure AD DS resources.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need the Domain Services Contributor Azure role to create the required Azure AD DS resources.
Although not required for Azure AD DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Previously updated : 12/15/2021 Last updated : 03/08/2022 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need *global administrator* privileges in your Azure AD tenant to enable Azure AD DS.
-* You need *Contributor* privileges in your Azure subscription to create the required Azure AD DS resources.
-* A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers that can't perform general internet queries might prevent the ability to create a managed domain.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need the Domain Services Contributor Azure role to create the required Azure AD DS resources.
+* A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers that can't perform general internet queries might block the ability to create a managed domain.
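The new DNS requirement above can be sanity-checked before deployment. This is an illustrative sketch (not from the article); the endpoint list is a placeholder — substitute the storage and service names your deployment actually depends on:

```python
import socket

# Sketch: confirm the DNS servers on the virtual network can resolve
# the names a managed-domain deployment depends on.
def can_resolve(name: str) -> bool:
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

# Placeholder endpoints for illustration only; replace with the real
# storage/service hostnames your environment requires.
for endpoint in ["localhost"]:
    print(endpoint, can_resolve(endpoint))
```

Run this from a VM on the target virtual network so the check exercises the same DNS servers the managed domain will use.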
Although not required for Azure AD DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
active-directory Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md
Title: Azure AD feature availability in Azure Government
+ Title: Azure Active Directory (Azure AD) feature availability in Azure Government
description: Learn which Azure AD features are available in Azure Government.
-# Cloud feature availability
+# Azure Active Directory feature availability
<!-- Jeremy said there are additional features that don't fit nicely in this list that we need to add later -->
-This following table lists Azure AD feature availability in Azure Government.
+The following tables list Azure AD feature availability in Azure Government.
+## Azure Active Directory
|Service | Feature | Availability |
|:---|:---|:---:|
This following table lists Azure AD feature availability in Azure Government.
|Additional risk detected | &#x2705; |
-## HR-provisioning apps
+## HR provisioning apps
| HR-provisioning app | Availability |
|---|:--:|
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
Microsoft provides communication templates for end users. Download the [authenti
Users register their passwordless method as a part of the **combined security information workflow** at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). Azure AD logs registration of security keys and the Microsoft Authenticator app, and any other changes to the authentication methods.
-For the first-time user who doesn't have a password, admins can provide a [Temporary Access Passcode](howto-authentication-temporary-access-pass.md) to register their security information in [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo.md) . This is a time-limited passcode and satisfies strong authentication requirements. **Temporary Access Pass is a per-user process**.
+For the first-time user who doesn't have a password, admins can provide a [Temporary Access Pass](howto-authentication-temporary-access-pass.md) to register their security information at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). This is a time-limited passcode and satisfies strong authentication requirements. **Temporary Access Pass is a per-user process**.
This method can also be used for easy recovery when the user has lost or forgotten their authentication factor such as security key or Microsoft Authenticator app but needs to sign in to **register a new strong authentication method**.
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
Password policies in the on-premises AD DS environment may prevent password rese
If you update the group policy, wait for the updated policy to replicate, or use the `gpupdate /force` command.

> [!Note]
-> For passwords to be changed immediately, *Minimum password age* must be set to 0. However, if users adhere to the on-premises policies, and the *Minimum password age* is set to a value greater than zero, password writeback still works after the on-premises policies are evaluated.
+> If you need to allow users to change or reset passwords more than one time per day, *Minimum password age* must be set to 0. Password writeback will work after on-premises password policies are successfully evaluated.
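The updated note's *Minimum password age* behavior can be modeled in a few lines. This is an illustrative sketch of the policy logic, not the AD DS implementation; dates and values are made up:

```python
from datetime import datetime, timedelta

# Toy model of how a *Minimum password age* policy gates a second
# password change within the same day.
def change_allowed(last_change: datetime, now: datetime, min_age_days: int) -> bool:
    return now - last_change >= timedelta(days=min_age_days)

last = datetime(2022, 3, 8, 9, 0)            # first change at 09:00
later_same_day = datetime(2022, 3, 8, 15, 0) # second attempt at 15:00

print(change_allowed(last, later_same_day, min_age_days=1))  # False
print(change_allowed(last, later_same_day, min_age_days=0))  # True
```

With the minimum age at 0, the same-day change succeeds; any larger value blocks it until the age is satisfied, which is why the note calls out 0 for more than one change per day.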
## Enable password writeback in Azure AD Connect
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
In the following example, the `GraphBeta` section specifies these settings.
"AzureAd": { "Instance": "https://login.microsoftonline.com/", "ClientId": "[Client_id-of-web-app-eg-2ec40e65-ba09-4853-bcde-bcb60029e596]",
- "TenantId": "common"
+ "TenantId": "common",
  // To call an API
  "ClientSecret": "[Copy the client secret added to the app from the Azure portal]",
Instead of a client secret, you can provide a client certificate. The following
"AzureAd": { "Instance": "https://login.microsoftonline.com/", "ClientId": "[Client_id-of-web-app-eg-2ec40e65-ba09-4853-bcde-bcb60029e596]",
- "TenantId": "common"
+ "TenantId": "common",
  // To call an API
  "ClientCertificates": [
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
For more information, see [License requirements](access-reviews-overview.md#lice
A multi-stage review allows the administrator to define two or three sets of reviewers to complete a review one after another. In a single-stage review, all reviewers make a decision within the same period and the last reviewer to make a decision "wins". In a multi-stage review, two or three independent sets of reviewers make a decision within their own stage, and the next stage doesn't happen until a decision is made in the previous stage. Multi-stage reviews can be used to reduce the burden on later-stage reviewers, allow for escalation of reviewers, or have independent groups of reviewers agree on decisions.

> [!WARNING]
-> Data of users included in multi-stage access reviews are a part of the audit record at the start of the review. Administrators may delete the data at any time by deleting the multi-stage access review series.
+> Data of users included in multi-stage access reviews are a part of the audit record at the start of the review. Administrators may delete the data at any time by deleting the multi-stage access review series.
1. After you have selected the resource and scope of your review, move on to the **Reviews** tab.
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
These risks can be calculated in real-time or calculated offline using Microsoft
| Token Issuer Anomaly | Offline | This risk detection indicates the SAML token issuer for the associated SAML token is potentially compromised. The claims included in the token are unusual or match known attacker patterns. |
| Malware linked IP address | Offline | This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection is determined by correlating IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. <br><br> **[This detection has been deprecated](../fundamentals/whats-new-archive.md#planned-deprecationmalware-linked-ip-address-detection-in-identity-protection)**. Identity Protection will no longer generate new "Malware linked IP address" detections. Customers who currently have "Malware linked IP address" detections in their tenant will still be able to view, remediate, or dismiss them until the 90-day detection retention time is reached.|
| Suspicious browser | Offline | Suspicious browser detection indicates anomalous behavior based on suspicious sign-in activity across multiple tenants from different countries in the same browser. |
-| Unfamiliar sign-in properties | Real-time | This risk detection type considers past sign-in history (IP, Latitude / Longitude and ASN) to look for anomalous sign-ins. The system stores information about previous locations used by a user, and considers these "familiar" locations. The risk detection is triggered when the sign-in occurs from a location that is not already in the list of familiar locations. Newly created users will be in "learning mode" for a while where unfamiliar sign-in properties risk detections will be turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity. The system also ignores sign-ins from familiar devices, and locations that are geographically close to a familiar location. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols do not have modern properties such as client ID, there is limited telemetry to reduce false positives. We recommend our customers to move to modern authentication. <br><br> Unfamiliar sign-in properties can be detected on both interactive and non-interactive sign-ins. When this detection is detected on non-interactive sign-ins, it deserves increased scrutiny due to the risk of token replay attacks. |
+| Unfamiliar sign-in properties | Real-time | This risk detection type considers past sign-in history (IP, Latitude / Longitude and ASN) to look for anomalous sign-ins. The system stores information about previous locations used by a user, and considers these "familiar" locations. The risk detection is triggered when the sign-in occurs from a location that is not already in the list of familiar locations. Newly created users will be in "learning mode" for a while where unfamiliar sign-in properties risk detections will be turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity and after a secure password reset. The system also ignores sign-ins from familiar devices, and locations that are geographically close to a familiar location. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols do not have modern properties such as client ID, there is limited telemetry to reduce false positives. We recommend our customers to move to modern authentication. <br><br> Unfamiliar sign-in properties can be detected on both interactive and non-interactive sign-ins. When this detection is detected on non-interactive sign-ins, it deserves increased scrutiny due to the risk of token replay attacks. |
| Admin confirmed user compromised | Offline | This detection indicates an admin has selected 'Confirm user compromised' in the Risky users UI or using riskyUsers API. To see which admin has confirmed this user compromised, check the user's risk history (via UI or API). |
| Malicious IP address | Offline | This detection indicates sign-in from a malicious IP address. An IP address is considered malicious based on high failure rates because of invalid credentials received from the IP address or other IP reputation sources. |
| Suspicious inbox manipulation rules | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection profiles your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate that the user's account is compromised, that messages are being intentionally hidden, and that the mailbox is being used to distribute spam or malware in your organization. |
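The updated "Unfamiliar sign-in properties" behavior (no detections during learning mode, a five-day minimum, then flagging locations outside the familiar set) can be sketched as a toy model. This is illustrative only — the real detection weighs many more signals (IP, latitude/longitude, ASN, device familiarity):

```python
from datetime import datetime, timedelta

LEARNING_MODE_MIN = timedelta(days=5)  # minimum learning duration per the doc

# Toy model of the "unfamiliar sign-in properties" trigger: suppress
# detections during learning mode, then flag unfamiliar locations.
def is_unfamiliar(location: str, familiar: set,
                  user_created: datetime, sign_in: datetime) -> bool:
    if sign_in - user_created < LEARNING_MODE_MIN:
        return False  # still learning the user's behavior
    return location not in familiar

created = datetime(2022, 3, 1)
familiar = {"Seattle", "Redmond"}

print(is_unfamiliar("Paris", familiar, created, datetime(2022, 3, 2)))    # False (learning mode)
print(is_unfamiliar("Paris", familiar, created, datetime(2022, 3, 20)))   # True
print(is_unfamiliar("Seattle", familiar, created, datetime(2022, 3, 20))) # False
```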
active-directory Workbook Risk Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-risk-analysis.md
++
+ Title: Identity protection risk analysis workbook in Azure AD | Microsoft Docs
+description: Learn how to use the identity protection risk analysis workbook.
+
+documentationcenter: ''
++
+editor: ''
+ Last updated : 03/08/2022
+# Identity protection risk analysis workbook
+
+Azure AD Identity Protection detects, remediates, and prevents compromised identities. As an IT administrator, you want to understand risk trends in your organization and opportunities for better policy configuration. With the Identity Protection Risk Analysis Workbook, you can answer common questions about your Identity Protection implementation.
+
+This article provides you with an overview of this workbook.
++
+## Description
+
+![Workbook category](./media/workbook-risk-analysis/workbook-category.png)
++
+As an IT administrator, you need to understand trends in identity risks and gaps in your policy implementations to ensure you are best protecting your organization from identity compromise. The identity protection risk analysis workbook helps you analyze the state of risk in your organization.
+
+**This workbook:**
+
+- Provides visualizations of where in the world risk is being detected.
+
+- Allows you to understand the trends in real-time vs. offline risk detections.
+
+- Provides insight into how effective you are at responding to risky users.
++
+
+
+
+## Sections
+
+This workbook has five sections:
+
+- Heatmap of risk detections
+
+- Offline vs real-time risk detections
+
+- Risk detection trends
+
+- Risky users
+
+- Summary
++++
+
++
+## Filters
++
+This workbook supports setting a time range filter.
++
+![Set time range filter](./media/workbook-risk-analysis/time-range-filter.png)
+
+There are more filters in the risk detection trends and risky users sections.
+
+Risk Detection Trends:
+
+- Detection timing type (real-time or offline)
+
+- Risk level (low, medium, high, or none)
+
+Risky Users:
+
+- Risk detail (which indicates what changed a user's risk level)
+
+- Risk level (low, medium, high, or none)
++
+## Best practices
++
+- **[Enable risky sign-in policies](../identity-protection/concept-identity-protection-policies.md)** - To prompt for multi-factor authentication (MFA) on medium risk or above. Enabling the policy reduces the proportion of active real-time risk detections by allowing legitimate users to self-remediate the risk detections with MFA.
+
+- **[Enable a risky user policy](../identity-protection/howto-identity-protection-configure-risk-policies.md#user-risk-with-conditional-access)** - To enable users to securely remediate their accounts when they are high risk. Enabling the policy reduces the number of active at-risk users in your organization by returning the user's credentials to a safe state.
+++++
+## Next steps
+
+- To learn more about identity protection, see [What is identity protection](../identity-protection/overview-identity-protection.md).
+
+- For more information about Azure AD workbooks, see [How to use Azure AD workbooks](howto-use-azure-monitor-workbooks.md).
+
active-directory Joyn Fsm Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/joyn-fsm-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Joyn FSM to support provisioning with Azure AD
-Contact your [SevenLakes Customer Success Representative](mailto:mailto:CustomerSuccessTeam@sevenlakes.com) in order to obtain the Tenant URL and Secret Token which are required for configuring provisioning.
+Contact your [SevenLakes Customer Success Representative](mailto:CustomerSuccessTeam@sevenlakes.com) in order to obtain the Tenant URL and Secret Token which are required for configuring provisioning.
## Step 3. Add Joyn FSM from the Azure AD application gallery
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
The scenario we use to explain how VCs work involves:
-Today, Alice provides a username and password to log onto Woodgrove's networked environment. Woodgrove is deploying a VC solution to provide a more manageable way for Alice to prove she is an employee of Woodgrove. Proseware is using a VC solution compatible with Woodgrove's VC solution and they accept credentials issued by Woodgrove as proof of employment.
+Today, Alice provides a username and password to log onto Woodgrove's networked environment. Woodgrove is deploying a verifiable credential solution to provide a more manageable way for Alice to prove that she is an employee of Woodgrove. Proseware accepts verifiable credentials issued by Woodgrove as proof of employment, offering corporate discounts as part of its discount program.
-The issuer of the credential, Woodgrove Inc., creates a public key and a private key. The public key is stored on ION. When the key is added to the infrastructure, the entry is recorded in a blockchain-based decentralized ledger. The issuer provides Alice the private key that is stored in a wallet application. Each time Alice successfully uses the private key the transaction is logged in the wallet application.
+Alice requests a proof of employment verifiable credential from Woodgrove Inc. Woodgrove Inc attests Alice's identity and issues a signed verifiable credential that Alice can accept and store in her digital wallet application. Alice can now present this verifiable credential as proof of employment on the Proseware site. After a successful presentation of the credential, Proseware offers a discount to Alice, and the transaction is logged in Alice's wallet application so that she can track where and to whom she has presented her proof of employment verifiable credential.
![microsoft-did-overview](media/decentralized-identifier-overview/did-overview.png)
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
First, get the AKS cluster credentials using the [az aks get-credentials][az-aks
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
-Next, use the [kubectl describe][kubectl-describe] command to list the nodes in the cluster and filter on the *failure-domain.beta.kubernetes.io/zone* value. The following example is for a Bash shell.
+Next, use the [kubectl describe][kubectl-describe] command to list the nodes in the cluster and filter on the `topology.kubernetes.io/zone` value. The following example is for a Bash shell.
```console
-kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"
+kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"
``` The following example output shows the three nodes distributed across the specified region and availability zones, such as *eastus2-1* for the first availability zone and *eastus2-2* for the second availability zone: ```console Name: aks-nodepool1-28993262-vmss000000
- failure-domain.beta.kubernetes.io/zone=eastus2-1
+ topology.kubernetes.io/zone=eastus2-1
Name: aks-nodepool1-28993262-vmss000001
- failure-domain.beta.kubernetes.io/zone=eastus2-2
+ topology.kubernetes.io/zone=eastus2-2
Name: aks-nodepool1-28993262-vmss000002
- failure-domain.beta.kubernetes.io/zone=eastus2-3
+ topology.kubernetes.io/zone=eastus2-3
``` As you add additional nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones.
aks-nodepool1-34917322-vmss000002 eastus eastus-3
## Verify pod distribution across zones
-As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `failure-domain.beta.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. In order to test this, you can scale up your cluster from 3 to 5 nodes, to verify correct pod spreading:
+As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `topology.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. In order to test this, you can scale up your cluster from 3 to 5 nodes, to verify correct pod spreading:
```azurecli-interactive az aks scale \
az aks scale \
--node-count 5 ```
-When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"` in a Bash shell should give an output similar to this sample:
+When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"` in a Bash shell should give an output similar to this sample:
```console Name: aks-nodepool1-28993262-vmss000000
- failure-domain.beta.kubernetes.io/zone=eastus2-1
+ topology.kubernetes.io/zone=eastus2-1
Name: aks-nodepool1-28993262-vmss000001
- failure-domain.beta.kubernetes.io/zone=eastus2-2
+ topology.kubernetes.io/zone=eastus2-2
Name: aks-nodepool1-28993262-vmss000002
- failure-domain.beta.kubernetes.io/zone=eastus2-3
+ topology.kubernetes.io/zone=eastus2-3
Name: aks-nodepool1-28993262-vmss000003
- failure-domain.beta.kubernetes.io/zone=eastus2-1
+ topology.kubernetes.io/zone=eastus2-1
Name: aks-nodepool1-28993262-vmss000004
- failure-domain.beta.kubernetes.io/zone=eastus2-2
+ topology.kubernetes.io/zone=eastus2-2
``` We now have two additional nodes in zones 1 and 2. You can deploy an application consisting of three replicas. We will use NGINX as an example:
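Picking up the NGINX example, a minimal three-replica deployment might look like the following sketch (the image path and label filters are assumptions; adjust them to your environment):

```shell
# Create a deployment and scale it to three replicas (image reference is an assumption)
kubectl create deployment nginx --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
kubectl scale deployment nginx --replicas=3

# Check which node, and therefore which zone, each replica was scheduled on
kubectl describe pod -l app=nginx | grep -e "^Name:" -e "^Node:"
```

With five nodes spread across three zones, the scheduler should place the three replicas on nodes in different zones.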
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md
If AKS identifies an unhealthy node that remains unhealthy for 10 minutes, AKS t
1. Reboot the node. 1. If the reboot is unsuccessful, reimage the node.
+1. If the reimage is unsuccessful, redeploy the node.
Alternative remediations are investigated by AKS engineers if auto-repair is unsuccessful.
analysis-services Analysis Services Create Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-bicep-file.md
Title: Quickstart - Create an Azure Analysis Services server resource by using Bicep description: Quickstart showing how to an Azure Analysis Services server resource by using a Bicep file. Previously updated : 03/04/2022 Last updated : 03/08/2022
When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to d
# [CLI](#tab/CLI) ```azurecli-interactive
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
+az group delete --name exampleRG
``` # [PowerShell](#tab/PowerShell) ```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-Write-Host "Press [ENTER] to continue..."
+Remove-AzResourceGroup -Name exampleRG
```
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md
Title: 'App Service on Azure Arc' description: An introduction to App Service integration with Azure Arc for Azure operators. Previously updated : 01/31/2022 Last updated : 03/08/2022 # App Service, Functions, and Logic Apps on Azure Arc (Preview)
If your extension was in the stable version and auto-upgrade-minor-version is se
```azurecli-interactive az k8s-extension update --cluster-type connectedClusters -c <clustername> -g <resource group> -n <extension name> --release-train stable --version 0.12.0 ```
+### Application services extension v 0.12.1 (March 2022)
+
+- Resolved issue with outbound proxy support to enable logging to Log Analytics Workspace
+
+If your extension was in the stable version and auto-upgrade-minor-version is set to true, the extension upgrades automatically. To manually upgrade the extension to the latest version, you can run the command:
+
+```azurecli-interactive
+ az k8s-extension update --cluster-type connectedClusters -c <clustername> -g <resource group> -n <extension name> --release-train stable --version 0.12.1
+```
## Next steps
app-service Webjobs Sdk How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-sdk-how-to.md
static async Task Main()
} ```
-For more information, see the [Azure CosmosDB binding](../azure-functions/functions-bindings-cosmosdb-v2-output.md#hostjson-settings) article.
+For more information, see the [Azure CosmosDB binding](../azure-functions/functions-bindings-cosmosdb-v2.md#hostjson-settings) article.
#### Event Hubs trigger configuration (version 3.*x*)
static async Task Main()
} ```
-For more details, see the [Service Bus binding](../azure-functions/functions-bindings-service-bus.md#hostjson-settings) article.
+For more details, see the [Service Bus binding](../azure-functions/functions-bindings-service-bus.md) article.
### Configuration for other bindings
The Azure Functions documentation provides reference information about each bind
* [Packages](../azure-functions/functions-bindings-storage-queue.md). The package you need to install to include support for the binding in a WebJobs SDK project. * [Examples](../azure-functions/functions-bindings-storage-queue-trigger.md). Code samples. The C# class library example applies to the WebJobs SDK. Just omit the `FunctionName` attribute.
-* [Attributes](../azure-functions/functions-bindings-storage-queue-trigger.md#attributes-and-annotations). The attributes to use for the binding type.
+* [Attributes](../azure-functions/functions-bindings-storage-queue-trigger.md#attributes). The attributes to use for the binding type.
* [Configuration](../azure-functions/functions-bindings-storage-queue-trigger.md#configuration). Explanations of the attribute properties and constructor parameters. * [Usage](../azure-functions/functions-bindings-storage-queue-trigger.md#usage). The types you can bind to and information about how the binding works. For example: polling algorithm, poison queue processing.
application-gateway Classic To Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/classic-to-resource-manager.md
For more information on how to set up an Application Gateway resource after VNet
* [Deployment via Azure CLI](quick-create-cli.md) * [Deployment via ARM template](quick-create-template.md)
+## Common questions
+
+### What is Azure Service Manager, and what does "classic" mean?
+
+The word "classic" in classic networking service refers to networking resources managed by Azure Service Manager (ASM). Azure Service Manager is the old control plane of Azure, responsible for creating, managing, and deleting VMs and performing other control plane operations.
+
+### What is Azure Resource Manager?
+
+Azure Resource Manager is the latest control plane of Azure, responsible for creating, managing, and deleting VMs and performing other control plane operations.
+
+### Where can I find more information regarding classic to Azure Resource Manager migration?
+
+Refer to [Frequently asked questions about classic to Azure Resource Manager migration](../virtual-machines/migration-classic-resource-manager-faq.yml).
+
+### How do I report an issue?
+
+Post your issues and questions about migration to our [Microsoft Q&A page](https://aka.ms/AAflal1). We recommend posting all your questions on this forum. If you have a support contract, you're welcome to log a support ticket as well.
+ ## Next steps To get started see: [platform-supported migration of IaaS resources from classic to Resource Manager](../virtual-machines/migration-classic-resource-manager-ps.md)
application-gateway How Application Gateway Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-application-gateway-works.md
When an application gateway sends the original request to the backend server, it
### Modifications to the request
-Application gateway inserts five additional headers to all requests before it forwards the requests to the backend. These headers are x-forwarded-for, x-forwarded-proto, x-forwarded-port, x-original-host, and x-appgw-trace-id. The format for x-forwarded-for header is a comma-separated list of IP:port.
+Application gateway inserts six additional headers to all requests before it forwards the requests to the backend. These headers are x-forwarded-for, x-forwarded-port, x-forwarded-proto, x-original-host, x-original-url, and x-appgw-trace-id. The format for x-forwarded-for header is a comma-separated list of IP:port.
The valid values for x-forwarded-proto are HTTP or HTTPS. X-forwarded-port specifies the port where the request reached the application gateway. X-original-host header contains the original host header with which the request arrived. This header is useful in Azure website integration, where the incoming host header is modified before traffic is routed to the backend. If session affinity is enabled as an option, then it adds a gateway-managed affinity cookie.
-x-appgw-trace-id is a unique guid generated by application gateway for each client request and presented in the forwarded request to the backend pool member. The guid consists of 32 alphanumeric characters presented without dashes (for example: ac882cd65a2712a0fe1289ec2bb6aee7). This guid can be used to correlate a request received by application gateway and initiated to a backend pool member via the transactionId property in [Diagnostic Logs](application-gateway-diagnostics.md#diagnostic-logging).
+X-appgw-trace-id is a unique guid generated by application gateway for each client request and presented in the forwarded request to the backend pool member. The guid consists of 32 alphanumeric characters presented without dashes (for example: ac882cd65a2712a0fe1289ec2bb6aee7). This guid can be used to correlate a request received by application gateway and initiated to a backend pool member via the transactionId property in [Diagnostic Logs](application-gateway-diagnostics.md#diagnostic-logging).
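For illustration, a request forwarded to a backend pool member might carry headers along these lines (all values here are hypothetical):

```
X-Forwarded-For: 203.0.113.24:45512
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Original-Host: www.contoso.com
X-Original-URL: /api/orders?id=5
X-AppGw-Trace-Id: ac882cd65a2712a0fe1289ec2bb6aee7
```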
You can configure application gateway to modify request and response headers and URL by using [Rewrite HTTP headers and URL](rewrite-http-headers-url.md) or to modify the URI path by using a path-override setting. However, unless configured to do so, all incoming requests are proxied to the backend.
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
Application Gateway supports certificates referenced in Key Vault via the Role-b
> [!Note] > Specifying Azure Key Vault certificates that are subject to the role-based access control permission model is not supported via the portal.
-In this example, we'll use PowerShell to reference a new Key Vault certificate.
+In this example, we'll use PowerShell to reference a new Key Vault secret.
``` # Get the Application Gateway we want to modify $appgw = Get-AzApplicationGateway -Name MyApplicationGateway -ResourceGroupName MyResourceGroup
automation Delete Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-account.md
To recover an Automation account, ensure that the following conditions are met:
- Before you attempt to recover a deleted Automation account, ensure that resource group for that account exists. > [!NOTE]
-> * If the resource group of the Automation account is deleted, to recover, you must recreate the resource group with the same name. After a few hours, the Automation account is repopulated in the list of deleted accounts. Then you can restore the account.
+> * If the resource group of the Automation account is deleted, you must recreate a resource group with the same name before you can recover the account.
> * Though the resource group isn't present, you can see the Automation account in the deleted list. If the resource group isn't present, the account restore operation fails with the error *Account restore failed*. ### Recover a deleted Automation account
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Previously updated : 02/25/2022 Last updated : 03/08/2022 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature. - # Release notes - Azure Arc-enabled data services This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## March 2022
+
+This release is published March 8, 2022.
+
+### Image tag
+
+`v1.4.1_2022-03-08`
+
+For complete release version information, see [Version log](version-log.md).
+
+### Data Controller
+- Fixed the issue "ConfigMap sql-config-[SQL MI] does not exist" from the February 2022 release. This issue occurs when deploying a SQL Managed Instance with service type of `loadBalancer` with certain load balancers.
+
+### SQL Managed Instance
+
+- Support for readable secondary replicas:
+ - To set readable secondary replicas use `--readable-secondaries` when you create or update an Arc-enabled SQL Managed Instance deployment.
+ - Set `--readable-secondaries` to any value between 0 and the number of replicas minus 1.
+ - `--readable-secondaries` only applies to Business Critical tier.
+- Automatic backups are taken on the primary instance in a Business Critical service tier when there are multiple replicas. When a failover happens, backups move to the new primary.
+- RWX capable storage class is required for backups, for both General Purpose and Business Critical service tiers.
+- Billing support when using multiple read replicas.
+
+For additional information about service tiers, see [High Availability with Azure Arc-enabled SQL Managed Instance (preview)](managed-instance-high-availability.md).
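Assuming the `arcdata` CLI extension syntax, configuring a readable secondary might look like this sketch (the command and parameters other than `--readable-secondaries` are assumptions):

```shell
# Sketch: update an existing Business Critical instance to expose one readable secondary
az sql mi-arc update --name my-sql-mi --k8s-namespace arc --use-k8s --readable-secondaries 1
```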
+
+### User experience improvements
+
+The following improvements are available in [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+
+- Azure Arc and Azure CLI extensions now generally available.
+- Changed edit commands for SQL Managed Instance for Azure Arc dashboard to use `update`, reflecting Azure CLI changes. This works in indirect or direct mode.
+- Data controller deployment wizard step for connectivity mode is now earlier in the process.
+- Removed an extra backups field in SQL MI deployment wizard.
+ ## February 2022 This release is published February 25, 2022.
This release is published February 25, 2022.
For complete release version information, see [Version log](version-log.md).
+> [!CAUTION]
+> There is a known issue with this release where deployment of Arc SQL MI hangs, and sends the controldb pods of Arc Data Controller into a
+> `CrashLoopBackOff` state, when the SQL MI is deployed with `loadBalancer` service type. This issue is fixed in a release on March 08, 2022.
+ ### SQL Managed Instance - Support for readable secondary replicas:
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
Previously updated : 02/25/2022 Last updated : 03/08/2022 # Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
This article identifies the component versions with each release of Azure Arc-enabled data services.
+### March 08, 2022
+
+|Component |Value |
+|--||
+|Container images tag |`v1.4.1_2022-03-08`
+|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2, v3</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1|
+|ARM API version|2021-11-01|
+|`arcdata` Azure CLI extension version| 1.2.1|
+|Arc enabled Kubernetes helm chart extension version|1.1.18791000|
+|Arc Data extension for Azure Data Studio|1.0|
+ ### February 25, 2022 |Component |Value |
azure-functions Event Driven Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-driven-scaling.md
Scaling can vary on a number of factors, and scale differently based on the trig
* **Maximum instances:** A single function app only scales out to a maximum of 200 instances. A single instance may process more than one message or request at a time though, so there isn't a set limit on number of concurrent executions. You can [specify a lower maximum](#limit-scale-out) to throttle scale as required. * **New instance rate:** For HTTP triggers, new instances are allocated, at most, once per second. For non-HTTP triggers, new instances are allocated, at most, once every 30 seconds. Scaling is faster when running in a [Premium plan](functions-premium-plan.md).
-* **Scale efficiency:** For Service Bus triggers, use _Manage_ rights on resources for the most efficient scaling. With _Listen_ rights, scaling isn't as accurate because the queue length can't be used to inform scaling decisions. To learn more about setting rights in Service Bus access policies, see [Shared Access Authorization Policy](../service-bus-messaging/service-bus-sas.md#shared-access-authorization-policies). For Event Hub triggers, see the [scaling guidance](functions-bindings-event-hubs-trigger.md#scaling) in the reference article.
+* **Scale efficiency:** For Service Bus triggers, use _Manage_ rights on resources for the most efficient scaling. With _Listen_ rights, scaling isn't as accurate because the queue length can't be used to inform scaling decisions. To learn more about setting rights in Service Bus access policies, see [Shared Access Authorization Policy](../service-bus-messaging/service-bus-sas.md#shared-access-authorization-policies). For Event Hub triggers, see [this scaling guidance](#event-hubs-trigger).
## Limit scale out
$resource.Properties.functionAppScaleLimit = <SCALE_LIMIT>
$resource | Set-AzResource -Force ```
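The same `functionAppScaleLimit` property can also be set from the Azure CLI; a sketch, assuming the standard `Microsoft.Web/sites/config` resource path:

```shell
# Sketch: set the scale-out limit on the function app's web config resource
az resource update --resource-type Microsoft.Web/sites \
  --resource-group <RESOURCE_GROUP> \
  --name <FUNCTION_APP-NAME>/config/web \
  --set properties.functionAppScaleLimit=<SCALE_LIMIT>
```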
+## Event Hubs trigger
+
+This section describes how scaling behaves when your function uses an [Event Hubs trigger](functions-bindings-event-hubs-trigger.md) or an [IoT Hub trigger](functions-bindings-event-iot-trigger.md). In these cases, each instance of an event-triggered function is backed by a single [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance. The trigger (powered by Event Hubs) ensures that only one [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance can get a lease on a given partition.
+
+For example, consider an Event Hub as follows:
+
+* 10 partitions
+* 1,000 events distributed evenly across all partitions, with 100 messages in each partition
+
+When your function is first enabled, there is only one instance of the function. Let's call the first function instance `Function_0`. The `Function_0` function has a single instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) that holds a lease on all ten partitions. This instance is reading events from partitions 0-9. From this point forward, one of the following happens:
+
+* **New function instances are not needed**: `Function_0` is able to process all 1,000 events before the Functions scaling logic takes effect. In this case, all 1,000 messages are processed by `Function_0`.
+
+* **An additional function instance is added**: If the Functions scaling logic determines that `Function_0` has more messages than it can process, a new function app instance (`Function_1`) is created. This new function also has an associated instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor). When the underlying Event Hubs service detects that a new host instance is trying to read messages, it load balances the partitions across the host instances. For example, partitions 0-4 may be assigned to `Function_0` and partitions 5-9 to `Function_1`.
+
+* **N more function instances are added**: If the Functions scaling logic determines that both `Function_0` and `Function_1` have more messages than they can process, new `Functions_N` function app instances are created. Instances are created until `N` is greater than the number of event hub partitions. In our example, Event Hubs again load balances the partitions, in this case across the instances `Function_0`...`Functions_9`.
+
+As scaling occurs, the number of instances `N` can exceed the number of event hub partitions. This pattern ensures that [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instances are available to obtain locks on partitions as they become available from other instances. You are only charged for the resources used when the function instance executes. In other words, you are not charged for this over-provisioning.
+
+When all function execution completes (with or without errors), checkpoints are added to the associated storage account. Once checkpointing succeeds, the 1,000 messages are never retrieved again.
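The lease-balancing behavior described above can be illustrated with a small standalone sketch (this models the idea of even partition distribution, not the Functions runtime's actual algorithm):

```python
# Sketch (not the Functions runtime): evenly distribute partition leases
# across host instances, illustrating how 10 partitions are rebalanced
# as instances are added.
def balance_partitions(partition_count, instance_count):
    """Assign each partition to an instance, as evenly as an
    EventProcessorHost-style load balancer would. Instances beyond the
    partition count hold no leases."""
    assignments = {i: [] for i in range(instance_count)}
    for p in range(partition_count):
        assignments[p % instance_count].append(p)
    return assignments

# One instance (Function_0) owns all ten partitions ...
assert balance_partitions(10, 1)[0] == list(range(10))

# ... two instances split them five and five ...
two = balance_partitions(10, 2)
assert len(two[0]) == len(two[1]) == 5

# ... and an 11th instance is over-provisioned: it holds zero leases
# until a partition becomes available from another instance.
assert balance_partitions(10, 11)[10] == []
```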
+ ## Best practices and patterns for scalable apps There are many aspects of a function app that impacts how it scales, including host configuration, runtime footprint, and resource efficiency. For more information, see the [scalability section of the performance considerations article](performance-reliability.md#scalability-best-practices). You should also be aware of how connections behave as your function app scales. For more information, see [How to manage connections in Azure Functions](manage-connections.md).
azure-functions Event Grid How Tos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-grid-how-tos.md
+
+ Title: How to work with Event Grid triggers and bindings in Azure Functions
+description: Contains various procedures for integrating Azure Event Grid and Azure Functions using triggers and bindings.
Last updated : 10/12/2021
+# How to work with Event Grid triggers and bindings in Azure Functions
+
+Azure Functions provides built-in integration with Azure Event Grid by using [triggers and bindings](functions-triggers-bindings.md). This article shows you how to configure and locally evaluate your Event Grid trigger and bindings. For more information about Event Grid trigger and output binding definitions and examples, see one of the following reference articles:
++ [Azure Event Grid bindings for Azure Functions](functions-bindings-event-grid.md)
++ [Azure Event Grid trigger for Azure Functions](functions-bindings-event-grid-trigger.md)
++ [Azure Event Grid output binding for Azure Functions](functions-bindings-event-grid-output.md)
+
+## Create a subscription
+
+To start receiving Event Grid HTTP requests, create an Event Grid subscription that specifies the endpoint URL that invokes the function.
+
+### Azure portal
+
+For functions that you develop in the Azure portal with the Event Grid trigger, select **Integration** then choose the **Event Grid Trigger** and select **Create Event Grid subscription**.
++
+When you select this link, the portal opens the **Create Event Subscription** page with the current trigger endpoint already defined.
++
+For more information about how to create subscriptions by using the Azure portal, see [Create custom event - Azure portal](../event-grid/custom-event-quickstart-portal.md) in the Event Grid documentation.
+
+### Azure CLI
+
+To create a subscription by using [the Azure CLI](/cli/azure/get-started-with-azure-cli), use the [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) command.
+
+The command requires the endpoint URL that invokes the function, and the endpoint varies between version 1.x of the Functions runtime and later versions. The following example shows the version-specific URL pattern:
+
+# [v2.x+](#tab/v2)
+
+```http
+https://{functionappname}.azurewebsites.net/runtime/webhooks/eventgrid?functionName={functionname}&code={systemkey}
+```
+
+# [v1.x](#tab/v1)
+
+```http
+https://{functionappname}.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName={functionname}&code={systemkey}
+```
++
+The system key is an authorization key that has to be included in the endpoint URL for an Event Grid trigger. The following section explains how to get the system key.
+
+Here's an example that subscribes to a blob storage account (with a placeholder for the system key):
+
+# [Bash](#tab/bash/v2)
+
+```azurecli
+az eventgrid resource event-subscription create -g myResourceGroup \
+ --provider-namespace Microsoft.Storage --resource-type storageAccounts \
+ --resource-name myblobstorage12345 --name myFuncSub \
+ --included-event-types Microsoft.Storage.BlobCreated \
+ --subject-begins-with /blobServices/default/containers/images/blobs/ \
+ --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key>
+```
+
+# [Cmd](#tab/cmd/v2)
+
+```azurecli
+az eventgrid resource event-subscription create -g myResourceGroup ^
+ --provider-namespace Microsoft.Storage --resource-type storageAccounts ^
+ --resource-name myblobstorage12345 --name myFuncSub ^
+ --included-event-types Microsoft.Storage.BlobCreated ^
+ --subject-begins-with /blobServices/default/containers/images/blobs/ ^
+ --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key>
+```
+
+# [Bash](#tab/bash/v1)
+
+```azurecli
+az eventgrid resource event-subscription create -g myResourceGroup \
+ --provider-namespace Microsoft.Storage --resource-type storageAccounts \
+ --resource-name myblobstorage12345 --name myFuncSub \
+ --included-event-types Microsoft.Storage.BlobCreated \
+ --subject-begins-with /blobServices/default/containers/images/blobs/ \
+ --endpoint https://mystoragetriggeredfunction.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName=imageresizefunc&code=<key>
+```
+
+# [Cmd](#tab/cmd/v1)
+
+```azurecli
+az eventgrid resource event-subscription create -g myResourceGroup ^
+ --provider-namespace Microsoft.Storage --resource-type storageAccounts ^
+ --resource-name myblobstorage12345 --name myFuncSub ^
+ --included-event-types Microsoft.Storage.BlobCreated ^
+ --subject-begins-with /blobServices/default/containers/images/blobs/ ^
+ --endpoint "https://mystoragetriggeredfunction.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName=imageresizefunc&code=<key>"
+```
+++
+For more information about how to create a subscription, see [the blob storage quickstart](../storage/blobs/storage-blob-event-quickstart.md#subscribe-to-your-storage-account) or the other Event Grid quickstarts.
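As a rough sketch of the two endpoint shapes used in the commands above (the app name, function name, and key are placeholders, and the helper name is hypothetical), the webhook URL for either runtime version can be composed like this:

```python
from urllib.parse import quote

def eventgrid_webhook_url(app_name, function_name, system_key, runtime_v2=True):
    """Compose the Event Grid webhook endpoint for a function app.

    Only the path shapes come from the CLI examples above; all names
    passed in are placeholders for your own values.
    """
    path = ("runtime/webhooks/eventgrid" if runtime_v2
            else "admin/extensions/EventGridExtensionConfig")
    return (f"https://{app_name}.azurewebsites.net/{path}"
            f"?functionName={quote(function_name)}&code={quote(system_key, safe='')}")
```

Note that only the v1.x runtime uses the `admin/extensions/EventGridExtensionConfig` path; v2.x and later use `runtime/webhooks/eventgrid`.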
+
+### Get the system key
+
+You can get the system key by using the following API (HTTP GET):
+
+# [v2.x+](#tab/v2)
+
+```
+http://{functionappname}.azurewebsites.net/admin/host/systemkeys/eventgrid_extension?code={masterkey}
+```
+
+# [v1.x](#tab/v1)
+
+```
+http://{functionappname}.azurewebsites.net/admin/host/systemkeys/eventgridextensionconfig_extension?code={masterkey}
+```
+++
+This REST API is an administrator API, so it requires your function app [master key](functions-bindings-http-webhook-trigger.md#authorization-keys). Don't confuse the system key (for invoking an Event Grid trigger function) with the master key (for performing administrative tasks on the function app). When you subscribe to an Event Grid topic, be sure to use the system key.
+
+Here's an example of the response that provides the system key:
+
+```
+{
+ "name": "eventgridextensionconfig_extension",
+ "value": "{the system key for the function}",
+ "links": [
+ {
+ "rel": "self",
+ "href": "{the URL for the function, without the system key}"
+ }
+ ]
+}
+```
+
+You can get the master key for your function app from the **Function app settings** tab in the portal.
+
+> [!IMPORTANT]
+> The master key provides administrator access to your function app. Don't share this key with third parties or distribute it in native client applications.
+
+For more information, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys) in the HTTP trigger reference article.
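If you script the key retrieval, the value you need is the `value` field of the JSON response shown above. A minimal sketch of extracting it (the response body below is a placeholder, not a real key):

```python
import json

# Sample response in the shape shown above; the key and URL are placeholders.
response_body = """
{
  "name": "eventgridextensionconfig_extension",
  "value": "abc123-system-key",
  "links": [
    {"rel": "self", "href": "https://myapp.azurewebsites.net/..."}
  ]
}
"""

payload = json.loads(response_body)
system_key = payload["value"]  # use this key in the subscription endpoint URL
```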
+
+## Local testing with viewer web app
+
+To test an Event Grid trigger locally, you have to get Event Grid HTTP requests delivered from their origin in the cloud to your local machine. One way to do that is by capturing requests online and manually resending them on your local machine:
+
+1. [Create a viewer web app](#create-a-viewer-web-app) that captures event messages.
+1. [Create an Event Grid subscription](#create-an-event-grid-subscription) that sends events to the viewer app.
+1. [Generate a request](#generate-a-request) and copy the request body from the viewer app.
+1. [Manually post the request](#manually-post-the-request) to the localhost URL of your Event Grid trigger function.
+
+When you're done testing, you can use the same subscription for production by updating the endpoint. Use the [az eventgrid event-subscription update](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_update) Azure CLI command.
+
+### Create a viewer web app
+
+To simplify capturing event messages, you can deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+
+Select **Deploy to Azure** to deploy the solution to your subscription. In the Azure portal, provide values for the parameters.
+
+[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fazure-event-grid-viewer%2Fmaster%2Fazuredeploy.json)
+
+The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
+`https://<your-site-name>.azurewebsites.net`
+
+You see the site, but no events have been posted to it yet.
+
+![View new site](media/functions-bindings-event-grid/view-site.png)
+
+### Create an Event Grid subscription
+
+Create an Event Grid subscription of the type you want to test, and provide the URL of your web app as the endpoint for event notification. The endpoint for your web app must include the suffix `/api/updates/`, so the full URL is `https://<your-site-name>.azurewebsites.net/api/updates`.
+
+For information about how to create subscriptions by using the Azure portal, see [Create custom event - Azure portal](../event-grid/custom-event-quickstart-portal.md) in the Event Grid documentation.
+
+### Generate a request
+
+Trigger an event that will generate HTTP traffic to your web app endpoint. For example, if you created a blob storage subscription, upload or delete a blob. When a request shows up in your web app, copy the request body.
+
+The subscription validation request will be received first; ignore any validation requests, and copy the event request.
+
+![Copy request body from web app](media/functions-bindings-event-grid/view-results.png)
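When sorting through captured requests, the validation handshake is easy to identify by its event type, `Microsoft.EventGrid.SubscriptionValidationEvent`. A minimal sketch of separating it from real notifications (the sample events below are illustrative, not captured from a live subscription):

```python
# Separate the subscription validation handshake from real event notifications.
captured_events = [
    {"eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
     "data": {"validationCode": "0000"}},  # handshake: ignore when replaying
    {"eventType": "Microsoft.Storage.BlobCreated",
     "data": {"url": "https://myblobstorage12345.blob.core.windows.net/images/pic.jpg"}},
]

notifications = [e for e in captured_events
                 if e["eventType"] != "Microsoft.EventGrid.SubscriptionValidationEvent"]
```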
+
+### Manually post the request
+
+Run your Event Grid function locally. The `Content-Type` and `aeg-event-type` headers must be set manually; all other values can be left at their defaults.
+
+Use a tool such as [Postman](https://www.getpostman.com/) or [curl](https://curl.haxx.se/docs/httpscripting.html) to create an HTTP POST request:
+
+* Set a `Content-Type: application/json` header.
+* Set an `aeg-event-type: Notification` header.
+* Paste the data you copied from the viewer app into the request body.
+* Post to the URL of your Event Grid trigger function.
+
+ # [v2.x+](#tab/v2)
+
+ ```
+ http://localhost:7071/runtime/webhooks/eventgrid?functionName={FUNCTION_NAME}
+ ```
+
+ # [v1.x](#tab/v1)
+
+ ```
+ http://localhost:7071/admin/extensions/EventGridExtensionConfig?functionName={FUNCTION_NAME}
+ ```
+
+
+
+The `functionName` parameter must be the name specified in the `FunctionName` attribute.
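The same manual POST can be scripted instead of using Postman or curl. This is a sketch only: the function name is a placeholder, the body should be replaced with the event data you copied from the viewer app, and the request is built but not sent here.

```python
import json
import urllib.request

function_name = "imageresizefunc"  # must match the FunctionName attribute
url = f"http://localhost:7071/runtime/webhooks/eventgrid?functionName={function_name}"

# Placeholder body; paste the captured event request here instead.
body = json.dumps([{"eventType": "Microsoft.Storage.BlobCreated", "data": {}}]).encode()

request = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json", "aeg-event-type": "Notification"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment with the Functions host running locally
```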
+
+The following screenshots show the headers and request body in Postman:
+
+![Headers in Postman](media/functions-bindings-event-grid/postman2.png)
+
+![Request body in Postman](media/functions-bindings-event-grid/postman.png)
+
+The Event Grid trigger function executes and shows logs similar to the following example:
+
+![Sample Event Grid trigger function logs](media/functions-bindings-event-grid/eg-output.png)
+
+## Next steps
+
+To learn more about Event Grid with Functions, see the following articles:
+
++ [Azure Event Grid bindings for Azure Functions](functions-bindings-event-grid.md)
++ [Tutorial: Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md?toc=%2fazure%2fazure-functions%2ftoc.json)
azure-functions Event Messaging Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-messaging-bindings.md
+
+ Title: Connect to eventing and messaging services in Azure Functions
+description: Provides an overview of how to connect your functions to other messaging and event-driven services in Azure, such as Azure Service Bus, Azure Event Grid, and Azure Event Hubs.
Last updated : 10/20/2021
+# Connect to eventing and messaging services from Azure Functions
+
+As a cloud computing service, Azure Functions is frequently used to move data between various Azure services. To make it easier for you to connect your code to other services, Functions implements a set of binding extensions to connect to these services. To learn more, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
+
+By definition, Azure Functions executions are stateless. If you need to connect your code to services in a more stateful way, consider instead using [Durable Functions](durable/durable-functions-overview.md) or [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
+
+Triggers and bindings are provided to make consuming and emitting data easier. There may be cases where you need more control over the service connection, or you just feel more comfortable using a client library provided by a service SDK. In those cases, you can use a client instance from the SDK in your function execution to access the service as you normally would. When you use a client directly, pay attention to the effect of scale and performance on client connections. To learn more, see the [guidance on using static clients](manage-connections.md#static-clients).
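The static-client guidance amounts to creating the client once, at module scope, and reusing it across invocations rather than constructing a new one per call. A minimal sketch of the pattern, where `ExpensiveClient` is a hypothetical stand-in for any service SDK client:

```python
class ExpensiveClient:
    """Hypothetical SDK client; real ones hold sockets and connection pools."""
    instances_created = 0

    def __init__(self):
        ExpensiveClient.instances_created += 1

# Created once when the worker loads the module, not per invocation.
_client = ExpensiveClient()

def handle_event(event):
    # Every invocation reuses the same client (and its connections).
    return _client

first = handle_event({"id": 1})
second = handle_event({"id": 2})
```

Under load, constructing the client inside the handler instead would exhaust connections; the module-level instance avoids that.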
+
+You can't obtain the client instance used by a service binding from your function execution.
+
+The rest of this article provides specific guidance for integrating your code with the specific Azure services supported by Functions.
+
+## Event Grid
++
+Azure Functions provides built-in integration with Azure Event Grid by using [triggers and bindings](functions-triggers-bindings.md).
+
+To learn how to configure and locally evaluate your Event Grid trigger and bindings, see [How to work with Event Grid triggers and bindings in Azure Functions](event-grid-how-tos.md).
+
+For more information about Event Grid trigger and output binding definitions and examples, see one of the following reference articles:
+
++ [Azure Event Grid bindings for Azure Functions](functions-bindings-event-grid.md)
++ [Azure Event Grid trigger for Azure Functions](functions-bindings-event-grid-trigger.md)
++ [Azure Event Grid output binding for Azure Functions](functions-bindings-event-grid-output.md)
+## Next steps
+
+To learn more about Event Grid with Functions, see the following articles:
+
++ [Azure Event Grid bindings for Azure Functions](functions-bindings-event-grid.md)
++ [Tutorial: Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md?toc=%2fazure%2fazure-functions%2ftoc.json)
azure-functions Functions Add Output Binding Storage Queue Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-java.md
gradle azureFunctionsRun
> [!NOTE]
-> Because you enabled extension bundles in the host.json, the [Storage binding extension](functions-bindings-storage-blob.md#add-to-your-functions-app) was downloaded and installed for you during startup, along with the other Microsoft binding extensions.
+> Because you enabled extension bundles in the host.json, the [Storage binding extension](functions-bindings-storage-blob.md#install-extension) was downloaded and installed for you during startup, along with the other Microsoft binding extensions.
As before, trigger the function from the command line using cURL in a new terminal window:
azure-functions Functions Add Output Binding Storage Queue Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs.md
Because you're using a Queue storage output binding, you need the Storage bindin
``` # [Isolated process](#tab/isolated-process) ```bash
- Install-Package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues -IncludePrerelease
+ Install-Package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues -IncludePrerelease
```
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
Title: Azure Cosmos DB input binding for Functions 2.x and higher description: Learn to use the Azure Cosmos DB input binding in Azure Functions.- Previously updated : 09/01/2021- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Cosmos DB input binding for Azure Functions 2.x and higher
The Azure Cosmos DB input binding uses the SQL API to retrieve one or more Azure
For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md). > [!NOTE]
-> If the collection is [partitioned](../cosmos-db/partitioning-overview.md#logical-partitions), lookup operations need to also specify the partition key value.
+> When the collection is [partitioned](../cosmos-db/partitioning-overview.md#logical-partitions), lookup operations must also specify the partition key value.
>
-<a id="example" name="example"></a>
+## Example
-# [C#](#tab/csharp)
+Unless otherwise noted, examples in this article target version 3.x of the [Azure Cosmos DB extension](functions-bindings-cosmosdb-v2.md). For use with extension version 4.x, you need to replace the string `collection` in property and attribute names with `container`.
-This section contains the following examples:
++
+# [In-process](#tab/in-process)
+
+This section contains the following examples for using [in-process C# class library functions](functions-dotnet-class-library.md) with extension version 3.x:
* [Queue trigger, look up ID from JSON](#queue-trigger-look-up-id-from-json-c) * [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c)
namespace CosmosDBSamplesV2
### HTTP trigger, get multiple docs, using CosmosClient
-The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `CosmosClient` instance provided by the Azure Cosmos DB binding, available in [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher), to read a list of documents. The `CosmosClient` instance could also be used for write operations.
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `CosmosClient` instance provided by the Azure Cosmos DB binding, available in [extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4), to read a list of documents. The `CosmosClient` instance could also be used for write operations.
```csharp using System.Linq;
namespace CosmosDBSamplesV2
} ```
+# [Isolated process](#tab/isolated-process)
+
+Example pending.
+ # [C# Script](#tab/csharp-script) This section contains the following examples:
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, Docume
} ```
-# [Java](#tab/java)
++ This section contains the following examples:
public class DocsFromRouteSqlQuery {
} ```
-# [JavaScript](#tab/javascript)
This section contains the following examples that read a single document by specifying an ID value from various sources:
module.exports = async function (context, input) {
}; ```
-# [PowerShell](#tab/powershell)
* [Queue trigger, look up ID from JSON](#queue-trigger-look-up-id-from-json-ps) * [HTTP trigger, look up ID from query string](#http-trigger-id-query-string-ps)
foreach ($Document in $Documents) {
} ```
-# [Python](#tab/python)
This section contains the following examples that read a single document by specifying an ID value from various sources:
Here's the binding data in the *function.json* file:
} ```
-The [configuration](#configuration) section explains these properties.
+## Attributes
-Here's the Python code:
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
-```python
-import azure.functions as func
+# [Functions 2.x+](#tab/functionsv2/in-process)
-def main(queuemsg: func.QueueMessage, documents: func.DocumentList):
- for document in documents:
- # operate on each document
-```
-
+# [Extension 4.x+ (preview)](#tab/extensionv4/in-process)
-## Attributes and annotations
-# [C#](#tab/csharp)
+# [Functions 2.x+](#tab/functionsv2/isolated-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [CosmosDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/CosmosDBAttribute.cs) attribute.
-The attribute's constructor takes the database name and collection name. In [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) some settings and properties have been removed or renamed. For information about settings and other properties that you can configure for all versions, see [the following configuration section](#configuration).
+# [Extension 4.x+ (preview)](#tab/extensionv4/isolated-process)
-# [C# Script](#tab/csharp-script)
-Attributes are not supported by C# Script.
+# [Functions 2.x+](#tab/functionsv2/csharp-script)
-# [Java](#tab/java)
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBOutput` annotation on parameters that write to Cosmos DB. The annotation parameter type should be `OutputBinding<T>`, where `T` is either a native Java type or a POJO.
+# [Extension 4.x+ (preview)](#tab/extensionv4/csharp-script)
-# [JavaScript](#tab/javascript)
-Attributes are not supported by JavaScript.
+
-# [PowerShell](#tab/powershell)
+## Annotations
-Attributes are not supported by PowerShell.
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on parameters that read from Azure Cosmos DB. The annotation supports the following properties:
-# [Python](#tab/python)
+
++ [name](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput.name)
++ [connectionStringSetting](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput.connectionstringsetting)
++ [databaseName](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput.databasename)
++ [collectionName](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput.collectionname)
++ [dataType](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput.datatype)
++ [id](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput.id)
++ [partitionKey](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput.partitionkey)
++ [sqlQuery](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput.sqlquery)
-Attributes are not supported by Python.
+## Configuration
-
+The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version:
-## Configuration
+# [Functions 2.x+](#tab/functionsv2)
+
-The following table explains the binding configuration properties that you set in the *function.json* file and the `CosmosDB` attribute.
+# [Extension 4.x+ (preview)](#tab/extensionv4)
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to `cosmosDB`. |
-|**direction** | n/a | Must be set to `in`. |
-|**name** | n/a | Name of the binding parameter that represents the document in the function. |
-|**databaseName** |**DatabaseName** |The database containing the document. |
-|**collectionName** <br> or <br> **containerName**|**CollectionName** <br> or <br> **ContainerName**| The name of the collection that contains the document. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `ContainerName`. |
-|**id** | **Id** | The ID of the document to retrieve. This property supports [binding expressions](./functions-bindings-expressions-patterns.md). Don't set both the `id` and **sqlQuery** properties. If you don't set either one, the entire collection is retrieved. |
-|**sqlQuery** |**SqlQuery** | An Azure Cosmos DB SQL query used for retrieving multiple documents. The property supports runtime bindings, as in this example: `SELECT * FROM c where c.departmentId = {departmentId}`. Don't set both the `id` and `sqlQuery` properties. If you don't set either one, the entire collection is retrieved.|
-|**connectionStringSetting** <br> or <br> **connection** |**ConnectionStringSetting** <br> or <br> **Connection**|The name of the app setting containing your Azure Cosmos DB connection string. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `Connection`. The value is the name of an app setting that either contains the connection string or contains a configuration section or prefix which defines the connection. See [Connections](./functions-reference.md#connections). |
-|**partitionKey**|**PartitionKey**|Specifies the partition key value for the lookup. May include binding parameters. It is required for lookups in [partitioned](../cosmos-db/partitioning-overview.md#logical-partitions) collections.|
-|**preferredLocations**| **PreferredLocations**| (Optional) Defines preferred locations (regions) for geo-replicated database accounts in the Azure Cosmos DB service. Values should be comma-separated. For example, "East US,South Central US,North Europe". |
++
+See the [Example section](#example) for complete examples.
## Usage
-# [C#](#tab/csharp)
+The parameter type supported by the Azure Cosmos DB input binding depends on the Functions runtime version, the extension package version, and the C# modality used.
-When the function exits successfully, any changes made to the input document via named input parameters are automatically persisted.
-# [C# Script](#tab/csharp-script)
+# [Functions 2.x+](#tab/functionsv2/in-process)
-When the function exits successfully, any changes made to the input document via named input parameters are automatically persisted.
-# [Java](#tab/java)
+# [Extension 4.x+ (preview)](#tab/extensionv4/in-process)
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), the [@CosmosDBInput](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput) annotation exposes Cosmos DB data to the function. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
-# [JavaScript](#tab/javascript)
+# [Functions 2.x+](#tab/functionsv2/isolated-process)
-Updates are not made automatically upon function exit. Instead, use `context.bindings.<documentName>In` and `context.bindings.<documentName>Out` to make updates. See the [JavaScript example](#example) for more detail.
-# [PowerShell](#tab/powershell)
+# [Extension 4.x+ (preview)](#tab/extensionv4/isolated-process)
-Updates to documents are not made automatically upon function exit. To update documents in a function use an [output binding](./functions-bindings-cosmosdb-v2-input.md). See the [PowerShell example](#example) for more detail.
+Only JSON string inputs are currently supported.
-# [Python](#tab/python)
+# [Functions 2.x+](#tab/functionsv2/csharp-script)
-Data is made available to the function via a `DocumentList` parameter. Changes made to the document are not automatically persisted.
+
+# [Extension 4.x+ (preview)](#tab/extensionv4/csharp-script)
+
+<!--Any of the below pivots can be combined if the usage info is identical.-->
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), the [@CosmosDBInput](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput) annotation exposes Cosmos DB data to the function. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+Updates are not made automatically upon function exit. Instead, use `context.bindings.<documentName>In` and `context.bindings.<documentName>Out` to make updates. See the [JavaScript example](#example) for more detail.
+Updates to documents are not made automatically upon function exit. To update documents in a function use an [output binding](./functions-bindings-cosmosdb-v2-input.md). See the [PowerShell example](#example) for more detail.
+Data is made available to the function via a `DocumentList` parameter. Changes made to the document are not automatically persisted.
++
## Next steps

- [Run a function when an Azure Cosmos DB document is created or modified (Trigger)](./functions-bindings-cosmosdb-v2-trigger.md)
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
Title: Azure Cosmos DB output binding for Functions 2.x and higher description: Learn to use the Azure Cosmos DB output binding in Azure Functions.- Previously updated : 09/01/2021- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Cosmos DB output binding for Azure Functions 2.x and higher
The Azure Cosmos DB output binding lets you write a new document to an Azure Cos
For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md).
-<a id="example" name="example"></a>
+## Example
-# [C#](#tab/csharp)
+Unless otherwise noted, examples in this article target version 3.x of the [Azure Cosmos DB extension](functions-bindings-cosmosdb-v2.md). For use with extension version 4.x, you need to replace the string `collection` in property and attribute names with `container`.
++
+# [In-process](#tab/in-process)
This section contains the following examples:
namespace CosmosDBSamplesV2
{ public class ToDoItem {
- public string Id { get; set; }
+ public string id { get; set; }
public string Description { get; set; } } }
namespace CosmosDBSamplesV2
### Queue trigger, write one doc (v4 extension)
-Apps using Cosmos DB [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) or higher will have different attribute properties which are shown below. The following example shows a [C# function](functions-dotnet-class-library.md) that adds a document to a database, using data provided in message from Queue storage.
+Apps using Cosmos DB [extension version 4.x] or higher have different attribute properties, which are shown below. The following example shows a [C# function](functions-dotnet-class-library.md) that adds a document to a database, using data provided in a message from Queue storage.
```cs using Microsoft.Azure.WebJobs;
namespace CosmosDBSamplesV2
} ``` +
+# [Isolated process](#tab/isolated-process)
+
+The following code defines a `MyDocument` type:
++
+In the following example, the return type is an [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1), which is a modified list of documents from the trigger binding parameter:
++ # [C# Script](#tab/csharp-script) This section contains the following examples:
namespace CosmosDBSamplesV2
{ public class ToDoItem {
- public string Id { get; set; }
+ public string id { get; set; }
public string Description { get; set; } } }
public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> t
} ```
-# [Java](#tab/java)
++ * [Queue trigger, save message to database via return value](#queue-trigger-save-message-to-database-via-return-value-java) * [HTTP trigger, save one document to database via return value](#http-trigger-save-one-document-to-database-via-return-value-java)
The following example shows a Java function that writes multiple documents to Co
In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBOutput` annotation on parameters that will be written to Cosmos DB. The annotation parameter type should be ```OutputBinding<T>```, where T is either a native Java type or a POJO.
-# [JavaScript](#tab/javascript)
The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
For bulk insert, form the objects first and then run the stringify function. Here
}; ```
-# [PowerShell](#tab/powershell)
The following example shows how to write data to Cosmos DB using an output binding. The binding is declared in the function's configuration file (_function.json_); it takes data from a queue message and writes it out to a Cosmos DB document.
Push-OutputBinding -Name EmployeeDocument -Value @{
} ```
-# [Python](#tab/python)
The following example demonstrates how to write a document to an Azure Cosmos DB database as the output of a function.
def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon
return 'OK' ``` --
-## Attributes and annotations
-
-# [C#](#tab/csharp)
+## Attributes
-In [C# class libraries](functions-dotnet-class-library.md), use the [CosmosDB](https://github.com/Azure/azure-webjobs-sdk-extensions/tree/dev/test/WebJobs.Extensions.CosmosDB.Tests) attribute.
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
-The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [Output - configuration](#configuration). Here's a `CosmosDB` attribute example in a method signature:
+# [Functions 2.x+](#tab/functionsv2/in-process)
-```csharp
- [FunctionName("QueueToDocDB")]
- public static void Run(
- [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
- [CosmosDB("ToDoList", "Items", Id = "id", ConnectionStringSetting = "myCosmosDB")] out dynamic document)
- {
- ...
- }
-```
-In [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) some settings and properties have been removed or renamed. For detailed information about the changes, see [Output - configuration](#configuration). Here's a `CosmosDB` attribute example in a method signature:
+# [Extension 4.x+ (preview)](#tab/extensionv4/in-process)
-```csharp
- [FunctionName("QueueToCosmosDB")]
- public static void Run(
- [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
- [CosmosDB("database", "container", Connection = "CosmosDBConnectionSetting")] out dynamic document)
- {
- ...
- }
-```
-# [C# Script](#tab/csharp-script)
+# [Functions 2.x+](#tab/functionsv2/isolated-process)
-Attributes are not supported by C# Script.
-# [Java](#tab/java)
+# [Extension 4.x+ (preview)](#tab/extensionv4/isolated-process)
-The `CosmosDBOutput` annotation is available to write data to Cosmos DB. You can apply the annotation to the function or to an individual function parameter. When used on the function method, the return value of the function is what is written to Cosmos DB. If you use the annotation with a parameter, the parameter's type must be declared as an `OutputBinding<T>` where `T` a native Java type or a POJO.
-# [JavaScript](#tab/javascript)
+# [Functions 2.x+](#tab/functionsv2/csharp-script)
-Attributes are not supported by JavaScript.
-# [PowerShell](#tab/powershell)
+# [Extension 4.x+ (preview)](#tab/extensionv4/csharp-script)
-Attributes are not supported by PowerShell.
-
-# [Python](#tab/python)
-
-Attributes are not supported by Python.
-
+## Annotations
+
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBOutput` annotation on parameters that write to Azure Cosmos DB. The annotation supports the following properties:
+
++ [name](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.name)
++ [connectionStringSetting](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.connectionstringsetting)
++ [databaseName](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.databasename)
++ [collectionName](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.collectionname)
++ [createIfNotExists](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.createifnotexists)
++ [dataType](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.datatype)
++ [id](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.id)
++ [partitionKey](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.partitionkey)
++ [preferredLocations](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.preferredlocations)
++ [useMultipleWriteLocations](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.usemultiplewritelocations)
-The following table explains the binding configuration properties that you set in the *function.json* file and the `CosmosDB` attribute.
+The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version:
+
+# [Functions 2.x+](#tab/functionsv2)
+
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to `cosmosDB`. |
-|**direction** | n/a | Must be set to `out`. |
-|**name** | n/a | Name of the binding parameter that represents the document in the function. |
-|**databaseName** | **DatabaseName**|The database containing the collection where the document is created. |
-|**collectionName** <br> or <br> **containerName** |**CollectionName** <br> or <br> **ContainerName** | The name of the collection where the document is created. <br><br> In [version 4.x of the extension] this property is called `ContainerName`. |
-|**createIfNotExists** |**CreateIfNotExists** | A boolean value to indicate whether the collection is created when it doesn't exist. The default is *false* because new collections are created with reserved throughput, which has cost implications. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/). |
-|**partitionKey**|**PartitionKey** |When `CreateIfNotExists` is true, it defines the partition key path for the created collection.|
-|**collectionThroughput** <br> or <br> **containerThroughput**|**CollectionThroughput** <br> or <br> **ContainerThroughput**| When `CreateIfNotExists` is true, it defines the [throughput](../cosmos-db/set-throughput.md) of the created collection. <br><br> In [version 4.x of the extension] this property is called `ContainerThroughput`. |
-|**connectionStringSetting** <br> or <br> **connection** |**ConnectionStringSetting** <br> or <br> **Connection**| The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account. See [Connections](#connections) <br><br> In [version 4.x of the extension] this property is called `connection`. |
-|**preferredLocations**| **PreferredLocations**| (Optional) Defines preferred locations (regions) for geo-replicated database accounts in the Azure Cosmos DB service. Values should be comma-separated. For example, "East US,South Central US,North Europe". |
-|**useMultipleWriteLocations**| **UseMultipleWriteLocations**| (Optional) When set to `true` along with `PreferredLocations`, it can leverage [multi-region writes](../cosmos-db/how-to-manage-database-account.md#configure-multiple-write-regions) in the Azure Cosmos DB service. <br><br> This property is not available in [version 4.x of the extension]. |
+# [Extension 4.x+ (preview)](#tab/extensionv4)
+
+See the [Example section](#example) for complete examples.
## Usage
-By default, when you write to the output parameter in your function, a document is created in your database. This document has an automatically generated GUID as the document ID. You can specify the document ID of the output document by specifying the `id` property in the JSON object passed to the output parameter.
+By default, when you write to the output parameter in your function, a document is created in your database. This document has an automatically generated GUID as the document ID. You can specify the document ID of the output document by specifying the `id` property in the JSON object passed to the output parameter.
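The ID rule can be illustrated with a plain Java helper — a hypothetical sketch of the binding's behavior, not part of the binding API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class EnsureId {
    // Keep a caller-supplied id; otherwise generate a GUID,
    // mirroring what the output binding does for new documents.
    public static Map<String, Object> ensureId(Map<String, Object> doc) {
        doc.putIfAbsent("id", UUID.randomUUID().toString());
        return doc;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("description", "get milk");
        System.out.println(ensureId(doc).containsKey("id")); // true: a GUID was generated
    }
}
```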
-> [!Note]
+> [!NOTE]
> When you specify the ID of an existing document, it gets overwritten by the new output document.
+
## Exceptions and return codes

| Binding | Reference |
|---|---|
| CosmosDB | [CosmosDB Error Codes](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) |
-<a name="host-json"></a>
-
-## host.json settings
--
-```json
-{
- "version": "2.0",
- "extensions": {
- "cosmosDB": {
- "connectionMode": "Gateway",
- "protocol": "Https",
- "leaseOptions": {
- "leasePrefix": "prefix1"
- }
- }
- }
-}
-```
-
-|Property |Default |Description |
-|-|--||
-|GatewayMode|Gateway|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`|
-|Protocol|Https|The connection protocol used by the function when connection to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking). <br><br> This setting is not available in [version 4.x of the extension]. |
-|leasePrefix|n/a|Lease prefix to use across all functions in an app. <br><br> This setting is not available in [version 4.x of the extension].|
-
## Next steps

- [Run a function when an Azure Cosmos DB document is created or modified (Trigger)](./functions-bindings-cosmosdb-v2-trigger.md)
- [Read an Azure Cosmos DB document (Input binding)](./functions-bindings-cosmosdb-v2-input.md)
-[version 4.x of the extension]: ./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher
+[extension version 4.x]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
Title: Azure Cosmos DB trigger for Functions 2.x and higher
description: Learn to use the Azure Cosmos DB trigger in Azure Functions.
Previously updated : 09/01/2021
Last updated : 03/04/2022
ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Cosmos DB trigger for Azure Functions 2.x and higher
The Azure Cosmos DB Trigger uses the [Azure Cosmos DB Change Feed](../cosmos-db/
For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md).
-<a id="example" name="example"></a>
+## Example
-# [C#](#tab/csharp)
+
+The usage of the trigger depends on the extension package version and the C# modality used in your function app, which can be one of the following:
+
+# [In-process](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
+# [Isolated process](#tab/isolated-process)
+
+An isolated process class library is a compiled C# function that runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+
+# [C# script](#tab/csharp-script)
+
+C# script is used primarily when creating C# functions in the Azure portal.
+++
+The following examples depend on the extension version for the given C# mode.
+
+# [Functions 2.x+](#tab/functionsv2/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are inserts or updates in the specified database and collection.
namespace CosmosDBSamplesV2
} ```
-Apps using Cosmos DB [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) or higher will have different attribute properties which are shown below. This example refers to a simple `ToDoItem` type.
+# [Extension 4.x+ (preview)](#tab/extensionv4/in-process)
+
+Apps using Cosmos DB [extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher will have different attribute properties, which are shown below. This example refers to a simple `ToDoItem` type.
```cs namespace CosmosDBSamplesV2
namespace CosmosDBSamplesV2
} ```
-# [C# Script](#tab/csharp-script)
+# [Functions 2.x+](#tab/functionsv2/isolated-process)
+
+The following code defines a `MyDocument` type:
++
+This document type is the type of the [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1) used as the Cosmos DB trigger binding parameter in the following example:
+++
+# [Extension 4.x+ (preview)](#tab/extensionv4/isolated-process)
+
+Example pending.
+
+# [Functions 2.x+](#tab/functionsv2/csharp-script)
The following example shows a Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Cosmos DB records are added or modified.
Here's the C# script code:
} ```
-# [Java](#tab/java)
+# [Extension 4.x+ (preview)](#tab/extensionv4/csharp-script)
+
+The following example shows a Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Cosmos DB records are added or modified.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "type": "cosmosDBTrigger",
+ "name": "documents",
+ "direction": "in",
+ "leaseContainerName": "leases",
+ "connection": "<connection-app-setting>",
+ "databaseName": "Tasks",
+ "containerName": "Items",
+ "createLeaseContainerIfNotExists": true
+}
+```
+
+Here's the C# script code:
+
+```cs
+ #r "Microsoft.Azure.DocumentDB.Core"
+
+ using System;
+ using Microsoft.Azure.Documents;
+ using System.Collections.Generic;
+ using Microsoft.Extensions.Logging;
+
+ public static void Run(IReadOnlyList<Document> documents, ILogger log)
+ {
+ log.LogInformation("Documents modified " + documents.Count);
+ log.LogInformation("First document Id " + documents[0].Id);
+ }
+```
+++

This function is invoked when there are inserts or updates in the specified database and collection.
+# [Functions 2.x+](#tab/functionsv2)
+ ```java @FunctionName("cosmosDBMonitor") public void cosmosDbProcessor(
This function is invoked when there are inserts or updates in the specified data
context.getLogger().info(items.length + "item(s) is/are changed."); } ```
+# [Extension 4.x+ (preview)](#tab/extensionv4)
++

In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBTrigger` annotation on parameters whose value would come from Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
-# [JavaScript](#tab/javascript)
The following example shows a Cosmos DB trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes log messages when Cosmos DB records are added or modified. Here's the binding data in the *function.json* file:
-```json
-{
- "type": "cosmosDBTrigger",
- "name": "documents",
- "direction": "in",
- "leaseCollectionName": "leases",
- "connectionStringSetting": "<connection-app-setting>",
- "databaseName": "Tasks",
- "collectionName": "Items",
- "createLeaseCollectionIfNotExists": true
-}
-```
Here's the JavaScript code:
Here's the JavaScript code:
} ```
-# [PowerShell](#tab/powershell)
The following example shows how to run a function as data changes in Cosmos DB.
-```json
-{
-  "type": "cosmosDBTrigger",
-  "name": "Documents",
-  "direction": "in",
-  "leaseCollectionName": "leases",
-  "connectionStringSetting": "MyStorageConnectionAppSetting",
-  "databaseName": "Tasks",
-  "collectionName": "Items",
-  "createLeaseCollectionIfNotExists": true
-}
-```
In the _run.ps1_ file, you have access to the document that triggers the function via the `$Documents` parameter. ```powershell
-param($Documents,ΓÇ»$TriggerMetadata)
+param($Documents, $TriggerMetadata)
-Write-Host "First document Id modified : $($Documents[0].id)"
+Write-Host "First document Id modified : $($Documents[0].id)"
```
-# [Python](#tab/python)
The following example shows a Cosmos DB trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function writes log messages when Cosmos DB records are modified. Here's the binding data in the *function.json* file:
-```json
-{
- "name": "documents",
- "type": "cosmosDBTrigger",
- "direction": "in",
- "leaseCollectionName": "leases",
- "connectionStringSetting": "<connection-app-setting>",
- "databaseName": "Tasks",
- "collectionName": "Items",
- "createLeaseCollectionIfNotExists": true
-}
-```
Here's the Python code:
Here's the Python code:
logging.info('First document Id modified: %s', documents[0]['id']) ``` -
+## Attributes
-## Attributes and annotations
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
-# [C#](#tab/csharp)
+# [Functions 2.x+](#tab/functionsv2/in-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [CosmosDBTrigger](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) attribute.
-The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [Trigger - configuration](#configuration). Here's a `CosmosDBTrigger` attribute example in a method signature:
+# [Extension 4.x+ (preview)](#tab/extensionv4/in-process)
-```csharp
- [FunctionName("DocumentUpdates")]
- public static void Run([CosmosDBTrigger("database", "collection", ConnectionStringSetting = "myCosmosDB")]
- IReadOnlyList<Document> documents,
- ILogger log)
- {
- ...
- }
-```
-In [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) some settings and properties have been removed or renamed. For detailed information about the changes, see [Trigger - configuration](#configuration). Here's a `CosmosDBTrigger` attribute example in a method signature which refers to a simple `ToDoItem` type:
+# [Functions 2.x+](#tab/functionsv2/isolated-process)
-```cs
-namespace CosmosDBSamplesV2
-{
- public class ToDoItem
- {
- public string Id { get; set; }
- public string Description { get; set; }
- }
-}
-```
-```csharp
- [FunctionName("DocumentUpdates")]
- public static void Run([CosmosDBTrigger("database", "container", Connection = "CosmosDBConnectionSetting")]
- IReadOnlyList<ToDoItem> documents,
- ILogger log)
- {
- ...
- }
-```
+# [Extension 4.x+ (preview)](#tab/extensionv4/isolated-process)
-For a complete example of either extension version, see [Trigger](#example).
-# [C# Script](#tab/csharp-script)
+# [Functions 2.x+](#tab/functionsv2/csharp-script)
-Attributes are not supported by C# Script.
-# [Java](#tab/java)
+# [Extension 4.x+ (preview)](#tab/extensionv4/csharp-script)
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on parameters that read data from Cosmos DB.
-# [JavaScript](#tab/javascript)
+
-Attributes are not supported by JavaScript.
+## Annotations
+
+# [Functions 2.x+](#tab/functionsv2)
+
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBTrigger` annotation on parameters whose value comes from Cosmos DB. The annotation supports the following properties:
+
++ [name](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.name)
++ [connectionStringSetting](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.connectionstringsetting)
++ [databaseName](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.databasename)
++ [collectionName](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.collectionname)
++ [leaseConnectionStringSetting](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.leaseconnectionstringsetting)
++ [leaseDatabaseName](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.leasedatabasename)
++ [leaseCollectionName](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.leasecollectionname)
++ [createLeaseCollectionIfNotExists](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.createleasecollectionifnotexists)
++ [leasesCollectionThroughput](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.leasescollectionthroughput)
++ [leaseCollectionPrefix](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.leasecollectionprefix)
++ [feedPollDelay](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.feedpolldelay)
++ [leaseAcquireInterval](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.leaseacquireinterval)
++ [leaseExpirationInterval](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.leaseexpirationinterval)
++ [leaseRenewInterval](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.leaserenewinterval)
++ [checkpointInterval](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.checkpointinterval)
++ [checkpointDocumentCount](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.checkpointdocumentcount)
++ [maxItemsPerInvocation](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.maxitemsperinvocation)
++ [startFromBeginning](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.startfrombeginning)
++ [preferredLocations](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.preferredlocations)
+
+# [Extension 4.x+ (preview)](#tab/extensionv4)
+
-# [PowerShell](#tab/powershell)
+
-Attributes are not supported by PowerShell.
+## Configuration
-# [Python](#tab/python)
+The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version:
-Attributes are not supported by Python.
+# [Functions 2.x+](#tab/functionsv2)
-
-## Configuration
+# [Extension 4.x+ (preview)](#tab/extensionv4)
-The following table explains the binding configuration properties that you set in the *function.json* file and the `CosmosDBTrigger` attribute.
-
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to `cosmosDBTrigger`. |
-|**direction** | n/a | Must be set to `in`. This parameter is set automatically when you create the trigger in the Azure portal. |
-|**name** | n/a | The variable name used in function code that represents the list of documents with changes. |
-|**connectionStringSetting** <br> or <br> **connection**|**ConnectionStringSetting** <br> or <br> **Connection**| The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account being monitored. See [Connections](#connections) <br><br> In [version 4.x of the extension] this property is called `connection`. |
-|**databaseName**|**DatabaseName** | The name of the Azure Cosmos DB database with the collection being monitored. |
-|**collectionName** <br> or <br> **containerName** |**CollectionName** <br> or <br> **ContainerName** | The name of the collection being monitored. <br><br> In [version 4.x of the extension] this property is called `ContainerName`. |
-|**leaseConnectionStringSetting** <br> or <br> **leaseConnection** | **LeaseConnectionStringSetting** <br> or <br> **LeaseConnection** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease collection. See [Connections](#connections) <br><br> When not set, the `connectionStringSetting` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases collection must have write permissions. <br><br> In [version 4.x of the extension] this property is called `leaseConnection`, and if not set the `connection` value will be used. |
-|**leaseDatabaseName** |**LeaseDatabaseName** | (Optional) The name of the database that holds the collection used to store leases. When not set, the value of the `databaseName` setting is used. This parameter is automatically set when the binding is created in the portal. |
-|**leaseCollectionName** <br> or <br> **leaseContainerName** | **LeaseCollectionName** <br> or <br> **LeaseContainerName** | (Optional) The name of the collection used to store leases. When not set, the value `leases` is used. <br><br> In [version 4.x of the extension] this property is called `LeaseContainerName`. |
-|**createLeaseCollectionIfNotExists** <br> or <br> **createLeaseContainerIfNotExists** | **CreateLeaseCollectionIfNotExists** <br> or <br> **CreateLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases collection is automatically created when it doesn't already exist. The default value is `false`. <br><br> In [version 4.x of the extension] this property is called `CreateLeaseContainerIfNotExists`. |
-|**leasesCollectionThroughput** <br> or <br> **leasesContainerThroughput**| **LeasesCollectionThroughput** <br> or <br> **LeasesContainerThroughput**| (Optional) Defines the number of Request Units to assign when the leases collection is created. This setting is only used when `createLeaseCollectionIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. <br><br> In [version 4.x of the extension] this property is called `LeasesContainerThroughput`. |
-|**leaseCollectionPrefix** <br> or <br> **leaseContainerPrefix**| **LeaseCollectionPrefix** <br> or <br> **leaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease collection for this Function. Using a prefix allows two separate Azure Functions to share the same Lease collection by using different prefixes. <br><br> In [version 4.x of the extension] this property is called `LeaseContainerPrefix`. |
-|**feedPollDelay**| **FeedPollDelay**| (Optional) The time (in milliseconds) for the delay between polling a partition for new changes on the feed, after all current changes are drained. Default is 5,000 milliseconds, or 5 seconds.
-|**leaseAcquireInterval**| **LeaseAcquireInterval**| (Optional) When set, it defines, in milliseconds, the interval to kick off a task to compute if partitions are distributed evenly among known host instances. Default is 13000 (13 seconds).
-|**leaseExpirationInterval**| **LeaseExpirationInterval**| (Optional) When set, it defines, in milliseconds, the interval for which the lease is taken on a lease representing a partition. If the lease is not renewed within this interval, it will cause it to expire and ownership of the partition will move to another instance. Default is 60000 (60 seconds).
-|**leaseRenewInterval**| **LeaseRenewInterval**| (Optional) When set, it defines, in milliseconds, the renew interval for all leases for partitions currently held by an instance. Default is 17000 (17 seconds).
-|**checkpointInterval**| **CheckpointInterval**| (Optional) When set, it defines, in milliseconds, the interval between lease checkpoints. Default is always after each Function call. <br><br> This property is not available in [version 4.x of the extension]. |
-|**maxItemsPerInvocation**| **MaxItemsPerInvocation**| (Optional) When set, this property sets the maximum number of items received per Function call. If operations in the monitored collection are performed through stored procedures, [transaction scope](../cosmos-db/stored-procedures-triggers-udfs.md#transactions) is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch.
-|**startFromBeginning**| **StartFromBeginning**| (Optional) This option tells the Trigger to read changes from the beginning of the collection's change history instead of starting at the current time. Reading from the beginning only works the first time the Trigger starts, as in subsequent runs, the checkpoints are already stored. Setting this option to `true` when there are leases already created has no effect. |
-|**preferredLocations**| **PreferredLocations**| (Optional) Defines preferred locations (regions) for geo-replicated database accounts in the Azure Cosmos DB service. Values should be comma-separated. For example, "East US,South Central US,North Europe". |
- +++
+See the [Example section](#example) for complete examples.
## Usage
+The parameter type supported by the Cosmos DB trigger depends on the Functions runtime version, the extension package version, and the C# modality used.
+
The trigger requires a second collection that it uses to store _leases_ over the partitions. Both the collection being monitored and the collection that contains the leases must be available for the trigger to work.

>[!IMPORTANT]
> If multiple functions are configured to use a Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `LeaseCollectionPrefix` for each function. Otherwise, only one of the functions will be triggered. For information about the prefix, see the [Configuration section](#configuration).
+
+>[!IMPORTANT]
+> If multiple functions are configured to use a Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `leaseCollectionPrefix` for each function. Otherwise, only one of the functions will be triggered. For information about the prefix, see the [Configuration section](#configuration).
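For example, two functions monitoring the same collection could each set a distinct prefix in their *function.json*; a second function would use a different value such as `FunctionB`. The database, collection, and setting names here are hypothetical:

```json
{
  "type": "cosmosDBTrigger",
  "name": "documents",
  "direction": "in",
  "leaseCollectionName": "leases",
  "leaseCollectionPrefix": "FunctionA",
  "connectionStringSetting": "<connection-app-setting>",
  "databaseName": "Tasks",
  "collectionName": "Items",
  "createLeaseCollectionIfNotExists": true
}
```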
The trigger doesn't indicate whether a document was updated or inserted; it just provides the document itself. If you need to handle updates and inserts differently, you could do that by implementing timestamp fields for insertion or update.
+
## Next steps

- [Read an Azure Cosmos DB document (Input binding)](./functions-bindings-cosmosdb-v2-input.md)
- [Save changes to an Azure Cosmos DB document (Output binding)](./functions-bindings-cosmosdb-v2-output.md)
-[version 4.x of the extension]: ./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher
+[version 4.x of the extension]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md
Title: Azure Cosmos DB bindings for Functions 2.x and higher
description: Understand how to use Azure Cosmos DB triggers and bindings in Azure Functions.
Previously updated : 09/01/2021
Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Cosmos DB trigger and bindings for Azure Functions 2.x and higher overview
This set of articles explains how to work with [Azure Cosmos DB](../cosmos-db/se
[!INCLUDE [SQL API support only](../../includes/functions-cosmosdb-sqlapi-note.md)]
-## Add to your Functions app
+## Install extension
-### Functions 2.x and higher
+The extension NuGet package you install depends on the C# mode you're using in your function app:
-Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+# [In-process](#tab/in-process)
-| Language | Add by... | Remarks
-|-||-|
-| C# | Installing the [NuGet package], version 3.x | |
-| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension] is recommended to use with Visual Studio Code. |
-| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[Update your extensions]: ./functions-bindings-register.md
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+++
+The process for installing the extension varies depending on the extension version:
-### Cosmos DB extension 4.x and higher
+# [Functions 2.x+](#tab/functionsv2/in-process)
-A new version of the Cosmos DB bindings extension is available in preview. It introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). For .NET applications, the new extension version also changes the types that you can bind to, replacing the types from the v2 SDK `Microsoft.Azure.DocumentDB` with newer types from the v3 SDK [Microsoft.Azure.Cosmos](../cosmos-db/sql/sql-api-sdk-dotnet-standard.md). Learn more about how these new types are different and how to migrate to them from the [SDK migration guide](../cosmos-db/sql/migrate-dotnet-v3.md), [trigger](./functions-bindings-cosmosdb-v2-trigger.md), [input binding](./functions-bindings-cosmosdb-v2-input.md), and [output binding](./functions-bindings-cosmosdb-v2-output.md) examples.
+Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [NuGet package], version 3.x.
-This extension version is available as a [preview NuGet package]. To learn more, see [Update your extensions].
+# [Extension 4.x+ (preview)](#tab/extensionv4/in-process)
-[preview NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB/4.0.0-preview2
+This preview version of the Cosmos DB bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
+
+This version also changes the types that you can bind to, replacing the types from the v2 SDK `Microsoft.Azure.DocumentDB` with newer types from the v3 SDK [Microsoft.Azure.Cosmos](../cosmos-db/sql/sql-api-sdk-dotnet-standard.md). Learn more about how these new types are different and how to migrate to them from the [SDK migration guide](../cosmos-db/sql/migrate-dotnet-v3.md), [trigger](./functions-bindings-cosmosdb-v2-trigger.md), [input binding](./functions-bindings-cosmosdb-v2-input.md), and [output binding](./functions-bindings-cosmosdb-v2-output.md) examples.
+
+This extension version is available as a [preview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB/4.0.0-preview3).
> [!NOTE]
-> Currently, authentication with an identity instead of a secret using the 4.x preview extension is only available for Elastic Premium plans.
+> Authentication with an identity instead of a secret using the 4.x preview extension is currently only available for Elastic Premium plans.
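With an identity-based connection, you set an `accountEndpoint` property on the connection name instead of a connection string; for example, as an application setting, where the `CosmosDBConnection` name and account URI are placeholders:

```json
{
  "CosmosDBConnection__accountEndpoint": "https://<database-account-name>.documents.azure.com:443/"
}
```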
+
+# [Functions 2.x+](#tab/functionsv2/isolated-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/), version 3.x.
+
+# [Extension 4.x+ (preview)](#tab/extensionv4/isolated-process)
+
+This preview version of the Cosmos DB bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/), version 4.x.
+
+# [Functions 2.x+](#tab/functionsv2/csharp-script)
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
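+
+As a sketch, registering bundle version 2.x in your `host.json` file might look like the following; the `[2.*, 3.0.0)` range shown is illustrative, so confirm the currently recommended range in the extension bundles article:
+
+```json
+{
+  "version": "2.0",
+  "extensionBundle": {
+    "id": "Microsoft.Azure.Functions.ExtensionBundle",
+    "version": "[2.*, 3.0.0)"
+  }
+}
+```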
+
+# [Extension 4.x+ (preview)](#tab/extensionv4/csharp-script)
+
+This extension version is available from extension bundle v3 by adding the following lines to your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.0, 4.0.0)"
+ }
+}
+```
+++++
+## Install bundle
+
+The Cosmos DB extension is part of an [extension bundle], which is specified in your host.json project file. You may need to modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn more, see [extension bundle].
+
+# [Bundle v2.x](#tab/functionsv2)
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
-### Functions 1.x
+# [Bundle v3.x](#tab/extensionv4)
-Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+This version of the bundle contains a preview version of the Cosmos DB bindings extension (version 4.x) that introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
+
+You can add this version of the extension from the preview extension bundle v3 by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.0, 4.0.0)"
+ }
+}
+```
+
+To learn more, see [Update your extensions].
+++ ## Exceptions and return codes
Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
[!INCLUDE [functions-host-json-section-intro](../../includes/functions-host-json-section-intro.md)]
+# [Functions 2.x+](#tab/functionsv2)
+ ```json { "version": "2.0",
Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
|Property |Default |Description | |-|--||
-|GatewayMode|Gateway|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`|
-|Protocol|Https|The connection protocol used by the function when connection to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking). <br><br> This setting is not available in [version 4.x of the extension](#cosmos-db-extension-4x-and-higher). |
-|leasePrefix|n/a|Lease prefix to use across all functions in an app. <br><br> This setting is not available in [version 4.x of the extension](#cosmos-db-extension-4x-and-higher).|
+|**connectionMode**|`Gateway`|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`|
+|**protocol**|`Https`|The connection protocol used by the function when connecting to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking). |
+|**leasePrefix**|n/a|Lease prefix to use across all functions in an app. |
+
+# [Extension 4.x+ (preview)](#tab/extensionv4)
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "cosmosDB": {
+ "connectionMode": "Gateway",
+ "userAgentSuffix": "MyDesiredUserAgentStamp"
+ }
+ }
+}
+```
+
+|Property |Default |Description |
+|-|--||
+|**connectionMode**|`Gateway`|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`|
+|**userAgentSuffix**| n/a | Adds the specified string value to all requests made by the trigger or binding to the service. This makes it easier for you to track the activity in Azure Monitor, based on a specific function app and filtering by `User Agent`. |
++ ## Next steps - [Run a function when an Azure Cosmos DB document is created or modified (Trigger)](./functions-bindings-cosmosdb-v2-trigger.md) - [Read an Azure Cosmos DB document (Input binding)](./functions-bindings-cosmosdb-v2-input.md)-- [Save changes to an Azure Cosmos DB document (Output binding)](./functions-bindings-cosmosdb-v2-output.md)
+- [Save changes to an Azure Cosmos DB document (Output binding)](./functions-bindings-cosmosdb-v2-output.md)
+
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Bindings Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb.md
Title: Azure Cosmos DB bindings for Functions 1.x description: Understand how to use Azure Cosmos DB triggers and bindings in Azure Functions 1.x.-- Last updated 11/21/2017 ms.devlang: csharp, javascript
This article explains how to work with [Azure Cosmos DB](../cosmos-db/serverless
> >This binding was originally named DocumentDB. In Functions version 1.x, only the trigger was renamed Cosmos DB; the input binding, output binding, and NuGet package retain the DocumentDB name. - > [!NOTE] > Azure Cosmos DB bindings are only supported for use with the SQL API. For all other Azure Cosmos DB APIs, you should access the database from your function by using the static client for your API, including [Azure Cosmos DB's API for MongoDB](../cosmos-db/mongodb-introduction.md), [Cassandra API](../cosmos-db/cassandra-introduction.md), [Gremlin API](../cosmos-db/graph-introduction.md), and [Table API](../cosmos-db/table-introduction.md).
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
Title: Azure Functions error handling and retry guidance description: Learn to handle errors and retry events in Azure Functions with links to specific binding errors.- Last updated 10/01/2020- # Azure Functions error handling and retries
azure-functions Functions Bindings Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-errors.md
Title: Handle Azure Functions bindings errors description: Learn to handle Azure Functions binding errors - Last updated 10/01/2020- # Handle Azure Functions binding errors
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
Title: Azure Event Grid output binding for Azure Functions description: Learn to send an Event Grid event in Azure Functions.- Previously updated : 02/14/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Event Grid output binding for Azure Functions
-Use the Event Grid output binding to write events to a custom topic. You must have a valid [access key for the custom topic](../event-grid/security-authenticate-publishing-clients.md).
+Use the Event Grid output binding to write events to a custom topic. You must have a valid [access key for the custom topic](../event-grid/security-authenticate-publishing-clients.md). The Event Grid output binding doesn't support shared access signature (SAS) tokens.
-For information on setup and configuration details, see the [overview](./functions-bindings-event-grid.md).
+For information on setup and configuration details, see [How to work with Event Grid triggers and bindings in Azure Functions](event-grid-how-tos.md).
-> [!NOTE]
-> The Event Grid output binding does not support shared access signatures (SAS tokens). You must use the topic's access key.
> [!IMPORTANT] > The Event Grid output binding is only available for Functions 2.x and higher. ## Example
-# [C#](#tab/csharp)
-### C# (2.x and higher)
+The type of the output parameter used with an Event Grid output binding depends on the Functions runtime version, the binding extension version, and the modality of the C# function. The C# function can be created using one of the following C# modes:
-The following example shows a [C# function](functions-dotnet-class-library.md) that writes a message to an Event Grid custom topic, using the method return value as the output:
+* [In-process class library](functions-dotnet-class-library.md): compiled C# function that runs in the same process as the Functions runtime.
+* [Isolated process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+* [C# script](functions-reference-csharp.md): used primarily when creating C# functions in the Azure portal.
-```csharp
-[FunctionName("EventGridOutput")]
-[return: EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting = "MyEventGridTopicKeySetting")]
-public static EventGridEvent Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
-{
- return new EventGridEvent("message-id", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0");
-}
-```
+# [In-process](#tab/in-process)
-The following example shows how to use the `IAsyncCollector` interface to send a batch of messages.
-
-```csharp
-[FunctionName("EventGridAsyncOutput")]
-public static async Task Run(
- [TimerTrigger("0 */5 * * * *")] TimerInfo myTimer,
- [EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting = "MyEventGridTopicKeySetting")]IAsyncCollector<EventGridEvent> outputEvents,
- ILogger log)
-{
- for (var i = 0; i < 3; i++)
- {
- var myEvent = new EventGridEvent("message-id-" + i, "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0");
- await outputEvents.AddAsync(myEvent);
- }
-}
-```
-
-### Version 3.x (preview)
-
-The following example shows a Functions 3.x [C# function](functions-dotnet-class-library.md) that binds to a `CloudEvent`:
+The following example shows a C# function that binds to a `CloudEvent` using version 3.x of the extension, which is in preview:
```cs using System.Threading.Tasks;
namespace Azure.Extensions.WebJobs.Sample
} ```
-The following example shows a Functions 3.x [C# function](functions-dotnet-class-library.md) that binds to an `EventGridEvent`:
+The following example shows a C# function that binds to an `EventGridEvent` using version 3.x of the extension, which is in preview:
```cs using System.Threading.Tasks;
namespace Azure.Extensions.WebJobs.Sample
} ```
+The following example shows a C# function that writes a [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent] message to an Event Grid custom topic, using the method return value as the output:
+
+```csharp
+[FunctionName("EventGridOutput")]
+[return: EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting = "MyEventGridTopicKeySetting")]
+public static EventGridEvent Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
+{
+ return new EventGridEvent("message-id", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0");
+}
+```
+
+The following example shows how to use the `IAsyncCollector` interface to send a batch of messages.
+
+```csharp
+[FunctionName("EventGridAsyncOutput")]
+public static async Task Run(
+ [TimerTrigger("0 */5 * * * *")] TimerInfo myTimer,
+ [EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting = "MyEventGridTopicKeySetting")]IAsyncCollector<EventGridEvent> outputEvents,
+ ILogger log)
+{
+ for (var i = 0; i < 3; i++)
+ {
+ var myEvent = new EventGridEvent("message-id-" + i, "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0");
+ await outputEvents.AddAsync(myEvent);
+ }
+}
+```
+
+# [Isolated process](#tab/isolated-process)
+
+The following example shows how the custom type is used in both the trigger and an Event Grid output binding:
++ # [C# Script](#tab/csharp-script) The following example shows the Event Grid output binding data in the *function.json* file.
public static void Run(TimerInfo myTimer, ICollector<EventGridEvent> outputEvent
outputEvent.Add(new EventGridEvent("message-id-2", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0")); } ```+
-# [Java](#tab/java)
The following example shows a Java function that writes a message to an Event Grid custom topic. The function uses the binding's `setValue` method to output a string.
class EventGridEvent {
} ```
-# [JavaScript](#tab/javascript)
The following example shows the Event Grid output binding data in the *function.json* file.
module.exports = async function(context) {
}; ```
-# [PowerShell](#tab/powershell)
The following example demonstrates how to configure a function to output an Event Grid event message. The section where `type` is set to `eventGrid` configures the values needed to establish an Event Grid output binding.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
}) ```
-# [Python](#tab/python)
The following example shows a trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. It then sends in an event to the custom topic, as specified by the `topicEndpointUri`.
def main(eventGridEvent: func.EventGridEvent,
data_version="1.0")) ``` -
+## Attributes
-## Attributes and annotations
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file.
-# [C#](#tab/csharp)
+The attribute's constructor takes the name of an application setting that contains the name of the custom topic, and the name of an application setting that contains the topic key.
-For [C# class libraries](functions-dotnet-class-library.md), use the [EventGridAttribute](https://github.com/Azure/azure-functions-eventgrid-extension/blob/dev/src/EventGridExtension/OutputBinding/EventGridAttribute.cs) attribute.
+# [In-process](#tab/in-process)
-The attribute's constructor takes the name of an app setting that contains the name of the custom topic, and the name of an app setting that contains the topic key. For more information about these settings, see [Output - configuration](#configuration). Here's an `EventGrid` attribute example:
+The following table explains the parameters for the `EventGridAttribute`.
-```csharp
-[FunctionName("EventGridOutput")]
-[return: EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting = "MyEventGridTopicKeySetting")]
-public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
-{
- ...
-}
-```
+|Parameter | Description|
+|||-|
+|**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
+|**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
+
+# [Isolated process](#tab/isolated-process)
-For a complete example, see [example](#example).
+The following table explains the parameters for the `EventGridOutputAttribute`.
+
+|Parameter | Description|
+|||-|
+|**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
+|**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
# [C# Script](#tab/csharp-script)
-Attributes are not supported by C# Script.
+C# script uses a function.json file for configuration instead of attributes.
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-# [Java](#tab/java)
+|function.json property | Description|
+|||-|
+|**type** | Must be set to `eventGrid`. |
+|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the event. |
+|**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
+|**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
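+
+A minimal *function.json* sketch for this binding, using the app setting names from the examples above, might look like:
+
+```json
+{
+  "type": "eventGrid",
+  "direction": "out",
+  "name": "outputEvent",
+  "topicEndpointUri": "MyEventGridTopicUriSetting",
+  "topicKeySetting": "MyEventGridTopicKeySetting"
+}
+```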
+++
+## Annotations
For Java classes, use the [EventGridAttribute](https://github.com/Azure/azure-functions-java-library/blob/dev/src/main/java/com/microsoft/azure/functions/annotation/EventGridOutput.java) attribute.
public class Function {
} } ```
+## Configuration
-# [JavaScript](#tab/javascript)
+The following table explains the binding configuration properties that you set in the *function.json* file.
-Attributes are not supported by JavaScript.
+|function.json property |Description|
+|||-|
+|**type** | Must be set to `eventGrid`. |
+|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the event. |
+|**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
+|**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
-# [PowerShell](#tab/powershell)
-Attributes are not supported by PowerShell.
-# [Python](#tab/python)
+> [!IMPORTANT]
+> Make sure that you set the value of the `TopicEndpointUri` configuration property to the name of an app setting that contains the URI of the custom topic. Don't specify the URI of the custom topic directly in this property.
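+
+For local development, such app settings live in the `Values` section of *local.settings.json*. A sketch, with the endpoint format and placeholder values shown purely for illustration:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "MyEventGridTopicUriSetting": "https://<topic-name>.<region>.eventgrid.azure.net/api/events",
+    "MyEventGridTopicKeySetting": "<topic-access-key>"
+  }
+}
+```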
-Attributes are not supported by Python.
+See the [Example section](#example) for complete examples.
-
+## Usage
-## Configuration
+The parameter type supported by the Event Grid output binding depends on the Functions runtime version, the extension package version, and the C# modality used.
-The following table explains the binding configuration properties that you set in the *function.json* file and the `EventGrid` attribute.
+# [Extension v3.x](#tab/extensionv3/in-process)
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to "eventGrid". |
-|**direction** | n/a | Must be set to "out". This parameter is set automatically when you create the binding in the Azure portal. |
-|**name** | n/a | The variable name used in function code that represents the event. |
-|**topicEndpointUri** |**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
-|**topicKeySetting** |**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
+In-process C# class library functions support the following types:
++ [Azure.Messaging.CloudEvent][CloudEvent]
++ [Azure.Messaging.EventGrid][EventGridEvent2]
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
-> [!IMPORTANT]
-> Ensure that you set the value of the `TopicEndpointUri` configuration property to the name of an app setting that contains the URI of the custom topic. Do not specify the URI of the custom topic directly in this property.
+Send messages by using a method parameter such as `out EventGridEvent paramName`.
+To write multiple messages, you can instead use `ICollector<EventGridEvent>` or `IAsyncCollector<EventGridEvent>`.
-## Usage
+# [Extension v2.x](#tab/extensionv2/in-process)
-# [C#](#tab/csharp)
+In-process C# class library functions support the following types:
-Send messages by using a method parameter such as `out EventGridEvent paramName`. To write multiple messages, you can use `ICollector<EventGridEvent>` or
-`IAsyncCollector<EventGridEvent>` in place of `out EventGridEvent`.
++ [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent]
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
-### Additional types
-Apps using the 3.0.0 or higher version of the Event Grid extension use the `EventGridEvent` type from the [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid.eventgridevent) namespace. In addition, you can bind to the `CloudEvent` type from the [Azure.Messaging](/dotnet/api/azure.messaging.cloudevent) namespace.
+Send messages by using a method parameter such as `out EventGridEvent paramName`.
+To write multiple messages, you can instead use `ICollector<EventGridEvent>` or `IAsyncCollector<EventGridEvent>`.
-# [C# Script](#tab/csharp-script)
+# [Functions 1.x](#tab/functionsv1/in-process)
+
+In-process C# class library functions support the following types:
+
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
+
+# [Extension v3.x](#tab/extensionv3/isolated-process)
+
+Requires you to define a custom type, or use a string. See the [Example section](#example) for examples of using a custom parameter type.
+
+# [Extension v2.x](#tab/extensionv2/isolated-process)
+
+Requires you to define a custom type, or use a string. See the [Example section](#example) for examples of using a custom parameter type.
+
+# [Functions 1.x](#tab/functionsv1/isolated-process)
+
+Functions version 1.x doesn't support isolated process.
-Send messages by using a method parameter such as `out EventGridEvent paramName`. In C# script, `paramName` is the value specified in the `name` property of *function.json*. To write multiple messages, you can use `ICollector<EventGridEvent>` or
-`IAsyncCollector<EventGridEvent>` in place of `out EventGridEvent`.
+# [Extension v3.x](#tab/extensionv3/csharp-script)
-### Additional types
-Apps using the 3.0.0 or higher version of the Event Grid extension use the `EventGridEvent` type from the [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid.eventgridevent) namespace. In addition, you can bind to the `CloudEvent` type from the [Azure.Messaging](/dotnet/api/azure.messaging.cloudevent) namespace.
+C# script functions support the following types:
-# [Java](#tab/java)
++ [Azure.Messaging.CloudEvent][CloudEvent]
++ [Azure.Messaging.EventGrid][EventGridEvent2]
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
-Send individual messages by calling a method parameter such as `out EventGridOutput paramName`, and write multiple messages with `ICollector<EventGridOutput>` .
+Send messages by using a method parameter such as `out EventGridEvent paramName`.
+To write multiple messages, you can instead use `ICollector<EventGridEvent>` or `IAsyncCollector<EventGridEvent>`.
-# [JavaScript](#tab/javascript)
+# [Extension v2.x](#tab/extensionv2/csharp-script)
+
+C# script functions support the following types:
+
++ [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent]
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
+
+Send messages by using a method parameter such as `out EventGridEvent paramName`.
+To write multiple messages, you can instead use `ICollector<EventGridEvent>` or `IAsyncCollector<EventGridEvent>`.
+
+# [Functions 1.x](#tab/functionsv1/csharp-script)
+
+C# script functions support the following types:
+
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
++++
+Send individual messages by calling a method parameter such as `out EventGridOutput paramName`, and write multiple messages with `ICollector<EventGridOutput>`.
+ Access the output event by using `context.bindings.<name>` where `<name>` is the value specified in the `name` property of *function.json*.
-# [PowerShell](#tab/powershell)
-Access the output event by using the `Push-OutputBinding` commandlet to send an event to the Event Grid output binding.
+Access the output event by using the `Push-OutputBinding` cmdlet to send an event to the Event Grid output binding.
-# [Python](#tab/python)
There are two options for outputting an Event Grid message from a function:-- **Return value**: Set the `name` property in *function.json* to `$return`. With this configuration, the function's return value is persisted as an EventGrid message.-- **Imperative**: Pass a value to the [set](/python/api/azure-functions/azure.functions.out#set-val--t--none) method of the parameter declared as an [Out](/python/api/azure-functions/azure.functions.out) type. The value passed to `set` is persisted as an EventGrid message.
+- **Return value**: Set the `name` property in *function.json* to `$return`. With this configuration, the function's return value is persisted as an Event Grid message.
+- **Imperative**: Pass a value to the [set](/python/api/azure-functions/azure.functions.out#set-val--t--none) method of the parameter declared as an [Out](/python/api/azure-functions/azure.functions.out) type. The value passed to `set` is persisted as an Event Grid message.
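+
+For the return-value option, the key detail is setting `name` to `$return` in *function.json*. A sketch, reusing the illustrative app setting names from the C# examples:
+
+```json
+{
+  "type": "eventGrid",
+  "direction": "out",
+  "name": "$return",
+  "topicEndpointUri": "MyEventGridTopicUriSetting",
+  "topicKeySetting": "MyEventGridTopicKeySetting"
+}
+```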
- ## Next steps * [Dispatch an Event Grid event](./functions-bindings-event-grid-trigger.md)+
+[EventGridEvent]: /dotnet/api/microsoft.azure.eventgrid.models.eventgridevent
+[CloudEvent]: /dotnet/api/azure.messaging.cloudevent
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
Title: Azure Event Grid trigger for Azure Functions description: Learn to run code when Event Grid events in Azure Functions are dispatched.- Previously updated : 02/14/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Event Grid trigger for Azure Functions
-Use the function trigger to respond to an event sent to an Event Grid topic.
+Use the function trigger to respond to an event sent to an Event Grid topic.
+ For information on setup and configuration details, see the [overview](./functions-bindings-event-grid.md). > [!NOTE]
-> Event Grid triggers aren't natively supported in an internal load balancer App Service Environment. The trigger uses an HTTP request that can't reach the function app without a gateway into the virtual network.
+> Event Grid triggers aren't natively supported in an internal load balancer App Service Environment (ASE). The trigger uses an HTTP request that can't reach the function app without a gateway into the virtual network.
## Example
-# [C#](#tab/csharp)
+
+For an HTTP trigger example, see [Receive events to an HTTP endpoint](../event-grid/receive-events.md).
+
+The type of the input parameter used with an Event Grid trigger depends on these three factors:
-For an HTTP trigger example, see [Receive events to an HTTP endpoint](../event-grid/receive-events.md).
++ Functions runtime version
++ Binding extension version
++ Modality of the C# function
-### Version 3.x
-The following example shows a Functions 3.x [C# function](functions-dotnet-class-library.md) that binds to a `CloudEvent`:
+# [In-process](#tab/in-process)
+
+The following example shows a Functions version 3.x function that uses a `CloudEvent` binding parameter:
```cs using Azure.Messaging;
namespace Company.Function
} ```
-The following example shows a Functions 3.x [C# function](functions-dotnet-class-library.md) that binds to an `EventGridEvent`:
-
-```cs
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Extensions.EventGrid;
-using Azure.Messaging.EventGrid;
-using Microsoft.Extensions.Logging;
-
-namespace Company.Function
-{
- public static class EventGridEventTriggerFunction
- {
- [FunctionName("EventGridEventTriggerFunction")]
- public static void Run(
- ILogger logger,
- [EventGridTrigger] EventGridEvent e)
- {
- logger.LogInformation("Event received {type} {subject}", e.EventType, e.Subject);
- }
- }
-}
-```
-
-### C# (2.x and higher)
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that binds to `EventGridEvent`:
+The following example shows a Functions version 3.x function that uses an `EventGridEvent` binding parameter:
```cs
-using System;
using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Host;
using Microsoft.Azure.EventGrid.Models; using Microsoft.Azure.WebJobs.Extensions.EventGrid; using Microsoft.Extensions.Logging;
namespace Company.Function
public static class EventGridTriggerDemo { [FunctionName("EventGridTriggerDemo")]
- public static void Run([EventGridTrigger]EventGridEvent eventGridEvent, ILogger log)
+ public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
{ log.LogInformation(eventGridEvent.Data.ToString()); }
namespace Company.Function
} ```
-For more information, see Packages, [Attributes](#attributes-and-annotations), [Configuration](#configuration), and [Usage](#usage).
-
-### Version 1.x
-
-The following example shows a Functions 1.x [C# function](functions-dotnet-class-library.md) that binds to `JObject`:
+The following example shows a function that uses a `JObject` binding parameter:
```cs using Microsoft.Azure.WebJobs;
namespace Company.Function
public static class EventGridTriggerCSharp { [FunctionName("EventGridTriggerCSharp")]
- public static void Run([EventGridTrigger]JObject eventGridEvent, ILogger log)
+ public static void Run([EventGridTrigger] JObject eventGridEvent, ILogger log)
{ log.LogInformation(eventGridEvent.ToString(Formatting.Indented)); } } } ```
+# [Isolated process](#tab/isolated-process)
+
+When running your C# function in an isolated process, you need to define a custom type for event properties. The following example defines a `MyEventType` class.
++
+The following example shows how the custom type is used in both the trigger and an Event Grid output binding:
+ # [C# Script](#tab/csharp-script)
-The following example shows a trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding.
+The following example shows an Event Grid trigger defined in the *function.json* file.
Here's the binding data in the *function.json* file:
Here's the binding data in the *function.json* file:
} ```
-### Version 2.x and higher
-
-Here's an example that binds to `EventGridEvent`:
+Here's an example of a C# script function that uses an `EventGridEvent` binding parameter:
```csharp #r "Microsoft.Azure.EventGrid"
public static void Run(EventGridEvent eventGridEvent, ILogger log)
} ```
-For more information, see Packages, [Attributes](#attributes-and-annotations), [Configuration](#configuration), and [Usage](#usage).
+For more information, see Packages, [Attributes](#attributes), [Configuration](#configuration), and [Usage](#usage).
-### Version 1.x
-Here's Functions 1.x C# script code that binds to `JObject`:
+Here's an example of a C# script function that uses a `JObject` binding parameter:
```cs #r "Newtonsoft.Json"
public static void Run(JObject eventGridEvent, TraceWriter log)
} ```
-# [Java](#tab/java)
++ This section contains the following examples: * [Event Grid trigger, String parameter](#event-grid-trigger-string-parameter) * [Event Grid trigger, POJO parameter](#event-grid-trigger-pojo-parameter)
-The following examples show trigger binding in [Java](functions-reference-java.md) that use the binding and print out an event, first receiving the event as `String` and second as a POJO.
+The following examples show trigger binding in [Java](functions-reference-java.md) that use the binding and generate an event, first receiving the event as `String` and second as a POJO.
### Event Grid trigger, String parameter
Upon arrival, the event's JSON payload is de-serialized into the ```EventSchema`
``` In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `EventGridTrigger` annotation on parameters whose value would come from EventGrid. Parameters with these annotations cause the function to run when an event arrives. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.-
-# [JavaScript](#tab/javascript)
- The following example shows a trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. Here's the binding data in the *function.json* file:
module.exports = async function (context, eventGridEvent) {
    context.log("Data: " + JSON.stringify(eventGridEvent.data));
};
```
-# [PowerShell](#tab/powershell)
The following example shows how to configure an Event Grid trigger binding in the *function.json* file.
param($eventGridEvent, $TriggerMetadata)
# Make sure to pass hashtables to Out-String so they're logged correctly
$eventGridEvent | Out-String | Write-Host
```
-# [Python](#tab/python)
- The following example shows a trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. Here's the binding data in the *function.json* file:
def main(event: func.EventGridEvent):
    logging.info('Python EventGrid trigger processed an event: %s', result)
```
+## Attributes
--
-## Attributes and annotations
-
-# [C#](#tab/csharp)
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file.
-In [C# class libraries](functions-dotnet-class-library.md), use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute.
+# [In-process](#tab/in-process)
Here's an `EventGridTrigger` attribute in a method signature:
Here's an `EventGridTrigger` attribute in a method signature:
[FunctionName("EventGridTest")]
public static void EventGridTest([EventGridTrigger] JObject eventGridEvent, ILogger log)
{
- ...
-}
```
+# [Isolated process](#tab/isolated-process)
-For a complete example, see C# example.
-
-# [C# Script](#tab/csharp-script)
-
-Attributes are not supported by C# Script.
-
-# [Java](#tab/java)
-
-The [EventGridTrigger](https://github.com/Azure/azure-functions-java-library/blob/master/src/main/java/com/microsoft/azure/functions/annotation/EventGridTrigger.java) annotation allows you to declaratively configure an Event Grid binding by providing configuration values. See the [example](#example) and [configuration](#configuration) sections for more detail.
-
-# [JavaScript](#tab/javascript)
+Here's an `EventGridTrigger` attribute in a method signature:
-Attributes are not supported by JavaScript.
-# [PowerShell](#tab/powershell)
+# [C# script](#tab/csharp-script)
-Attributes are not supported by PowerShell.
+C# script uses a function.json file for configuration instead of attributes.
-# [Python](#tab/python)
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file. There are no constructor parameters or properties to set in the `EventGridTrigger` attribute.
-Attributes are not supported by Python.
+|function.json property |Description|
+|---|---|
+| **type** | Required - must be set to `eventGridTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code for the parameter that receives the event data. |
+## Annotations
+
+The [EventGridTrigger](/java/api/com.microsoft.azure.functions.annotation.eventgridtrigger) annotation allows you to declaratively configure an Event Grid binding by providing configuration values. See the [example](#example) and [configuration](#configuration) sections for more detail.
## Configuration

The following table explains the binding configuration properties that you set in the *function.json* file. There are no constructor parameters or properties to set in the `EventGridTrigger` attribute.
|function.json property |Description|
|---|---|
| **type** | Required - must be set to `eventGridTrigger`. |
| **direction** | Required - must be set to `in`. |
| **name** | Required - the variable name used in function code for the parameter that receives the event data. |
+See the [Example section](#example) for complete examples.
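+As an illustration, a minimal *function.json* that wires up an Event Grid trigger with these three properties might look like the following sketch (the `eventGridEvent` variable name is just a conventional choice):

```json
{
  "bindings": [
    {
      "type": "eventGridTrigger",
      "direction": "in",
      "name": "eventGridEvent"
    }
  ]
}
```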
## Usage
-# [C#](#tab/csharp)
+The parameter type supported by the Event Grid trigger depends on the Functions runtime version, the extension package version, and the C# modality used.
-In Azure Functions 1.x, you can use the following parameter types for the Event Grid trigger:
+# [Extension v3.x](#tab/extensionv3/in-process)
-* `JObject`
-* `string`
+In-process C# class library functions support the following types:
-In Azure Functions 2.x and higher, you also have the option to use the following parameter type for the Event Grid trigger:
++ [Azure.Messaging.CloudEvent][CloudEvent]
++ [Azure.Messaging.EventGrid.EventGridEvent][EventGridEvent2]
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
-* `Microsoft.Azure.EventGrid.Models.EventGridEvent`- Defines properties for the fields common to all event types.
+# [Extension v2.x](#tab/extensionv2/in-process)
-> [!NOTE]
-> If you try to bind to `Microsoft.Azure.WebJobs.Extensions.EventGrid.EventGridEvent`, the compiler displays a "deprecated" message and advises you to use `Microsoft.Azure.EventGrid.Models.EventGridEvent` instead. To use the newer type, reference the [Microsoft.Azure.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.EventGrid) NuGet package and fully qualify the `EventGridEvent` type name by prefixing it with `Microsoft.Azure.EventGrid.Models`.
+In-process C# class library functions support the following types:
-### Additional types
++ [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent]
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
-Apps using the 3.0.0 or higher version of the Event Grid extension use the `EventGridEvent` type from the [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid.eventgridevent) namespace. In addition, you can bind to the `CloudEvent` type from the [Azure.Messaging](/dotnet/api/azure.messaging.cloudevent) namespace.
+# [Functions 1.x](#tab/functionsv1/in-process)
-# [C# Script](#tab/csharp-script)
+In-process C# class library functions support the following types:
-In Azure Functions 1.x, you can use the following parameter types for the Event Grid trigger:
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
-* `JObject`
-* `string`
+# [Extension v3.x](#tab/extensionv3/isolated-process)
-In Azure Functions 2.x and higher, you also have the option to use the following parameter type for the Event Grid trigger:
+Requires you to define a custom type, or use a string. See the [Example section](#example) for examples of using a custom parameter type.
-* `Microsoft.Azure.EventGrid.Models.EventGridEvent`- Defines properties for the fields common to all event types.
+# [Extension v2.x](#tab/extensionv2/isolated-process)
-> [!NOTE]
-> If you try to bind to `Microsoft.Azure.WebJobs.Extensions.EventGrid.EventGridEvent`, the compiler will display a "deprecated" message and advise you to use `Microsoft.Azure.EventGrid.Models.EventGridEvent` instead. To use the newer type, reference the [Microsoft.Azure.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.EventGrid) NuGet package and fully qualify the `EventGridEvent` type name by prefixing it with `Microsoft.Azure.EventGrid.Models`. For information about how to reference NuGet packages in a C# script function, see [Using NuGet packages](functions-reference-csharp.md#using-nuget-packages)
+Requires you to define a custom type, or use a string. See the [Example section](#example) for examples of using a custom parameter type.
-### Additional types
+# [Functions 1.x](#tab/functionsv1/isolated-process)
-Apps using the 3.0.0 or higher version of the Event Grid extension use the `EventGridEvent` type from the [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid.eventgridevent) namespace. In addition, you can bind to the `CloudEvent` type from the [Azure.Messaging](/dotnet/api/azure.messaging.cloudevent) namespace.
+Functions version 1.x doesn't support isolated process.
-# [Java](#tab/java)
+# [Extension v3.x](#tab/extensionv3/csharp-script)
-The Event Grid event instance is available via the parameter associated to the `EventGridTrigger` attribute, typed as an `EventSchema`. See the [example](#example) for more detail.
+C# script functions support the following types:
-# [JavaScript](#tab/javascript)
++ [Azure.Messaging.CloudEvent][CloudEvent]
++ [Azure.Messaging.EventGrid.EventGridEvent][EventGridEvent2]
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
-The Event Grid instance is available via the parameter configured in the *function.json* file's `name` property.
+# [Extension v2.x](#tab/extensionv2/csharp-script)
-# [PowerShell](#tab/powershell)
+C# script functions support the following types:
-The Event Grid instance is available via the parameter configured in the *function.json* file's `name` property.
++ [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent]
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
-# [Python](#tab/python)
+# [Functions 1.x](#tab/functionsv1/csharp-script)
-The Event Grid instance is available via the parameter configured in the *function.json* file's `name` property, typed as `func.EventGridEvent`.
+C# script functions support the following types:
+
++ [Newtonsoft.Json.Linq.JObject][JObject]
++ [System.String][String]
+The Event Grid event instance is available via the parameter associated with the `EventGridTrigger` annotation, typed as an `EventSchema`.
+The Event Grid instance is available via the parameter configured in the *function.json* file's `name` property.
+The Event Grid instance is available via the parameter configured in the *function.json* file's `name` property, typed as `func.EventGridEvent`.
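+Outside the Functions runtime, the shape of the handling code can be sketched with only the standard library; the payload and field names below are illustrative, not a real event:

```python
import json

# A trimmed, hypothetical Event Grid payload as delivered over HTTP.
payload = '''{
  "id": "42",
  "eventType": "Contoso.Items.ItemReceived",
  "subject": "items/1",
  "data": {"itemSku": "Widget"}
}'''

event = json.loads(payload)
print(event["eventType"])        # a common top-level property
print(event["data"]["itemSku"])  # an event-specific data property
```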
+
## Event schema

Data for an Event Grid event is received as a JSON object in the body of an HTTP request. The JSON looks similar to the following example:
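A representative payload (here a hypothetical Blob Storage `BlobCreated` event, trimmed to a few `data` fields) has this shape:

```json
[
  {
    "topic": "/subscriptions/{subscription-id}/resourceGroups/{group}/providers/Microsoft.Storage/storageAccounts/{account}",
    "subject": "/blobServices/default/containers/images/blobs/photo.png",
    "eventType": "Microsoft.Storage.BlobCreated",
    "eventTime": "2022-03-04T21:03:07.000Z",
    "id": "831e1650-001e-001b-66ab-eeb76e069631",
    "data": {
      "api": "PutBlob",
      "contentType": "image/png",
      "url": "https://{account}.blob.core.windows.net/images/photo.png"
    },
    "dataVersion": "2",
    "metadataVersion": "1"
  }
]
```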
The top-level properties in the event JSON data are the same among all event types.
For explanations of the common and event-specific properties, see [Event properties](../event-grid/event-schema.md#event-properties) in the Event Grid documentation.
-The `EventGridEvent` type defines only the top-level properties; the `Data` property is a `JObject`.
-
-## Create a subscription
-
-To start receiving Event Grid HTTP requests, create an Event Grid subscription that specifies the endpoint URL that invokes the function.
-
-### Azure portal
-
-For functions that you develop in the Azure portal with the Event Grid trigger, select **Integration** then choose the **Event Grid Trigger** and select **Create Event Grid subscription**.
--
-When you select this link, the portal opens the **Create Event Subscription** page with the current trigger endpoint already defined.
--
-For more information about how to create subscriptions by using the Azure portal, see [Create custom event - Azure portal](../event-grid/custom-event-quickstart-portal.md) in the Event Grid documentation.
-
-### Azure CLI
-
-To create a subscription by using [the Azure CLI](/cli/azure/get-started-with-azure-cli), use the [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) command.
-
-The command requires the endpoint URL that invokes the function. The following example shows the version-specific URL pattern:
-
-#### Version 2.x (and higher) runtime
-
-```http
-https://{functionappname}.azurewebsites.net/runtime/webhooks/eventgrid?functionName={functionname}&code={systemkey}
-```
-
-#### Version 1.x runtime
-
-```http
-https://{functionappname}.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName={functionname}&code={systemkey}
-```
-
-The system key is an authorization key that has to be included in the endpoint URL for an Event Grid trigger. The following section explains how to get the system key.
-
-Here's an example that subscribes to a blob storage account (with a placeholder for the system key):
-
-#### Version 2.x (and higher) runtime
-
-# [Bash](#tab/bash)
-
-```azurecli
-az eventgrid resource event-subscription create -g myResourceGroup \
- --provider-namespace Microsoft.Storage --resource-type storageAccounts \
- --resource-name myblobstorage12345 --name myFuncSub \
- --included-event-types Microsoft.Storage.BlobCreated \
- --subject-begins-with /blobServices/default/containers/images/blobs/ \
- --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key>
-```
-
-# [Cmd](#tab/cmd)
-
-```azurecli
-az eventgrid resource event-subscription create -g myResourceGroup ^
- --provider-namespace Microsoft.Storage --resource-type storageAccounts ^
- --resource-name myblobstorage12345 --name myFuncSub ^
- --included-event-types Microsoft.Storage.BlobCreated ^
- --subject-begins-with /blobServices/default/containers/images/blobs/ ^
- --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key>
-```
---
-#### Version 1.x runtime
-
-# [Bash](#tab/bash)
-
-```azurecli
-az eventgrid resource event-subscription create -g myResourceGroup \
- --provider-namespace Microsoft.Storage --resource-type storageAccounts \
- --resource-name myblobstorage12345 --name myFuncSub \
- --included-event-types Microsoft.Storage.BlobCreated \
- --subject-begins-with /blobServices/default/containers/images/blobs/ \
- --endpoint https://mystoragetriggeredfunction.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName=imageresizefunc&code=<key>
-```
-
-# [Cmd](#tab/cmd)
-
-```azurecli
-az eventgrid resource event-subscription create -g myResourceGroup ^
- --provider-namespace Microsoft.Storage --resource-type storageAccounts ^
- --resource-name myblobstorage12345 --name myFuncSub ^
- --included-event-types Microsoft.Storage.BlobCreated ^
- --subject-begins-with /blobServices/default/containers/images/blobs/ ^
- --endpoint https://mystoragetriggeredfunction.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName=imageresizefunc&code=<key>
-```
---
-For more information about how to create a subscription, see [the blob storage quickstart](../storage/blobs/storage-blob-event-quickstart.md#subscribe-to-your-storage-account) or the other Event Grid quickstarts.
-
-### Get the system key
-
-You can get the system key by using the following API (HTTP GET):
-
-#### Version 2.x (and higher) runtime
-
-```
-http://{functionappname}.azurewebsites.net/admin/host/systemkeys/eventgrid_extension?code={masterkey}
-```
-
-#### Version 1.x runtime
-
-```
-http://{functionappname}.azurewebsites.net/admin/host/systemkeys/eventgridextensionconfig_extension?code={masterkey}
-```
-
-This is an admin API, so it requires your function app [master key](functions-bindings-http-webhook-trigger.md#authorization-keys). Don't confuse the system key (for invoking an Event Grid trigger function) with the master key (for performing administrative tasks on the function app). When you subscribe to an Event Grid topic, be sure to use the system key.
-
-Here's an example of the response that provides the system key:
-
-```
-{
- "name": "eventgridextensionconfig_extension",
- "value": "{the system key for the function}",
- "links": [
- {
- "rel": "self",
- "href": "{the URL for the function, without the system key}"
- }
- ]
-}
-```
-
-You can get the master key for your function app from the **Function app settings** tab in the portal.
-
-> [!IMPORTANT]
-> The master key provides administrator access to your function app. Don't share this key with third parties or distribute it in native client applications.
-
-For more information, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys) in the HTTP trigger reference article.
-
-Alternatively, you can send an HTTP PUT to specify the key value yourself.
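-The two version-specific URL patterns differ only in the key name, so assembling the request URL can be sketched as follows (the app name and master key values are placeholders):

```python
def system_key_url(app_name: str, master_key: str, runtime_v1: bool = False) -> str:
    """Build the admin API URL that returns the Event Grid system key.

    Functions 1.x names the key 'eventgridextensionconfig_extension';
    version 2.x and higher use 'eventgrid_extension'.
    """
    key_name = "eventgridextensionconfig_extension" if runtime_v1 else "eventgrid_extension"
    return f"http://{app_name}.azurewebsites.net/admin/host/systemkeys/{key_name}?code={master_key}"

print(system_key_url("myfunctionapp", "{masterkey}"))
```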
-
-## Local testing with viewer web app
-
-To test an Event Grid trigger locally, you have to get Event Grid HTTP requests delivered from their origin in the cloud to your local machine. One way to do that is by capturing requests online and manually resending them on your local machine:
-
-1. [Create a viewer web app](#create-a-viewer-web-app) that captures event messages.
-1. [Create an Event Grid subscription](#create-an-event-grid-subscription) that sends events to the viewer app.
-1. [Generate a request](#generate-a-request) and copy the request body from the viewer app.
-1. [Manually post the request](#manually-post-the-request) to the localhost URL of your Event Grid trigger function.
-
-When you're done testing, you can use the same subscription for production by updating the endpoint. Use the [az eventgrid event-subscription update](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_update) Azure CLI command.
-
-### Create a viewer web app
-
-To simplify capturing event messages, you can deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
-
-Select **Deploy to Azure** to deploy the solution to your subscription. In the Azure portal, provide values for the parameters.
-
-<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fazure-event-grid-viewer%2Fmaster%2Fazuredeploy.json" target="_blank"><img src="https://azuredeploy.net/deploybutton.png" alt="Button to deploy to Azure."></a>
-
-The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
-`https://<your-site-name>.azurewebsites.net`
-
-You see the site but no events have been posted to it yet.
-
-![View new site](media/functions-bindings-event-grid/view-site.png)
-
-### Create an Event Grid subscription
-
-Create an Event Grid subscription of the type you want to test, and give it the URL from your web app as the endpoint for event notification. The endpoint for your web app must include the suffix `/api/updates/`. So, the full URL is `https://<your-site-name>.azurewebsites.net/api/updates`
-
-For information about how to create subscriptions by using the Azure portal, see [Create custom event - Azure portal](../event-grid/custom-event-quickstart-portal.md) in the Event Grid documentation.
-
-### Generate a request
-
-Trigger an event that will generate HTTP traffic to your web app endpoint. For example, if you created a blob storage subscription, upload or delete a blob. When a request shows up in your web app, copy the request body.
-
-The subscription validation request will be received first; ignore any validation requests, and copy the event request.
-
-![Copy request body from web app](media/functions-bindings-event-grid/view-results.png)
-
-### Manually post the request
-
-Run your Event Grid function locally. The `Content-Type` and `aeg-event-type` headers must be set manually, while all other values can be left at their defaults.
-
-Use a tool such as [Postman](https://www.getpostman.com/) or [curl](https://curl.haxx.se/docs/httpscripting.html) to create an HTTP POST request:
-
-* Set a `Content-Type: application/json` header.
-* Set an `aeg-event-type: Notification` header.
-* Paste the request body you copied from the viewer app into the request body.
-* Post to the URL of your Event Grid trigger function.
- * For 2.x and higher use the following pattern:
-
- ```
- http://localhost:7071/runtime/webhooks/eventgrid?functionName={FUNCTION_NAME}
- ```
-
- * For 1.x use:
-
- ```
- http://localhost:7071/admin/extensions/EventGridExtensionConfig?functionName={FUNCTION_NAME}
- ```
-
-The `functionName` parameter must be the name specified in the `FunctionName` attribute.
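-The checklist above can be sketched as a small helper that assembles the URL, headers, and body for the manual POST; nothing is sent here, and the function name is illustrative. Pass the result to the HTTP client of your choice:

```python
import json

def local_trigger_request(function_name: str, event_body: dict, runtime_v1: bool = False):
    """Return (url, headers, body) for manually posting a captured Event Grid
    event to a function app running locally on the default port 7071."""
    path = ("/admin/extensions/EventGridExtensionConfig" if runtime_v1
            else "/runtime/webhooks/eventgrid")
    url = f"http://localhost:7071{path}?functionName={function_name}"
    headers = {
        "Content-Type": "application/json",
        "aeg-event-type": "Notification",
    }
    return url, headers, json.dumps(event_body)

url, headers, body = local_trigger_request("MyEventGridFunc", {"id": "1"})
```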
-
-The following screenshots show the headers and request body in Postman:
-
-![Headers in Postman](media/functions-bindings-event-grid/postman2.png)
-
-![Request body in Postman](media/functions-bindings-event-grid/postman.png)
-
-The Event Grid trigger function executes and shows logs similar to the following example:
-
-![Sample Event Grid trigger function logs](media/functions-bindings-event-grid/eg-output.png)
- ## Next steps * [Dispatch an Event Grid event](./functions-bindings-event-grid-output.md)+
+[EventGridEvent]: /dotnet/api/microsoft.azure.eventgrid.models.eventgridevent
+[EventGridEvent2]: /dotnet/api/azure.messaging.eventgrid.eventgridevent
+[CloudEvent]: /dotnet/api/azure.messaging.cloudevent
+[JObject]: https://www.newtonsoft.com/json/help/html/t_newtonsoft_json_linq_jobject.htm
+[String]: /dotnet/api/system.string
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
Title: Azure Event Grid bindings for Azure Functions description: Understand how to handle Event Grid events in Azure Functions.- Previously updated : 02/14/2020- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Event Grid bindings for Azure Functions
-This reference explains how to handle [Event Grid](../event-grid/overview.md) events in Azure Functions. For details on how to handle Event Grid messages in an HTTP end point, see [Receive events to an HTTP endpoint](../event-grid/receive-events.md).
+This reference shows how to connect to Azure Event Grid using Azure Functions triggers and bindings.
-Event Grid is an Azure service that sends HTTP requests to notify you about events that happen in *publishers*. A publisher is the service or resource that originates the event. For example, an Azure blob storage account is a publisher, and [a blob upload or deletion is an event](../storage/blobs/storage-blob-event-overview.md). Some [Azure services have built-in support for publishing events to Event Grid](../event-grid/overview.md#event-sources).
-
-Event *handlers* receive and process events. Azure Functions is one of several [Azure services that have built-in support for handling Event Grid events](../event-grid/overview.md#event-handlers). In this reference, you learn to use an Event Grid trigger to invoke a function when an event is received from Event Grid, and to use the output binding to send events to an [Event Grid custom topic](../event-grid/post-to-custom-topic.md).
-
-If you prefer, you can use an HTTP trigger to handle Event Grid Events; see [Receive events to an HTTP endpoint](../event-grid/receive-events.md). Currently, you can't use an Event Grid trigger for an Azure Functions app when the event is delivered in the [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). Instead, use an HTTP trigger.
| Action | Type |
|---|---|
-| Run a function when an Event Grid event is dispatched | [Trigger](./functions-bindings-event-grid-trigger.md) |
-| Sends an Event Grid event |[Output binding](./functions-bindings-event-grid-output.md) |
+| Run a function when an Event Grid event is dispatched | [Trigger][trigger] |
+| Sends an Event Grid event | [Output binding][binding] |
+| Control the returned HTTP status code | [HTTP endpoint](../event-grid/receive-events.md) |
-The code in this reference defaults to .NET Core syntax, used in Functions version 2.x and higher. For information on the 1.x syntax, see the [1.x functions templates](https://github.com/Azure/azure-functions-templates/tree/v1.x/Functions.Templates/Templates).
-## Add to your Functions app
-### Functions 2.x and higher
+## Install extension
-Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+The extension NuGet package you install depends on the C# mode you're using in your function app:
-| Language | Add by... | Remarks |
-||||
-| C# | Installing the [NuGet package], version 2.x | |
-| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) is recommended to use with Visual Studio Code. |
-| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+# [In-process](#tab/in-process)
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid
-[Update your extensions]: ./functions-bindings-register.md
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+++
+The functionality of the extension varies depending on the extension version:
+
+# [Extension v3.x](#tab/extensionv3/in-process)
+
+This version of the extension supports updated Event Grid binding parameter types of [Azure.Messaging.CloudEvent](/dotnet/api/azure.messaging.cloudevent) and [Azure.Messaging.EventGrid.EventGridEvent](/dotnet/api/azure.messaging.eventgrid.eventgridevent).
+
+Add this version of the extension to your project by installing the [NuGet package], version 3.x.
+
+# [Extension v2.x](#tab/extensionv2/in-process)
+
+Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent).
+
+Add the extension to your project by installing the [NuGet package], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1/in-process)
+
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+
+The Event Grid output binding is only available for Functions 2.x and higher.
+
+# [Extension v3.x](#tab/extensionv3/isolated-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.EventGrid), version 3.x.
+
+# [Extension v2.x](#tab/extensionv2/isolated-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.EventGrid), version 2.x.
+
+# [Functions 1.x](#tab/functionsv1/isolated-process)
+
+Functions version 1.x doesn't support isolated process.
+
+The Event Grid output binding is only available for Functions 2.x and higher.
-#### Event Grid extension 3.x and higher
+# [Extension v3.x](#tab/extensionv3/csharp-script)
-A new version of the Event Grid bindings extension is now available. For .NET applications, it changes the types that you can bind to, replacing the types from `Microsoft.Azure.EventGrid.Models` with newer types from [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid). [Cloud events](/dotnet/api/azure.messaging.cloudevent) are also supported in the new Event Grid extension.
+This version of the extension supports updated Event Grid binding parameter types of [Azure.Messaging.CloudEvent](/dotnet/api/azure.messaging.cloudevent) and [Azure.Messaging.EventGrid.EventGridEvent](/dotnet/api/azure.messaging.eventgrid.eventgridevent).
-This extension version is available by installing the [NuGet package], version 3.x, or it can be added from the extension bundle v3 by adding the following in your `host.json` file:
+You can install this version of the extension in your function app by registering the [extension bundle], version 3.x.
+
+# [Extension v2.x](#tab/extensionv2/csharp-script)
+
+Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent).
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1/csharp-script)
+
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+
+The Event Grid output binding is only available for Functions 2.x and higher.
+++++
+## Install bundle
+
+The Event Grid extension is part of an [extension bundle], which is specified in your host.json project file. You may need to modify this bundle to change the version of the Event Grid binding, or if bundles aren't already installed. To learn more, see [extension bundle].
+
+# [Bundle v3.x](#tab/extensionv3)
+
+This version of the extension supports updated Event Grid binding parameter types of [Azure.Messaging.CloudEvent](/dotnet/api/azure.messaging.cloudevent) and [Azure.Messaging.EventGrid.EventGridEvent](/dotnet/api/azure.messaging.eventgrid.eventgridevent).
+
+You can add this version of the extension from the extension bundle v3 by adding or replacing the following code in your `host.json` file:
```json
{
This extension version is available by installing the [NuGet package], version 3
}
```

To learn more, see [Update your extensions].
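For reference, a minimal *host.json* that pins a v3 bundle might look like the following sketch; the exact version range shown is an assumption, so check the current bundle releases:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.3.0, 4.0.0)"
  }
}
```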
-### Functions 1.x
+# [Bundle v2.x](#tab/extensionv2)
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1)
+
+The Event Grid output binding is only available for Functions 2.x and higher.
++
-Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
## Next steps+
+* [Event Grid trigger][trigger]
+* [Event Grid output binding][binding]
* [Run a function when an Event Grid event is dispatched](./functions-bindings-event-grid-trigger.md)
* [Dispatch an Event Grid event](./functions-bindings-event-grid-output.md)
+[binding]: functions-bindings-event-grid-output.md
+[trigger]: functions-bindings-event-grid-trigger.md
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid
+[Update your extensions]: ./functions-bindings-register.md
+
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
Title: Azure Event Hubs output binding for Azure Functions description: Learn to write messages to Azure Event Hubs streams using Azure Functions.- ms.assetid: daf81798-7acc-419a-bc32-b5a41c6db56b Previously updated : 02/21/2020- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Event Hubs output binding for Azure Functions
This article explains how to work with [Azure Event Hubs](../event-hubs/event-hu
For information on setup and configuration details, see the [overview](functions-bindings-event-hubs.md).
+Use the Event Hubs output binding to write events to an event stream. You must have send permission to an event hub to write events to it.
+
+Make sure the required package references are in place before you try to implement an output binding.
+
+## Example
++
+# [In-process](#tab/in-process)
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that writes a message to an event hub, using the method return value as the output:
+
+```csharp
+[FunctionName("EventHubOutput")]
+[return: EventHub("outputEventHubMessage", Connection = "EventHubConnectionAppSetting")]
+public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
+{
+ log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
+ return $"{DateTime.Now}";
+}
+```
+
+The following example shows how to use the `IAsyncCollector` interface to send a batch of messages. This scenario is common when you are processing messages coming from one Event Hub and sending the result to another Event Hub.
+
+```csharp
+[FunctionName("EH2EH")]
+public static async Task Run(
+ [EventHubTrigger("source", Connection = "EventHubConnectionAppSetting")] EventData[] events,
+ [EventHub("dest", Connection = "EventHubConnectionAppSetting")]IAsyncCollector<string> outputEvents,
+ ILogger log)
+{
+ foreach (EventData eventData in events)
+ {
+ // do some processing:
+ var myProcessedEvent = DoSomething(eventData);
+
+ // then send the message
+ await outputEvents.AddAsync(JsonConvert.SerializeObject(myProcessedEvent));
+ }
+}
+```
+# [Isolated process](#tab/isolated-process)
+
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that writes a message string to an event hub, using the method return value as the output:
++
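A minimal sketch of such a function, assuming the same event hub and connection setting names as the in-process example (`TimerPayload` is a hypothetical POCO for the timer data, not part of the official sample):

```csharp
// Sketch only; binding and type names are assumptions mirroring the in-process example.
[Function("EventHubOutput")]
[EventHubOutput("outputEventHubMessage", Connection = "EventHubConnectionAppSetting")]
public static string Run([TimerTrigger("0 */5 * * * *")] TimerPayload myTimer,
    FunctionContext context)
{
    var logger = context.GetLogger("EventHubOutput");
    logger.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
    // The string return value is written to the event hub by the output binding.
    return $"{DateTime.Now}";
}
```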
+# [C# Script](#tab/csharp-script)
+
+The following example shows an Event Hubs output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes a message to an event hub.
+
+The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions.
+
+```json
+{
+ "type": "eventHub",
+ "name": "outputEventHubMessage",
+ "eventHubName": "myeventhub",
+ "connection": "MyEventHubSendAppSetting",
+ "direction": "out"
+}
+```
+
+Here's C# script code that creates one message:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+public static void Run(TimerInfo myTimer, out string outputEventHubMessage, ILogger log)
+{
+ String msg = $"TimerTriggerCSharp1 executed at: {DateTime.Now}";
+ log.LogInformation(msg);
+ outputEventHubMessage = msg;
+}
+```
+
+Here's C# script code that creates multiple messages:
+
+```cs
+public static void Run(TimerInfo myTimer, ICollector<string> outputEventHubMessage, ILogger log)
+{
+ string message = $"Message created at: {DateTime.Now}";
+ log.LogInformation(message);
+ outputEventHubMessage.Add("1 " + message);
+ outputEventHubMessage.Add("2 " + message);
+}
+```
++++
+The following example shows an Event Hubs output binding in a *function.json* file and a function that uses the binding. The function writes an output message to an event hub.
+
+The following example shows Event Hubs binding data in the *function.json* file, which differs for version 1.x of the Functions runtime compared to later versions.
+
+# [Functions 2.x+](#tab/functionsv2)
+
+```json
+{
+ "type": "eventHub",
+ "name": "outputEventHubMessage",
+ "eventHubName": "myeventhub",
+ "connection": "MyEventHubSendAppSetting",
+ "direction": "out"
+}
+```
+
+# [Functions 1.x](#tab/functionsv1)
+
+```json
+{
+ "type": "eventHub",
+ "name": "outputEventHubMessage",
+ "path": "myeventhub",
+ "connection": "MyEventHubSendAppSetting",
+ "direction": "out"
+}
+```
+++
+Here's JavaScript code that sends a single message:
+
+```javascript
+module.exports = function (context, myTimer) {
+ var timeStamp = new Date().toISOString();
+ context.log('Message created at: ', timeStamp);
+ context.bindings.outputEventHubMessage = "Message created at: " + timeStamp;
+ context.done();
+};
+```
+
+Here's JavaScript code that sends multiple messages:
+
+```javascript
+module.exports = function(context) {
+ var timeStamp = new Date().toISOString();
+ var message = 'Message created at: ' + timeStamp;
+
+ context.bindings.outputEventHubMessage = [];
+
+ context.bindings.outputEventHubMessage.push("1 " + message);
+ context.bindings.outputEventHubMessage.push("2 " + message);
+ context.done();
+};
+```
+
+
+Complete PowerShell examples are pending.
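In the meantime, a minimal sketch of a timer-triggered *run.ps1* that writes one message, assuming an output binding named `outputEventHubMessage` in *function.json* (`Push-OutputBinding` is the standard Azure Functions PowerShell cmdlet):

```powershell
# Sketch only; the binding name is an assumption and must match function.json.
param($Timer)

$message = "Message created at: $((Get-Date).ToUniversalTime().ToString('o'))"
Push-OutputBinding -Name outputEventHubMessage -Value $message
```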
+The following example shows an Event Hubs output binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function writes a message to an event hub.
+
+The following examples show Event Hubs binding data in the *function.json* file.
+
+```json
+{
+ "type": "eventHub",
+ "name": "$return",
+ "eventHubName": "myeventhub",
+ "connection": "MyEventHubSendAppSetting",
+ "direction": "out"
+}
+```
+
+Here's Python code that sends a single message:
+
+```python
+import datetime
+import logging
+import azure.functions as func
++
+def main(timer: func.TimerRequest) -> str:
+ timestamp = datetime.datetime.utcnow()
+ logging.info('Message created at: %s', timestamp)
+ return 'Message created at: {}'.format(timestamp)
+```
+
+The following example shows a Java function that writes a message containing the current time to an Event Hub.
+
+```java
+@FunctionName("sendTime")
+@EventHubOutput(name = "event", eventHubName = "samples-workitems", connection = "AzureEventHubConnection")
+public String sendTime(
+ @TimerTrigger(name = "sendTimeTrigger", schedule = "0 */5 * * * *") String timerInfo) {
+ return LocalDateTime.now().toString();
+ }
+```
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@EventHubOutput` annotation on parameters whose value is published to the event hub. The parameter should be of type `OutputBinding<T>`, where `T` is a POJO or any native Java type.
+
+## Attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a *function.json* configuration file.
+
+# [In-process](#tab/in-process)
+
+Use the [EventHubAttribute] to define an output binding to an event hub, which supports the following properties.
+
+| Parameters | Description|
+||-|
+|**EventHubName** | The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. |
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
+
+# [Isolated process](#tab/isolated-process)
+
+Use the [EventHubOutputAttribute] to define an output binding to an event hub, which supports the following properties.
+
+| Parameters | Description|
+||-|
+|**EventHubName** | The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. |
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
+
+# [C# Script](#tab/csharp-script)
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+|||
+|**type** | Must be set to `eventHub`. |
+|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the event. |
+|**eventHubName** | Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. In Functions 1.x, this property is named `path`.|
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
+++
+## Annotations
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the [EventHubOutput](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput) annotation on parameters whose value is published to the event hub. The following settings are supported on the annotation:
+
++ [name](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput.name)
++ [dataType](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput.datatype)
++ [eventHubName](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput.eventhubname)
++ [connection](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput.connection)
+
+## Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file, which differs by runtime version.
+
+# [Functions 2.x+](#tab/functionsv2)
+
+|function.json property | Description|
+|||
+|**type** | Must be set to `eventHub`. |
+|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the event. |
+|**eventHubName** | Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
+
+# [Functions 1.x](#tab/functionsv1)
+
+|function.json property | Description|
+|||
+|**type** | Must be set to `eventHub`. |
+|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the event. |
+|**path** | Functions 1.x only. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
+++++
+## Usage
+
+The parameter type supported by the Event Hubs output binding depends on the Functions runtime version, the extension package version, and the C# modality used.
+
+# [Extension v5.x+](#tab/extensionv5/in-process)
+
+In-process C# class library functions support the following types:
+
++ [Azure.Messaging.EventHubs.EventData](/dotnet/api/azure.messaging.eventhubs.eventdata)
++ String
++ Byte array
++ Plain-old CLR object (POCO)
+
+This version of [EventData](/dotnet/api/azure.messaging.eventhubs.eventdata) drops support for the legacy `Body` type in favor of [EventBody](/dotnet/api/azure.messaging.eventhubs.eventdata.eventbody).
+
+Send messages by using a method parameter such as `out string paramName`. To write multiple messages, you can use `ICollector<string>` or `IAsyncCollector<string>` in place of `out string`.
+
+# [Extension v3.x+](#tab/extensionv3/in-process)
+
+In-process C# class library functions support the following types:
+
++ [Microsoft.Azure.EventHubs.EventData](/dotnet/api/microsoft.azure.eventhubs.eventdata)
++ String
++ Byte array
++ Plain-old CLR object (POCO)
+
+Send messages by using a method parameter such as `out string paramName`. To write multiple messages, you can use `ICollector<string>` or `IAsyncCollector<string>` in place of `out string`.
+
+# [Extension v5.x+](#tab/extensionv5/isolated-process)
+
+Requires you to define a custom type, or use a string.
+
+# [Extension v3.x+](#tab/extensionv3/isolated-process)
+
+Requires you to define a custom type, or use a string.
+
+# [Extension v5.x+](#tab/extensionv5/csharp-script)
+
+C# script functions support the following types:
+
++ [Azure.Messaging.EventHubs.EventData](/dotnet/api/azure.messaging.eventhubs.eventdata)
++ String
++ Byte array
++ Plain-old CLR object (POCO)
+
+This version of [EventData](/dotnet/api/azure.messaging.eventhubs.eventdata) drops support for the legacy `Body` type in favor of [EventBody](/dotnet/api/azure.messaging.eventhubs.eventdata.eventbody).
+
+Send messages by using a method parameter such as `out string paramName`, where `paramName` is the value specified in the `name` property of *function.json*. To write multiple messages, you can use `ICollector<string>` or `IAsyncCollector<string>` in place of `out string`.
+
+# [Extension v3.x+](#tab/extensionv3/csharp-script)
+
+C# script functions support the following types:
+
++ [Microsoft.Azure.EventHubs.EventData](/dotnet/api/microsoft.azure.eventhubs.eventdata)
++ String
++ Byte array
++ Plain-old CLR object (POCO)
+
+Send messages by using a method parameter such as `out string paramName`, where `paramName` is the value specified in the `name` property of *function.json*. To write multiple messages, you can use `ICollector<string>` or
+`IAsyncCollector<string>` in place of `out string`.
++++
+There are two options for outputting an Event Hub message from a function by using the [EventHubOutput](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput) annotation:
+
+- **Return value**: Apply the annotation to the function itself; the return value of the function is then persisted as an Event Hub message.
+
+- **Imperative**: To explicitly set the message value, apply the annotation to a specific parameter of the type [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.OutputBinding), where `T` is a POJO or any native Java type. With this configuration, passing a value to the `setValue` method persists the value as an Event Hub message.
+
+Complete PowerShell examples are pending.
+
+Access the output event by using `context.bindings.<name>` where `<name>` is the value specified in the `name` property of *function.json*.
++
+There are two options for outputting an Event Hub message from a function:
+
+- **Return value**: Set the `name` property in *function.json* to `$return`. With this configuration, the function's return value is persisted as an Event Hub message.
+
+- **Imperative**: Pass a value to the [set](/python/api/azure-functions/azure.functions.out#set-val--t--none) method of the parameter declared as an [Out](/python/api/azure-functions/azure.functions.out) type. The value passed to `set` is persisted as an Event Hub message.
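The imperative option can be illustrated with a simplified stand-in for the `Out` type (the real type comes from the `azure.functions` package; the `Out` class below is an assumption for illustration only):

```python
import datetime

class Out:
    """Simplified stand-in for azure.functions.Out (illustration only)."""
    def __init__(self):
        self._value = None
    def set(self, val):
        self._value = val
    def get(self):
        return self._value

def main(event: Out) -> None:
    # Passing a value to set() is what persists it as an event hub message.
    event.set(f"Message created at: {datetime.datetime.utcnow().isoformat()}")

out = Out()
main(out)
print(out.get())
```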
+++
+## Exceptions and return codes
+
+| Binding | Reference |
+|||
+| Event Hub | [Operations Guide](/rest/api/eventhub/publisher-policy-operations) |
## Next steps

- [Respond to events sent to an event hub event stream (Trigger)](./functions-bindings-event-hubs-trigger.md)
+
+
+[EventHubAttribute]: /dotnet/api/microsoft.azure.webjobs.eventhubattribute
azure-functions Functions Bindings Event Hubs Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-trigger.md
Title: Azure Event Hubs trigger for Azure Functions description: Learn to use Azure Event Hubs trigger in Azure Functions.- ms.assetid: daf81798-7acc-419a-bc32-b5a41c6db56b Previously updated : 02/21/2020- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Event Hubs trigger for Azure Functions
azure-functions Functions Bindings Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs.md
Title: Azure Event Hubs bindings for Azure Functions description: Learn to use Azure Event Hubs trigger and bindings in Azure Functions.- ms.assetid: daf81798-7acc-419a-bc32-b5a41c6db56b Previously updated : 02/21/2020- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Event Hubs trigger and bindings for Azure Functions
azure-functions Functions Bindings Event Iot Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-iot-output.md
- Title: Azure IoT Hub output binding for Azure Functions
-description: Learn to write messages to Azure IoT Hubs streams using Azure Functions.
-- Previously updated : 02/21/2020---
-# Azure IoT Hub output binding for Azure Functions
-
-This article explains how to work with Azure Functions output bindings for IoT Hub. The IoT Hub support is based on the [Azure Event Hubs Binding](functions-bindings-event-hubs.md).
-
-For information on setup and configuration details, see the [overview](functions-bindings-event-iot.md).
-
-> [!IMPORTANT]
-> While the following code samples use the Event Hub API, the given syntax is applicable for IoT Hub functions.
--
-## Next steps
--- [Respond to events sent to an event hub event stream (Trigger)](./functions-bindings-event-iot-trigger.md)
azure-functions Functions Bindings Event Iot Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-iot-trigger.md
Title: Azure IoT Hub trigger for Azure Functions description: Learn to respond to events sent to an IoT hub event stream in Azure Functions.- Previously updated : 02/21/2020- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure IoT Hub trigger for Azure Functions
azure-functions Functions Bindings Event Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-iot.md
Title: Azure IoT Hub bindings for Azure Functions description: Learn to use IoT Hub trigger and binding in Azure Functions.- Previously updated : 02/21/2020- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure IoT Hub bindings for Azure Functions
This set of articles explains how to work with Azure Functions bindings for IoT
| Action | Type |
|--|--|
| Respond to events sent to an IoT hub event stream. | [Trigger](./functions-bindings-event-iot-trigger.md) |
-| Write events to an IoT event stream | [Output binding](./functions-bindings-event-iot-output.md) |
[!INCLUDE [functions-bindings-event-hubs](../../includes/functions-bindings-event-hubs.md)]
azure-functions Functions Bindings Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-example.md
Title: Azure Functions trigger and binding example description: Learn to configure Azure Function bindings - ms.devlang: csharp, javascript Last updated 02/08/2022- # Azure Functions trigger and binding example
azure-functions Functions Bindings Expressions Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-expressions-patterns.md
Title: Azure Functions bindings expressions and patterns description: Learn to create different Azure Functions binding expressions based on common patterns.- ms.devlang: csharp Last updated 02/18/2019- # Azure Functions binding expression patterns
azure-functions Functions Bindings Http Webhook Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-output.md
Title: Azure Functions HTTP output bindings description: Learn how to return HTTP responses in Azure Functions.-- Previously updated : 02/21/2020- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Functions HTTP output bindings
-Use the HTTP output binding to respond to the HTTP request sender. This binding requires an HTTP trigger and allows you to customize the response associated with the trigger's request.
+Use the HTTP output binding to respond to the HTTP request sender (HTTP trigger). This binding requires an HTTP trigger and allows you to customize the response associated with the trigger's request.
The default return value for an HTTP-triggered function is:

- `HTTP 204 No Content` with an empty body in Functions 2.x and higher
- `HTTP 200 OK` with an empty body in Functions 1.x
+## Attribute
+
+Neither [in-process](functions-dotnet-class-library.md) nor [isolated process](dotnet-isolated-process-guide.md) C# libraries require an attribute for this binding. C# script uses a *function.json* configuration file.
+
+# [In-process](#tab/in-process)
+
+A return value attribute isn't required. To learn more, see [Usage](#usage).
+
+# [Isolated process](#tab/isolated-process)
+
+A return value attribute isn't required. To learn more, see [Usage](#usage).
+
+# [C# Script](#tab/csharp-script)
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|Property |Description |
+|||
+| **type** |Must be set to `http`. |
+| **direction** | Must be set to `out`. |
+| **name** | The variable name used in function code for the response, or `$return` to use the return value. |
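As a sketch, a *function.json* entry using these properties might look like the following (the `res` variable name is an assumption matching common Functions templates):

```json
{
  "type": "http",
  "direction": "out",
  "name": "res"
}
```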
+++
+## Annotations
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the [HttpOutput](/java/api/com.microsoft.azure.functions.annotation.httpoutput) annotation to define an output variable other than the default variable returned by the function. This annotation supports the following settings:
+
++ [dataType](/java/api/com.microsoft.azure.functions.annotation.httpoutput.datatype)
++ [name](/java/api/com.microsoft.azure.functions.annotation.httpoutput.name)
+
## Configuration
-The following table explains the binding configuration properties that you set in the *function.json* file. For C# class libraries, there are no attribute properties that correspond to these *function.json* properties.
+The following table explains the binding configuration properties that you set in the *function.json* file.
|Property |Description | |||
The following table explains the binding configuration properties that you set i
| **direction** | Must be set to `out`. |
| **name** | The variable name used in function code for the response, or `$return` to use the return value. |
+
## Usage
-To send an HTTP response, use the language-standard response patterns. In C# or C# script, make the function return type `IActionResult` or `Task<IActionResult>`. In C#, a return value attribute isn't required.
-
-For example responses, see the [trigger example](./functions-bindings-http-webhook-trigger.md#example).
-
-## host.json settings
--
-> [!NOTE]
-> For a reference of host.json in Functions 1.x, see [host.json reference for Azure Functions 1.x](functions-host-json-v1.md#http).
-
-```json
-{
- "extensions": {
- "http": {
- "routePrefix": "api",
- "maxOutstandingRequests": 200,
- "maxConcurrentRequests": 100,
- "dynamicThrottlesEnabled": true,
- "hsts": {
- "isEnabled": true,
- "maxAge": "10"
- },
- "customHeaders": {
- "X-Content-Type-Options": "nosniff"
- }
- }
- }
-}
-```
-
-|Property |Default | Description |
-||||
-| customHeaders|none|Allows you to set custom headers in the HTTP response. The previous example adds the `X-Content-Type-Options` header to the response to avoid content type sniffing. |
-|dynamicThrottlesEnabled|true<sup>\*</sup>|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like `connections/threads/processes/memory/cpu/etc` and if any of those counters are over a built-in high threshold (80%), requests will be rejected with a `429 "Too Busy"` response until the counter(s) return to normal levels.<br/><sup>\*</sup>The default in a Consumption plan is `true`. The default in a Dedicated plan is `false`.|
-|hsts|not enabled|When `isEnabled` is set to `true`, the [HTTP Strict Transport Security (HSTS) behavior of .NET Core](/aspnet/core/security/enforcing-ssl?tabs=visual-studio#hsts) is enforced, as defined in the [`HstsOptions` class](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions). The above example also sets the [`maxAge`](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions.maxage#Microsoft_AspNetCore_HttpsPolicy_HstsOptions_MaxAge) property to 10 days. Supported properties of `hsts` are: <table><tr><th>Property</th><th>Description</th></tr><tr><td>excludedHosts</td><td>A string array of host names for which the HSTS header isn't added.</td></tr><tr><td>includeSubDomains</td><td>Boolean value that indicates whether the includeSubDomain parameter of the Strict-Transport-Security header is enabled.</td></tr><tr><td>maxAge</td><td>String that defines the max-age parameter of the Strict-Transport-Security header.</td></tr><tr><td>preload</td><td>Boolean that indicates whether the preload parameter of the Strict-Transport-Security header is enabled.</td></tr></table>|
-|maxConcurrentRequests|100<sup>\*</sup>|The maximum number of HTTP functions that are executed in parallel. This value allows you to control concurrency, which can help manage resource utilization. For example, you might have an HTTP function that uses a large number of system resources (memory/cpu/sockets) such that it causes issues when concurrency is too high. Or you might have a function that makes outbound requests to a third-party service, and those calls need to be rate limited. In these cases, applying a throttle here can help. <br/><sup>*</sup>The default for a Consumption plan is 100. The default for a Dedicated plan is unbounded (`-1`).|
-|maxOutstandingRequests|200<sup>\*</sup>|The maximum number of outstanding requests that are held at any given time. This limit includes requests that are queued but have not started executing, as well as any in progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. That allows callers to employ time-based retry strategies, and also helps you to control maximum request latencies. This only controls queuing that occurs within the script host execution path. Other queues such as the ASP.NET request queue will still be in effect and unaffected by this setting. <br/><sup>\*</sup>The default for a Consumption plan is 200. The default for a Dedicated plan is unbounded (`-1`).|
-|routePrefix|api|The route prefix that applies to all routes. Use an empty string to remove the default prefix. |
+To send an HTTP response, use the language-standard response patterns.
+
+The response type depends on the C# mode:
+
+# [In-process](#tab/in-process)
+
+The HTTP triggered function returns a type of [IActionResult](/dotnet/api/microsoft.aspnetcore.mvc.iactionresult) or `Task<IActionResult>`.
+
+# [Isolated process](#tab/isolated-process)
+
+The HTTP triggered function returns an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object.
+
+# [C# Script](#tab/csharp-script)
+
+The HTTP triggered function returns a type of [IActionResult](/dotnet/api/microsoft.aspnetcore.mvc.iactionresult) or `Task<IActionResult>`.
++++
+For Java, use an [HttpResponseMessage.Builder](/jav#httprequestmessage-and-httpresponsemessage).
++
+For example responses, see the [trigger examples](./functions-bindings-http-webhook-trigger.md#example).
## Next steps
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
Title: Azure Functions HTTP trigger description: Learn how to call an Azure Function via HTTP.-- Previously updated : 02/21/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Functions HTTP trigger
For more information about HTTP bindings, see the [overview](./functions-binding
## Example
-# [C#](#tab/csharp)
++
+The code in this article defaults to .NET Core syntax, used in Functions version 2.x and higher. For information on the 1.x syntax, see the [1.x functions templates](https://github.com/Azure/azure-functions-templates/tree/v1.x/Functions.Templates/Templates).
+
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that looks for a `name` parameter either in the query string or the body of the HTTP request. Notice that the return value is used for the output binding, but a return value attribute isn't required.
public static async Task<IActionResult> Run(
} ```
+# [Isolated process](#tab/isolated-process)
+
+The following example shows an HTTP trigger that returns a "hello world" response as an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object:
++
# [C# Script](#tab/csharp-script)

The following example shows a trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request.
public class Person {
} ```
-# [Java](#tab/java)
+++
+This section contains the following examples:
* [Read parameter from the query string](#read-parameter-from-the-query-string)
* [Read body from a POST request](#read-body-from-a-post-request)
public HttpResponseMessage run(
} ```
-# [JavaScript](#tab/javascript)
The following example shows a trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request.
module.exports = async function(context, req) {
}; ```
-# [PowerShell](#tab/powershell)
The following example shows a trigger binding in a *function.json* file and a [PowerShell function](functions-reference-powershell.md). The function looks for a `name` parameter either in the query string or the body of the HTTP request.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
}) ``` -
-# [Python](#tab/python)
- The following example shows a trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request. Here's the *function.json* file:
def main(req: func.HttpRequest) -> func.HttpResponse:
) ``` -
+## Attributes
-## Attributes and annotations
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file.
-In [C# class libraries](functions-dotnet-class-library.md) and Java, the `HttpTrigger` attribute is available to configure the function.
+# [In-process](#tab/in-process)
-You can set the authorization level and allowable HTTP methods in attribute constructor parameters, webhook type, and a route template. For more information about these settings, see [configuration](#configuration).
+In [in-process functions](functions-dotnet-class-library.md), the `HttpTriggerAttribute` supports the following parameters:
-# [C#](#tab/csharp)
+| Parameters | Description|
+||-|
+| **AuthLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). |
+| **Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **Route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **WebHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](#webhook-type).|
-This example demonstrates how to use the [HttpTrigger](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/dev/src/WebJobs.Extensions.Http/HttpTriggerAttribute.cs) attribute.
+# [Isolated process](#tab/isolated-process)
-```csharp
-[FunctionName("HttpTriggerCSharp")]
-public static Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req)
-{
- ...
-}
-```
+In [isolated process](dotnet-isolated-process-guide.md) function apps, the `HttpTriggerAttribute` supports the following parameters:
-For a complete example, see the [trigger example](#example).
+| Parameters | Description|
+||-|
+| **AuthLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). |
+| **Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **Route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
# [C# Script](#tab/csharp-script)
-Attributes are not supported by C# Script.
+The following table explains the trigger configuration properties that you set in the *function.json* file:
-# [Java](#tab/java)
+|function.json property | Description|
+|||
+| **type** | Required - must be set to `httpTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code for the request or request body. |
+| **authLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). |
+| **methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **webHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](#webhook-type).|
-This example demonstrates how to use the [HttpTrigger](https://github.com/Azure/azure-functions-java-library/blob/dev/src/main/java/com/microsoft/azure/functions/annotation/HttpTrigger.java) attribute.
+
-```java
-@FunctionName("HttpTriggerJava")
-public HttpResponseMessage<String> HttpTrigger(
- @HttpTrigger(name = "req",
- methods = {"get"},
- authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<String> request,
- final ExecutionContext context) {
+## Annotations
- ...
-}
-```
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the [HttpTrigger](/java/api/com.microsoft.azure.functions.annotation.httptrigger) annotation, which supports the following settings:
+
++ [authLevel](/java/api/com.microsoft.azure.functions.annotation.httptrigger.authlevel)
++ [dataType](/java/api/com.microsoft.azure.functions.annotation.httptrigger.datatype)
++ [methods](/java/api/com.microsoft.azure.functions.annotation.httptrigger.methods)
++ [name](/java/api/com.microsoft.azure.functions.annotation.httptrigger.name)
++ [route](/java/api/com.microsoft.azure.functions.annotation.httptrigger.route)
-For a complete example, see the [trigger example](#example).
+
+## Configuration
-# [JavaScript](#tab/javascript)
+The following table explains the trigger configuration properties that you set in the *function.json* file, which differs by runtime version.
-Attributes are not supported by JavaScript.
+# [Functions 2.x+](#tab/functionsv2)
-# [PowerShell](#tab/powershell)
+The following table explains the binding configuration properties that you set in the *function.json* file.
-Attributes are not supported by PowerShell.
+|function.json property | Description|
+|||
+| **type** | Required - must be set to `httpTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code for the request or request body. |
+| **authLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). |
+| **methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
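Taken together, a minimal *function.json* trigger using these properties might look like the following sketch (the method list and authorization level shown are illustrative):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "get", "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```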
-# [Python](#tab/python)
+# [Functions 1.x](#tab/functionsv1)
-Attributes are not supported by Python.
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+|||
+| **type** | Required - must be set to `httpTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code for the request or request body. |
+| **authLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). |
+| **methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **webHookType** | Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](#webhook-type).|
-## Configuration
+
+## Usage
-The following table explains the binding configuration properties that you set in the *function.json* file and the `HttpTrigger` attribute.
+This section details how to configure your HTTP trigger function binding.
-|function.json property | Attribute property |Description|
-|||-|
-| **type** | n/a| Required - must be set to `httpTrigger`. |
-| **direction** | n/a| Required - must be set to `in`. |
-| **name** | n/a| Required - the variable name used in function code for the request or request body. |
-| <a name="http-auth"></a>**authLevel** | **AuthLevel** |Determines what keys, if any, need to be present on the request in order to invoke the function. The authorization level can be one of the following values: <ul><li><code>anonymous</code>&mdash;No API key is required.</li><li><code>function</code>&mdash;A function-specific API key is required. This is the default value if none is provided.</li><li><code>admin</code>&mdash;The master key is required.</li></ul> For more information, see the section about [authorization keys](#authorization-keys). |
-| **methods** |**Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
-| **route** | **Route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
-| **webHookType** | **WebHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. Don't set the `methods` property if you set this property. The webhook type can be one of the following values:<ul><li><code>genericJson</code>&mdash;A general-purpose webhook endpoint without logic for a specific provider. This setting restricts requests to only those using HTTP POST and with the `application/json` content type.</li><li><code>github</code>&mdash;The function responds to [GitHub webhooks](https://developer.github.com/webhooks/). Do not use the _authLevel_ property with GitHub webhooks. For more information, see the GitHub webhooks section later in this article.</li><li><code>slack</code>&mdash;The function responds to [Slack webhooks](https://api.slack.com/outgoing-webhooks). Do not use the _authLevel_ property with Slack webhooks. For more information, see the Slack webhooks section later in this article.</li></ul>|
+The [HttpTrigger](/java/api/com.microsoft.azure.functions.annotation.httptrigger) annotation should be applied to a method parameter of one of the following types:
-## Payload
++ [HttpRequestMessage\<T\>](/java/api/com.microsoft.azure.functions.httprequestmessage).
++ Any native Java types such as int, String, byte[].
++ Nullable values using Optional.
++ Any plain-old Java object (POJO) type.
+
+### Payload
The trigger input type is declared as either `HttpRequest` or a custom type. If you choose `HttpRequest`, you get full access to the request object. For a custom type, the runtime tries to parse the JSON request body to set the object properties.
-## Customize the HTTP endpoint
+### Customize the HTTP endpoint
By default when you create a function for an HTTP trigger, the function is addressable with a route of the form:
```
http://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>
```
-You can customize this route using the optional `route` property on the HTTP trigger's input binding. As an example, the following *function.json* file defines a `route` property for an HTTP trigger:
+You can customize this route using the optional `route` property on the HTTP trigger's input binding. You can use any [Web API Route Constraint](https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2#constraints) with your parameters.
-```json
-{
- "bindings": [
- {
- "type": "httpTrigger",
- "name": "req",
- "direction": "in",
- "methods": [ "get" ],
- "route": "products/{category:alpha}/{id:int?}"
- },
- {
- "type": "http",
- "name": "res",
- "direction": "out"
- }
- ]
-}
-```
-Using this configuration, the function is now addressable with the following route instead of the original route.
+# [In-process](#tab/in-process)
-```
-http://<APP_NAME>.azurewebsites.net/api/products/electronics/357
-```
+The following C# function code accepts two parameters `category` and `id` in the route and writes a response using both parameters.
-This configuration allows the function code to support two parameters in the address, _category_ and _id_. For more information on how route parameters are tokenized in a URL, see [Routing in ASP.NET Core](/aspnet/core/fundamentals/routing#route-constraint-reference).
+```csharp
+[FunctionName("Function1")]
+public static IActionResult Run(
+[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "products/{category:alpha}/{id:int?}")] HttpRequest req,
+string category, int? id, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
-# [C#](#tab/csharp)
+    var message = $"Category: {category}, ID: {id}";
+ return (ActionResult)new OkObjectResult(message);
+}
+```
+# [Isolated process](#tab/isolated-process)
-You can use any [Web API Route Constraint](https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2#constraints) with your parameters. The following C# function code makes use of both parameters.
+The following function code accepts two parameters `category` and `id` in the route and writes a response using both parameters.
```csharp
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-
-public static IActionResult Run(HttpRequest req, string category, int? id, ILogger log)
+[Function("HttpTrigger1")]
+public static HttpResponseData Run([HttpTrigger(AuthorizationLevel.Function, "get", "post",
+Route = "products/{category:alpha}/{id:int?}")] HttpRequestData req, string category, int? id,
+FunctionContext executionContext)
{
+ var logger = executionContext.GetLogger("HttpTrigger1");
+ logger.LogInformation("C# HTTP trigger function processed a request.");
+    var message = $"Category: {category}, ID: {id}";
- return (ActionResult)new OkObjectResult(message);
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
+ response.WriteString(message);
+
+ return response;
}
```

# [C# Script](#tab/csharp-script)
-You can use any [Web API Route Constraint](https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2#constraints) with your parameters. The following C# function code makes use of both parameters.
+ The following C# function code makes use of both parameters.
```csharp
#r "Newtonsoft.Json"
public static IActionResult Run(HttpRequest req, string category, int? id, ILogg
}
```
-# [Java](#tab/java)
+
-The function execution context is properties as declared in the `HttpTrigger` attribute. The attribute allows you to define route parameters, authorization levels, HTTP verbs and the incoming request instance.
-Route parameters are defined via the `HttpTrigger` attribute.
+Route parameters are defined using the `route` setting of the `HttpTrigger` annotation. The following function code accepts two parameters `category` and `id` in the route and writes a response using both parameters.
```java
package com.function;
public class HttpTriggerJava {
}
```
-# [JavaScript](#tab/javascript)
-In Node, the Functions runtime provides the request body from the `context` object. For more information, see the [JavaScript trigger example](#example).
+As an example, the following *function.json* file defines a `route` property for an HTTP trigger with two parameters, `category` and `id`:
-The following example shows how to read route parameters from `context.bindingData`.
+```json
+{
+ "bindings": [
+ {
+ "type": "httpTrigger",
+ "name": "req",
+ "direction": "in",
+ "methods": [ "get" ],
+ "route": "products/{category:alpha}/{id:int?}"
+ },
+ {
+ "type": "http",
+ "name": "res",
+ "direction": "out"
+ }
+ ]
+}
+```
++
+The Functions runtime provides the request body from the `context` object. The following example shows how to read route parameters from `context.bindingData`.
```javascript
module.exports = async function (context, req) {
module.exports = async function (context, req) {
}
```
-# [PowerShell](#tab/powershell)
Route parameters declared in the *function.json* file are accessible as a property of the `$Request.Params` object.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
})
```
-# [Python](#tab/python)
- The function execution context is exposed via a parameter declared as `func.HttpRequest`. This instance allows a function to access data route parameters, query string values and methods that allow you to return HTTP responses. Once defined, the route parameters are available to the function by calling the `route_params` method.
def main(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse(message)
```
+Using this configuration, the function is now addressable with the following route instead of the original route.
+
+```
+http://<APP_NAME>.azurewebsites.net/api/products/electronics/357
+```
+
+This configuration allows the function code to support two parameters in the address, _category_ and _id_. For more information on how route parameters are tokenized in a URL, see [Routing in ASP.NET Core](/aspnet/core/fundamentals/routing#route-constraint-reference).
By default, all function routes are prefixed with *api*. You can also customize or remove the prefix using the `extensions.http.routePrefix` property in your [host.json](functions-host-json.md) file. The following example removes the *api* route prefix by using an empty string for the prefix in the *host.json* file.
}
```
-## Using route parameters
+### Using route parameters
Route parameters defined in a function's `route` pattern are available to each binding. For example, if you have a route defined as `"route": "products/{id}"` then a table storage binding can use the value of the `{id}` parameter in the binding configuration.
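As an illustrative sketch, such a pairing in *function.json* could look like the following (the table name, partition key, and connection setting are placeholder assumptions):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get" ],
      "route": "products/{id}"
    },
    {
      "type": "table",
      "direction": "in",
      "name": "product",
      "tableName": "Products",
      "partitionKey": "products",
      "rowKey": "{id}",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

Here the `{id}` token captured from the request URL is reused as the `rowKey` of the table input binding.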
When you use route parameters, an `invoke_URL_template` is automatically created
You can programmatically access the `invoke_URL_template` by using the Azure Resource Manager APIs for [List Functions](/rest/api/appservice/webapps/listfunctions) or [Get Function](/rest/api/appservice/webapps/getfunction).
-## Working with client identities
+### Working with client identities
If your function app is using [App Service Authentication / Authorization](../app-service/overview-authentication-authorization.md), you can view information about authenticated clients from your code. This information is available as [request headers injected by the platform](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code). You can also read this information from binding data. This capability is only available to the Functions runtime in 2.x and higher. It is also currently only available for .NET languages.
-# [C#](#tab/csharp)
+Information regarding authenticated clients is available as a [ClaimsPrincipal], which is available as part of the request context as shown in the following example:
-Information regarding authenticated clients is available as a [ClaimsPrincipal](/dotnet/api/system.security.claims.claimsprincipal). The ClaimsPrincipal is available as part of the request context as shown in the following example:
+# [In-process](#tab/in-process)
```csharp
using System.Net;
public static void Run(JObject input, ClaimsPrincipal principal, ILogger log)
    return;
}
```
+# [Isolated process](#tab/isolated-process)
-# [C# Script](#tab/csharp-script)
+The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
-Information regarding authenticated clients is available as a [ClaimsPrincipal](/dotnet/api/system.security.claims.claimsprincipal). The ClaimsPrincipal is available as part of the request context as shown in the following example:
+# [C# Script](#tab/csharp-script)
```csharp
using System.Net;
public static void Run(JObject input, ClaimsPrincipal principal, ILogger log)
}
```
-# [Java](#tab/java)
-
-The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
-
-# [JavaScript](#tab/javascript)
-
-The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
+
-# [PowerShell](#tab/powershell)
The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
-# [Python](#tab/python)
-The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
+### <a name="http-auth"></a>Authorization level
+The authorization level is a string value that indicates the kind of [authorization key](#authorization-keys) that's required to access the function endpoint. For an HTTP triggered function, the authorization level can be one of the following values:
-
+| Level value | Description |
+| | |
+|**anonymous**| No API key is required.|
+|**function**| A function-specific API key is required. This is the default value when a level isn't specifically set.|
+|**admin**| The master key is required.|
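For example, a *function.json* trigger definition that requires a function-specific key sets the level like this minimal sketch:

```json
{
  "type": "httpTrigger",
  "direction": "in",
  "name": "req",
  "methods": [ "get" ],
  "authLevel": "function"
}
```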
-## <a name="authorization-keys"></a>Function access keys
+### <a name="authorization-keys"></a>Function access keys
[!INCLUDE [functions-authorization-keys](../../includes/functions-authorization-keys.md)]
-## Obtaining keys
+#### Obtaining keys
Keys are stored as part of your function app in Azure and are encrypted at rest. To view your keys, create new ones, or roll keys to new values, navigate to one of your HTTP-triggered functions in the [Azure portal](https://portal.azure.com) and select **Function Keys**.
Function and host keys can be deleted programmatically by using the [Delete Func
You can also use the [legacy key management APIs to obtain function keys](https://github.com/Azure/azure-functions-host/wiki/Key-management-API), but using the Azure Resource Manager APIs is recommended instead.
-## API key authorization
+#### API key authorization
Most HTTP trigger templates require an API key in the request. So your HTTP request normally looks like the following URL:
You can allow anonymous requests, which do not require keys. You can also requir
> When running functions locally, authorization is disabled regardless of the specified authorization level setting. After publishing to Azure, the `authLevel` setting in your trigger is enforced. Keys are still required when running [locally in a container](functions-create-function-linux-custom-image.md#build-the-container-image-and-test-locally).
-## Secure an HTTP endpoint in production
+#### Secure an HTTP endpoint in production
To fully secure your function endpoints in production, you should consider implementing one of the following function app-level security options. When using one of these function app-level security methods, you should set the HTTP-triggered function authorization level to `anonymous`.

[!INCLUDE [functions-enable-auth](../../includes/functions-enable-auth.md)]
-#### Deploy your function app in isolation
+##### Deploy your function app in isolation
[!INCLUDE [functions-deploy-isolation](../../includes/functions-deploy-isolation.md)]
-## Webhooks
+### Webhooks
> [!NOTE] > Webhook mode is only available for version 1.x of the Functions runtime. This change was made to improve the performance of HTTP triggers in version 2.x and higher. In version 1.x, webhook templates provide additional validation for webhook payloads. In version 2.x and higher, the base HTTP trigger still works and is the recommended approach for webhooks.
-### GitHub webhooks
+#### WebHook type
+
+The `webHookType` binding property indicates the type of webhook supported by the function, which also dictates the supported payload. The webhook type can be one of the following values:
+
+| Type value | Description |
+| | |
+| **genericJson**| A general-purpose webhook endpoint without logic for a specific provider. This setting restricts requests to only those using HTTP POST and with the `application/json` content type.|
+| **[github](#github-webhooks)** | The function responds to [GitHub webhooks](https://developer.github.com/webhooks/). Don't use the `authLevel` property with GitHub webhooks. |
+| **[slack](#slack-webhooks)** | The function responds to [Slack webhooks](https://api.slack.com/outgoing-webhooks). Don't use the `authLevel` property with Slack webhooks. |
+
+When setting the `webHookType` property, don't also set the `methods` property on the binding.
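For example, on the version 1.x runtime, a GitHub webhook receiver could be declared in *function.json* with a sketch like this (note that neither `methods` nor `authLevel` is set):

```json
{
  "type": "httpTrigger",
  "direction": "in",
  "name": "req",
  "webHookType": "github"
}
```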
+
+#### GitHub webhooks
To respond to GitHub webhooks, first create your function with an HTTP Trigger, and set the **webHookType** property to `github`. Then copy its URL and API key into the **Add webhook** page of your GitHub repository.

![Screenshot that shows how to add a webhook for your function.](./media/functions-bindings-http-webhook/github-add-webhook.png)
-### Slack webhooks
+#### Slack webhooks
The Slack webhook generates a token for you instead of letting you specify it, so you must configure a function-specific key with the token from Slack. See [Authorization keys](#authorization-keys).
-## Webhooks and keys
+### Webhooks and keys
Webhook authorization is handled by the webhook receiver component, part of the HTTP trigger, and the mechanism varies based on the webhook type. Each mechanism does rely on a key. By default, the function key named "default" is used. To use a different key, configure the webhook provider to send the key name with the request in one of the following ways:
Webhook authorization is handled by the webhook receiver component, part of the
Passing binary and form data to a non-C# function requires that you use the appropriate content-type header. Supported content types include `octet-stream` for binary data and [multipart types](https://www.iana.org/assignments/media-types/media-types.xhtml#multipart).
-### Known issues
+#### Known issues
In non-C# functions, requests sent with the content-type `image/jpeg` result in a `string` value passed to the function. In cases like these, you can manually convert the `string` value to a byte array to access the raw binary data.
-## Limits
+### Limits
The HTTP request length is limited to 100 MB (104,857,600 bytes), and the URL length is limited to 4 KB (4,096 bytes). These limits are specified by the `httpRuntime` element of the runtime's [Web.config file](https://github.com/Azure/azure-functions-host/blob/v3.x/src/WebJobs.Script.WebHost/web.config).
If a function that uses the HTTP trigger doesn't complete within 230 seconds, th
## Next steps

- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)
+
+[ClaimsPrincipal]: /dotnet/api/system.security.claims.claimsprincipal
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Title: Azure Functions HTTP triggers and bindings
description: Learn to use HTTP triggers and bindings in Azure Functions.
Previously updated : 02/14/2020
Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Functions HTTP triggers and bindings overview
Azure Functions may be invoked via HTTP requests to build serverless APIs and re
| Action | Type |
|---|---|
| Run a function from an HTTP request | [Trigger](./functions-bindings-http-webhook-trigger.md) |
| Return an HTTP response from a function | [Output binding](./functions-bindings-http-webhook-output.md) |
-The code in this article defaults to .NET Core syntax, used in Functions version 2.x and higher. For information on the 1.x syntax, see the [1.x functions templates](https://github.com/Azure/azure-functions-templates/tree/v1.x/Functions.Templates/Templates).
-## Add to your Functions app
+## Install extension
-### Functions 2.x and higher
+The extension NuGet package you install depends on the C# mode you're using in your function app:
-Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+# [In-process](#tab/in-process)
-| Language | Add by... | Remarks
-|-||-|
-| C# | Installing the [NuGet package], version 3.x | |
-| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) is recommended to use with Visual Studio Code. |
-| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Http
-[Update your extensions]: ./functions-bindings-register.md
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+++
+The functionality of the extension varies depending on the extension version:
+
+# [Functions v2.x+](#tab/functionsv2/in-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Http), version 3.x.
+
+# [Functions v1.x](#tab/functionsv1/in-process)
+
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+
+# [Functions v2.x+](#tab/functionsv2/isolated-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http), version 3.x.
+
+# [Functions v1.x](#tab/functionsv1/isolated-process)
+
+Functions 1.x doesn't support running in an isolated process.
+
+# [Functions v2.x+](#tab/functionsv2/csharp-script)
+
+This version of the extension should already be available to your function app with [extension bundle], version 2.x.
-### Functions 1.x
+# [Functions 1.x](#tab/functionsv1/csharp-script)
Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+++
+## Install bundle
+
+Starting with Functions version 2.x, the HTTP extension is part of an [extension bundle], which is specified in your host.json project file. To learn more, see [extension bundle].
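For reference, an extension bundle registration in *host.json* takes this general form (the version range shown is illustrative):

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  }
}
```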
+
+# [Bundle v2.x](#tab/functionsv2)
+
+This version of the extension should already be available to your function app with [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1)
+
+Functions 1.x apps automatically have a reference to the extension.
++++
+## host.json settings
++
+> [!NOTE]
+> For a reference of host.json in Functions 1.x, see [host.json reference for Azure Functions 1.x](functions-host-json-v1.md#http).
+
+```json
+{
+ "extensions": {
+ "http": {
+ "routePrefix": "api",
+ "maxOutstandingRequests": 200,
+ "maxConcurrentRequests": 100,
+ "dynamicThrottlesEnabled": true,
+ "hsts": {
+ "isEnabled": true,
+ "maxAge": "10"
+ },
+ "customHeaders": {
+ "X-Content-Type-Options": "nosniff"
+ }
+ }
+ }
+}
+```
+
+|Property |Default | Description |
+||||
+| customHeaders|none|Allows you to set custom headers in the HTTP response. The previous example adds the `X-Content-Type-Options` header to the response to avoid content type sniffing. |
+|dynamicThrottlesEnabled|true<sup>\*</sup>|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like `connections/threads/processes/memory/cpu/etc` and if any of those counters are over a built-in high threshold (80%), requests will be rejected with a `429 "Too Busy"` response until the counter(s) return to normal levels.<br/><sup>\*</sup>The default in a Consumption plan is `true`. The default in a Dedicated plan is `false`.|
+|hsts|not enabled|When `isEnabled` is set to `true`, the [HTTP Strict Transport Security (HSTS) behavior of .NET Core](/aspnet/core/security/enforcing-ssl?tabs=visual-studio#hsts) is enforced, as defined in the [`HstsOptions` class](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions). The above example also sets the [`maxAge`](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions.maxage#Microsoft_AspNetCore_HttpsPolicy_HstsOptions_MaxAge) property to 10 days. Supported properties of `hsts` are: <table><tr><th>Property</th><th>Description</th></tr><tr><td>excludedHosts</td><td>A string array of host names for which the HSTS header isn't added.</td></tr><tr><td>includeSubDomains</td><td>Boolean value that indicates whether the includeSubDomain parameter of the Strict-Transport-Security header is enabled.</td></tr><tr><td>maxAge</td><td>String that defines the max-age parameter of the Strict-Transport-Security header.</td></tr><tr><td>preload</td><td>Boolean that indicates whether the preload parameter of the Strict-Transport-Security header is enabled.</td></tr></table>|
+|maxConcurrentRequests|100<sup>\*</sup>|The maximum number of HTTP functions that are executed in parallel. This value allows you to control concurrency, which can help manage resource utilization. For example, you might have an HTTP function that uses a large number of system resources (memory/cpu/sockets) such that it causes issues when concurrency is too high. Or you might have a function that makes outbound requests to a third-party service, and those calls need to be rate limited. In these cases, applying a throttle here can help. <br/><sup>*</sup>The default for a Consumption plan is 100. The default for a Dedicated plan is unbounded (`-1`).|
+|maxOutstandingRequests|200<sup>\*</sup>|The maximum number of outstanding requests that are held at any given time. This limit includes requests that are queued but have not started executing, as well as any in-progress executions. Any incoming requests over this limit are rejected with a `429 "Too Busy"` response. Rejecting requests lets callers employ time-based retry strategies, and it also helps you control maximum request latencies. This setting only controls queuing that occurs within the script host execution path; other queues, such as the ASP.NET request queue, are unaffected by it. <br/><sup>\*</sup>The default for a Consumption plan is 200. The default for a Dedicated plan is unbounded (`-1`).|
+|routePrefix|api|The route prefix that applies to all routes. Use an empty string to remove the default prefix. |
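+
+For instance, the defaults above can be overridden together. Here's a minimal *host.json* sketch (the values are illustrative, not recommendations) that removes the default `api` route prefix and tightens the concurrency limits:
+
+```json
+{
+  "version": "2.0",
+  "extensions": {
+    "http": {
+      "routePrefix": "",
+      "maxOutstandingRequests": 120,
+      "maxConcurrentRequests": 50,
+      "dynamicThrottlesEnabled": true
+    }
+  }
+}
+```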
+ ## Next steps - [Run a function from an HTTP request](./functions-bindings-http-webhook-trigger.md)-- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)
+- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)
+
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Bindings Mobile Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-mobile-apps.md
Title: Mobile Apps bindings for Azure Functions description: Understand how to use Azure Mobile Apps bindings in Azure Functions.-- ms.devlang: csharp, javascript Last updated 11/21/2017- # Mobile Apps bindings for Azure Functions
This article explains how to work with [Azure Mobile Apps](/previous-versions/az
The Mobile Apps bindings let you read and update data tables in mobile apps. - ## Packages - Functions 1.x Mobile Apps bindings are provided in the [Microsoft.Azure.WebJobs.Extensions.MobileApps](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.MobileApps) NuGet package, version 1.x. Source code for the package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.MobileApps/) GitHub repository.
azure-functions Functions Bindings Notification Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-notification-hubs.md
Title: Notification Hubs bindings for Azure Functions description: Understand how to use Azure Notification Hub binding in Azure Functions.-- ms.devlang: csharp, fsharp, javascript Last updated 11/21/2017- # Notification Hubs output binding for Azure Functions
This article explains how to send push notifications by using [Azure Notificatio
Azure Notification Hubs must be configured for the Platform Notifications Service (PNS) you want to use. To learn how to get push notifications in your client app from Notification Hubs, see [Getting started with Notification Hubs](../notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md) and select your target client platform from the drop-down list near the top of the page. - > [!IMPORTANT] > Google has [deprecated Google Cloud Messaging (GCM) in favor of Firebase Cloud Messaging (FCM)](https://developers.google.com/cloud-messaging/faq). This output binding doesn't support FCM. To send notifications using FCM, use the [Firebase API](https://firebase.google.com/docs/cloud-messaging/server#choosing-a-server-option) directly in your function or use [template notifications](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md).
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
Title: RabbitMQ output bindings for Azure Functions description: Learn to send RabbitMQ messages from Azure Functions. - ms.assetid: Previously updated : 12/17/2020 Last updated : 01/21/2022 ms.devlang: csharp, java, javascript, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# RabbitMQ output binding for Azure Functions overview
For information on setup and configuration details, see the [overview](functions
## Example
-# [C#](#tab/csharp)
++
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that sends a RabbitMQ message when triggered by a TimerTrigger every 5 minutes using the method return value as the output:
namespace Company.Function
} ```
+# [Isolated process](#tab/isolated-process)
+++ # [C# Script](#tab/csharp-script) The following example shows a RabbitMQ output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads in the message from an HTTP trigger and outputs it to the RabbitMQ queue.
public static void Run(string input, out string outputMessage, ILogger log)
outputMessage = input; } ```+
-# [JavaScript](#tab/javascript)
+
+The following Java function uses the `@RabbitMQOutput` annotation from the [Java RabbitMQ types](https://mvnrepository.com/artifact/com.microsoft.azure.functions/azure-functions-java-library-rabbitmq) to describe the configuration for a RabbitMQ queue output binding. The function sends a message to the RabbitMQ queue when triggered by a TimerTrigger every 5 minutes.
+
+```java
+@FunctionName("RabbitMQOutputExample")
+public void run(
+@TimerTrigger(name = "keepAliveTrigger", schedule = "0 */5 * * * *") String timerInfo,
+@RabbitMQOutput(connectionStringSetting = "rabbitMQConnectionAppSetting", queueName = "hello") OutputBinding<String> output,
+final ExecutionContext context) {
+ output.setValue("Some string");
+}
+```
+ The following example shows a RabbitMQ output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function reads in the message from an HTTP trigger and outputs it to the RabbitMQ queue.
module.exports = async function (context, input) {
}; ```
-# [Python](#tab/python)
The following example shows a RabbitMQ output binding in a *function.json* file and a Python function that uses the binding. The function reads in the message from an HTTP trigger and outputs it to the RabbitMQ queue.
def main(req: func.HttpRequest, outputMessage: func.Out[str]) -> func.HttpRespon
return 'OK' ```
-# [Java](#tab/java)
-The following Java function uses the `@RabbitMQOutput` annotation from the [Java RabbitMQ types](https://mvnrepository.com/artifact/com.microsoft.azure.functions/azure-functions-java-library-rabbitmq) to describe the configuration for a RabbitMQ queue output binding. The function sends a message to the RabbitMQ queue when triggered by a TimerTrigger every 5 minutes.
+## Attributes
-```java
-@FunctionName("RabbitMQOutputExample")
-public void run(
-@TimerTrigger(name = "keepAliveTrigger", schedule = "0 */5 * * * *") String timerInfo,
-@RabbitMQOutput(connectionStringSetting = "rabbitMQConnectionAppSetting", queueName = "hello") OutputBinding<String> output,
-final ExecutionContext context) {
- output.setValue("Some string");
-}
-```
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
-
+The attribute's constructor takes the following parameters:
-## Attributes and annotations
+|Parameter | Description|
+||-|
+|**QueueName**| Name of the queue to which messages are sent. |
+|**HostName**|Hostname of the queue, such as 10.26.45.210. Ignored when using `ConnectionStringSetting`.|
+|**UserNameSetting**|Name of the app setting that contains the username to access the queue, such as `UserNameSetting: "%< UserNameFromSettings >%"`. Ignored when using `ConnectionStringSetting`.|
+|**PasswordSetting**|Name of the app setting that contains the password to access the queue, such as `PasswordSetting: "%< PasswordFromSettings >%"`. Ignored when using `ConnectionStringSetting`.|
+|**ConnectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The binding won't work when you specify the connection string directly instead of through an app setting. For example, when you have set `ConnectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* file and in your function app you need a setting like `"rabbitMQConnection" : "< ActualConnectionstring >"`.|
+|**Port**|Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`. |
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQAttribute](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/src/RabbitMQAttribute.cs).
-Here's a `RabbitMQAttribute` attribute in a method signature:
+Here's a `RabbitMQ` attribute in a method signature for an in-process library:
```csharp [FunctionName("RabbitMQOutput")]
ILogger log)
} ```
-For a complete example, see C# [example](#example).
+# [Isolated process](#tab/isolated-process)
-# [C# Script](#tab/csharp-script)
+In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQ](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/src/RabbitMQAttribute.cs) attribute.
-Attributes are not supported by C# Script.
+Here's a `RabbitMQ` attribute in a method signature for an isolated process library:
-# [JavaScript](#tab/javascript)
-Attributes are not supported by JavaScript.
-# [Python](#tab/python)
+# [C# script](#tab/csharp-script)
-Attributes are not supported by Python.
+C# script uses a function.json file for configuration instead of attributes.
-# [Java](#tab/java)
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-The `RabbitMQOutput` annotation allows you to create a function that runs when sending a RabbitMQ message. Configuration options available include queue name and connection string name. For additional parameter details please visit the [RabbitMQOutput Java annotations](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/binding-library/java/src/main/java/com/microsoft/azure/functions/rabbitmq/annotation/RabbitMQOutput.java).
-
-See the output binding [example](#example) for more detail.
+|function.json property | Description|
+||-|
+|**type** | Must be set to `RabbitMQ`.|
+|**direction** | Must be set to `out`.|
+|**name** | The name of the variable that represents the queue in function code. |
+|**queueName**| See the **QueueName** attribute above.|
+|**hostName**|See the **HostName** attribute above.|
+|**userNameSetting**|See the **UserNameSetting** attribute above.|
+|**passwordSetting**|See the **PasswordSetting** attribute above.|
+|**connectionStringSetting**|See the **ConnectionStringSetting** attribute above.|
+|**port**|See the **Port** attribute above.|
+## Annotations
+
+The `RabbitMQOutput` annotation allows you to create a function that sends a RabbitMQ message.
+
+The annotation supports the following configuration settings:
+
+|Setting | Description|
+||-|
+|**queueName**| Name of the queue to which messages are sent. |
+|**hostName**|Hostname of the queue, such as 10.26.45.210. Ignored when using `connectionStringSetting`.|
+|**userNameSetting**|Name of the app setting that contains the username to access the queue, such as `userNameSetting: "%< UserNameFromSettings >%"`. Ignored when using `connectionStringSetting`.|
+|**passwordSetting**|Name of the app setting that contains the password to access the queue, such as `passwordSetting: "%< PasswordFromSettings >%"`. Ignored when using `connectionStringSetting`.|
+|**connectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The binding won't work when you specify the connection string directly instead of through an app setting. For example, when you have set `connectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* file and in your function app you need a setting like `"rabbitMQConnection" : "< ActualConnectionstring >"`.|
+|**port**|Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`. |
+
+See the output binding [example](#example) for more detail.
+
+
## Configuration
-The following table explains the binding configuration properties that you set in the *function.json* file and the `RabbitMQ` attribute.
+The following table explains the binding configuration properties that you set in the *function.json* file.
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to "RabbitMQ".|
-|**direction** | n/a | Must be set to "out". |
-|**name** | n/a | The name of the variable that represents the queue in function code. |
-|**queueName**|**QueueName**| Name of the queue to send messages to. |
-|**hostName**|**HostName**|(ignored if using ConnectStringSetting) <br>Hostname of the queue (Ex: 10.26.45.210)|
-|**userName**|**UserName**|(ignored if using ConnectionStringSetting) <br>Name of the app setting that contains the username to access the queue. Ex. UserNameSetting: "< UserNameFromSettings >"|
-|**password**|**Password**|(ignored if using ConnectionStringSetting) <br>Name of the app setting that contains the password to access the queue. Ex. UserNameSetting: "< UserNameFromSettings >"|
-|**connectionStringSetting**|**ConnectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. Please note that if you specify the connection string directly and not through an app setting in local.settings.json, the trigger will not work. (Ex: In *function.json*: connectionStringSetting: "rabbitMQConnection" <br> In *local.settings.json*: "rabbitMQConnection" : "< ActualConnectionstring >")|
-|**port**|**Port**|(ignored if using ConnectionStringSetting) Gets or sets the Port used. Defaults to 0 which points to rabbitmq client's default port setting: 5672.|
+|function.json property |Description|
+||-|
+|**type** | Must be set to `RabbitMQ`.|
+|**direction** | Must be set to `out`. |
+|**name** | The name of the variable that represents the queue in function code. |
+|**queueName**| Name of the queue to send messages to. |
+|**hostName**| Hostname of the queue, such as 10.26.45.210. Ignored when using `connectionStringSetting`. |
+|**userName**| Name of the app setting that contains the username to access the queue, such as `UserNameSetting: "%< UserNameFromSettings >%"`. Ignored when using `connectionStringSetting`.|
+|**password**| Name of the app setting that contains the password to access the queue, such as `PasswordSetting: "%< PasswordFromSettings >%"`. Ignored when using `connectionStringSetting`.|
+|**connectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The binding won't work when you specify the connection string directly instead of through an app setting in `local.settings.json`. For example, when you have set `connectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* file and in your function app you need a setting like `"rabbitMQConnection" : "< ActualConnectionstring >"`.|
+|**port**| Gets or sets the Port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`.|
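+
+As a sketch, a *function.json* entry that wires up this output binding might look like the following (the queue name and app setting name are illustrative):
+
+```json
+{
+  "bindings": [
+    {
+      "name": "outputMessage",
+      "type": "RabbitMQ",
+      "direction": "out",
+      "queueName": "hello",
+      "connectionStringSetting": "rabbitMQConnection"
+    }
+  ]
+}
+```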
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] +
+See the [Example section](#example) for complete examples.
+ ## Usage
-# [C#](#tab/csharp)
+The parameter type supported by the RabbitMQ output binding depends on the Functions runtime version, the extension package version, and the C# modality used.
+
+# [In-process](#tab/in-process)
Use the following parameter types for the output binding:
-* `byte[]` - If the parameter value is null when the function exits, Functions does not create a message.
-* `string` - If the parameter value is null when the function exits, Functions does not create a message.
-* `POCO` - If the parameter value isn't formatted as a C# object, an error will be received. For a complete example, see C# [example](#example).
+* `byte[]` - If the parameter value is null when the function exits, Functions doesn't create a message.
+* `string` - If the parameter value is null when the function exits, Functions doesn't create a message.
+* `POCO` - The message is formatted as a C# object.
When working with C# functions: * Async functions need a return value or `IAsyncCollector` instead of an `out` parameter.
-# [C# Script](#tab/csharp-script)
+# [Isolated process](#tab/isolated-process)
+
+The RabbitMQ bindings currently support only string and serializable object types when running in an isolated process.
+
+# [C# script](#tab/csharp-script)
Use the following parameter types for the output binding:
-* `byte[]` - If the parameter value is null when the function exits, Functions does not create a message.
-* `string` - If the parameter value is null when the function exits, Functions does not create a message.
+* `byte[]` - If the parameter value is null when the function exits, Functions doesn't create a message.
+* `string` - If the parameter value is null when the function exits, Functions doesn't create a message.
* `POCO` - If the parameter value isn't formatted as a C# object, an error will be received. For a complete example, see C# Script [example](#example). When working with C# Script functions: * Async functions need a return value or `IAsyncCollector` instead of an `out` parameter.
-# [JavaScript](#tab/javascript)
-
-The queue message is available via context.bindings.\<NAME\> where \<NAME\> matches the name defined in function.json. If the payload is JSON, the value is deserialized into an object.
-
-# [Python](#tab/python)
+
-Refer to the Python [example](#example).
+For a complete example, see C# [example](#example).
-# [Java](#tab/java)
Use the following parameter types for the output binding:
-* `byte[]` - If the parameter value is null when the function exits, Functions does not create a message.
-* `string` - If the parameter value is null when the function exits, Functions does not create a message.
+* `byte[]` - If the parameter value is null when the function exits, Functions doesn't create a message.
+* `string` - If the parameter value is null when the function exits, Functions doesn't create a message.
* `POJO` - If the parameter value isn't formatted as a Java object, an error will be received. -
+
+The queue message is available via `context.bindings.<NAME>` where `<NAME>` matches the name defined in function.json. If the payload is JSON, the value is deserialized into an object.
++
+Refer to the Python [example](#example).
+ ## Next steps
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
Title: RabbitMQ trigger for Azure Functions
-description: Learn to run an Azure Function when a RabbitMQ message is created.
+description: Learn how to run an Azure Function when a RabbitMQ message is created.
- ms.assetid: Previously updated : 12/17/2020 Last updated : 01/21/2022 ms.devlang: csharp, java, javascript, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# RabbitMQ trigger for Azure Functions overview
For information on setup and configuration details, see the [overview](functions
## Example
-# [C#](#tab/csharp)
++
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that reads and logs the RabbitMQ message as a [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq-dotnet-client/api/RabbitMQ.Client.Events.BasicDeliverEventArgs.html):
namespace Company.Function
} ```
-Like with Json objects, an error will occur if the message isn't properly formatted as a C# object. If it is, it is then bound to the variable pocObj, which can be used for what whatever it is needed for.
+As with JSON objects, an error will occur if the message isn't properly formatted as a C# object. If it is, it's bound to the `pocObj` variable, which you can then use however you need.
+
+# [Isolated process](#tab/isolated-process)
+ # [C# Script](#tab/csharp-script)
public static void Run(string myQueueItem, ILogger log)
log.LogInformation($"C# Script RabbitMQ trigger function processed: {myQueueItem}"); } ```++
-# [JavaScript](#tab/javascript)
+The following Java function uses the `@RabbitMQTrigger` annotation from the [Java RabbitMQ types](https://mvnrepository.com/artifact/com.microsoft.azure.functions/azure-functions-java-library-rabbitmq) to describe the configuration for a RabbitMQ queue trigger. The function grabs the message placed on the queue and adds it to the logs.
+
+```java
+@FunctionName("RabbitMQTriggerExample")
+public void run(
+ @RabbitMQTrigger(connectionStringSetting = "rabbitMQConnectionAppSetting", queueName = "queue") String input,
+ final ExecutionContext context)
+{
+    context.getLogger().info("Java RabbitMQ trigger processed a request: " + input);
+}
+```
The following example shows a RabbitMQ trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function reads and logs a RabbitMQ message.
Here's the binding data in the *function.json* file:
] } ```- Here's the JavaScript script code: ```javascript
module.exports = async function (context, myQueueItem) {
}; ```
-# [Python](#tab/python)
The following example demonstrates how to read a RabbitMQ queue message via a trigger.
def main(myQueueItem) -> None:
logging.info('Python RabbitMQ trigger function processed a queue item: %s', myQueueItem) ```
-# [Java](#tab/java)
-The following Java function uses the `@RabbitMQTrigger` annotation from the [Java RabbitMQ types](https://mvnrepository.com/artifact/com.microsoft.azure.functions/azure-functions-java-library-rabbitmq) to describe the configuration for a RabbitMQ queue trigger. The function grabs the message placed on the queue and adds it to the logs.
+## Attributes
-```java
-@FunctionName("RabbitMQTriggerExample")
-public void run(
- @RabbitMQTrigger(connectionStringSetting = "rabbitMQConnectionAppSetting", queueName = "queue") String input,
- final ExecutionContext context)
-{
- context.getLogger().info("Java HTTP trigger processed a request." + input);
-}
-```
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
-
+The attribute's constructor takes the following parameters:
-## Attributes and annotations
+|Parameter | Description|
+||-|
+|**QueueName**| Name of the queue from which to receive messages. |
+|**HostName**|Hostname of the queue, such as 10.26.45.210. Ignored when using `ConnectionStringSetting`.|
+|**UserNameSetting**|Name of the app setting that contains the username to access the queue, such as `UserNameSetting: "%< UserNameFromSettings >%"`. Ignored when using `ConnectionStringSetting`.|
+|**PasswordSetting**|Name of the app setting that contains the password to access the queue, such as `PasswordSetting: "%< PasswordFromSettings >%"`. Ignored when using `ConnectionStringSetting`.|
+|**ConnectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead of through an app setting. For example, when you have set `ConnectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* file and in your function app you need a setting like `"rabbitMQConnection" : "< ActualConnectionstring >"`.|
+|**Port**|Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`. |
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/src/Trigger/RabbitMQTriggerAttribute.cs) attribute.
-Here's a `RabbitMQTrigger` attribute in a method signature:
+Here's a `RabbitMQTrigger` attribute in a method signature for an in-process library:
```csharp [FunctionName("RabbitMQTest")]
public static void RabbitMQTest([RabbitMQTrigger("queue")] string message, ILogg
} ```
-For a complete example, see C# [example](#example).
+# [Isolated process](#tab/isolated-process)
-# [C# Script](#tab/csharp-script)
+In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/src/Trigger/RabbitMQTriggerAttribute.cs) attribute.
-Attributes are not supported by C# Script.
+Here's a `RabbitMQTrigger` attribute in a method signature for an isolated process library:
-# [JavaScript](#tab/javascript)
-Attributes are not supported by JavaScript.
+# [C# script](#tab/csharp-script)
-# [Python](#tab/python)
+C# script uses a function.json file for configuration instead of attributes.
-Attributes are not supported by Python.
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-# [Java](#tab/java)
+|function.json property | Description|
+||-|
+|**type** | Must be set to `RabbitMQTrigger`.|
+|**direction** | Must be set to `in`.|
+|**name** | The name of the variable that represents the queue in function code. |
+|**queueName**| See the **QueueName** attribute above.|
+|**hostName**|See the **HostName** attribute above.|
+|**userNameSetting**|See the **UserNameSetting** attribute above.|
+|**passwordSetting**|See the **PasswordSetting** attribute above.|
+|**connectionStringSetting**|See the **ConnectionStringSetting** attribute above.|
+|**port**|See the **Port** attribute above.|
-The `RabbitMQTrigger` annotation allows you to create a function that runs when a RabbitMQ message is created. Configuration options available include queue name and connection string name. For additional parameter details please visit the [RabbitMQTrigger Java annotations](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/binding-library/java/src/main/java/com/microsoft/azure/functions/rabbitmq/annotation/RabbitMQTrigger.java).
+
-See the trigger [example](#example) for more detail.
-
+## Annotations
+
+The `RabbitMQTrigger` annotation allows you to create a function that runs when a RabbitMQ message is created.
+
+The annotation supports the following configuration options:
+
+|Parameter | Description|
+||-|
+|**queueName**| Name of the queue from which to receive messages. |
+|**hostName**|Hostname of the queue, such as 10.26.45.210. Ignored when using `connectionStringSetting`.|
+|**userNameSetting**|Name of the app setting that contains the username to access the queue, such as `userNameSetting: "%< UserNameFromSettings >%"`. Ignored when using `connectionStringSetting`.|
+|**passwordSetting**|Name of the app setting that contains the password to access the queue, such as `passwordSetting: "%< PasswordFromSettings >%"`. Ignored when using `connectionStringSetting`.|
+|**connectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead of through an app setting. For example, when you have set `connectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* file and in your function app you need a setting like `"rabbitMQConnection" : "< ActualConnectionstring >"`.|
+|**port**|Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`. |
+ ## Configuration
-The following table explains the binding configuration properties that you set in the *function.json* file and the `RabbitMQTrigger` attribute.
+The following table explains the binding configuration properties that you set in the *function.json* file.
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to "RabbitMQTrigger".|
-|**direction** | n/a | Must be set to "in".|
-|**name** | n/a | The name of the variable that represents the queue in function code. |
-|**queueName**|**QueueName**| Name of the queue to receive messages from. |
-|**hostName**|**HostName**|(ignored if using ConnectStringSetting) <br>Hostname of the queue (Ex: 10.26.45.210)|
-|**userNameSetting**|**UserNameSetting**|(ignored if using ConnectionStringSetting) <br>Name of the app setting that contains the username to access the queue. Ex. UserNameSetting: "%< UserNameFromSettings >%"|
-|**passwordSetting**|**PasswordSetting**|(ignored if using ConnectionStringSetting) <br>Name of the app setting that contains the password to access the queue. Ex. PasswordSetting: "%< PasswordFromSettings >%"|
-|**connectionStringSetting**|**ConnectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. Please note that if you specify the connection string directly and not through an app setting in local.settings.json, the trigger will not work. (Ex: In *function.json*: connectionStringSetting: "rabbitMQConnection" <br> In *local.settings.json*: "rabbitMQConnection" : "< ActualConnectionstring >")|
-|**port**|**Port**|(ignored if using ConnectionStringSetting) Gets or sets the Port used. Defaults to 0 which points to rabbitmq client's default port setting: 5672.|
+|function.json property | Description|
+||-|
+|**type** | Must be set to `RabbitMQTrigger`.|
+|**direction** | Must be set to `in`.|
+|**name** | The name of the variable that represents the queue in function code. |
+|**queueName**| Name of the queue from which to receive messages. |
+|**hostName**|Hostname of the queue, such as 10.26.45.210. Ignored when using `connectionStringSetting`.|
+|**userNameSetting**|Name of the app setting that contains the username to access the queue, such as `UserNameSetting: "%< UserNameFromSettings >%"`. Ignored when using `connectionStringSetting`.|
+|**passwordSetting**|Name of the app setting that contains the password to access the queue, such as `PasswordSetting: "%< PasswordFromSettings >%"`. Ignored when using `connectionStringSetting`.|
+|**connectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead of through an app setting. For example, when you have set `connectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* file and in your function app you need a setting like `"rabbitMQConnection" : "< ActualConnectionstring >"`.|
+|**port**|Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`. |
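+
+As a sketch, a *function.json* entry that wires up this trigger might look like the following (the queue name and app setting name are illustrative):
+
+```json
+{
+  "bindings": [
+    {
+      "name": "myQueueItem",
+      "type": "RabbitMQTrigger",
+      "direction": "in",
+      "queueName": "queue",
+      "connectionStringSetting": "rabbitMQConnection"
+    }
+  ]
+}
+```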
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] +
+See the [Example section](#example) for complete examples.
+ ## Usage
-# [C#](#tab/csharp)
+The parameter type supported by the RabbitMQ trigger depends on the C# mode used.
+
+# [In-process](#tab/in-process)
The default message type is [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq-dotnet-client/api/RabbitMQ.Client.Events.BasicDeliverEventArgs.html), and the `Body` property of the RabbitMQ Event can be read as the types listed below:
The default message type is [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq
* `byte[]` * `POCO` - The message is formatted as a C# object. For complete code, see C# [example](#example).
-# [C# Script](#tab/csharp-script)
+# [Isolated process](#tab/isolated-process)
+
+The RabbitMQ bindings currently support only string and serializable object types when running in an isolated process.
+
+# [C# script](#tab/csharp-script)
The default message type is [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq-dotnet-client/api/RabbitMQ.Client.Events.BasicDeliverEventArgs.html), and the `Body` property of the RabbitMQ Event can be read as the types listed below:
The default message type is [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq
* `byte[]` * `POCO` - The message is formatted as a C# object. For a complete example, see C# Script [example](#example).
-# [JavaScript](#tab/javascript)
+
-The queue message is available via context.bindings.\<NAME\> where \<NAME\> matches the name defined in function.json. If the payload is JSON, the value is deserialized into an object.
+For a complete example, see C# [example](#example).
-# [Python](#tab/python)
-Refer to the Python [example](#example).
+Refer to Java [annotations](#annotations).
-# [Java](#tab/java)
-Refer to Java [attributes and annotations](#attributes-and-annotations).
+The queue message is available via `context.bindings.<NAME>` where `<NAME>` matches the name defined in function.json. If the payload is JSON, the value is deserialized into an object.
-+
+Refer to the Python [example](#example).
+ ## Dead letter queues
-Dead letter queues and exchanges can't be controlled or configured from the RabbitMQ trigger. In order to use dead letter queues, pre-configure the queue used by the trigger in RabbitMQ. Please refer to the [RabbitMQ documentation](https://www.rabbitmq.com/dlx.html).
+Dead letter queues and exchanges can't be controlled or configured from the RabbitMQ trigger. To use dead letter queues, pre-configure the queue used by the trigger in RabbitMQ. Refer to the [RabbitMQ documentation](https://www.rabbitmq.com/dlx.html).
## host.json settings
Dead letter queues and exchanges can't be controlled or configured from the Rabb
|||| |prefetchCount|30|Gets or sets the number of messages that the message receiver can simultaneously request and is cached.| |queueName|n/a| Name of the queue to receive messages from.|
-|connectionString|n/a|The RabbitMQ message queue connection string. Please note that the connection string is directly specified here and not through an app setting.|
-|port|0|(ignored if using connectionString) Gets or sets the Port used. Defaults to 0 which points to rabbitmq client's default port setting: 5672.|
+|connectionString|n/a|The RabbitMQ message queue connection string. The connection string is directly specified here and not through an app setting.|
+|port|0|(ignored if using connectionString) Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting: 5672.|
## Local testing > [!NOTE] > The connectionString takes precedence over "hostName", "userName", and "password". If all of these are set, the connectionString overrides the other settings.
-If you are testing locally without a connection string, you should set the "hostName" setting and "userName" and "password" if applicable in the "rabbitMQ" section of *host.json*:
+If you're testing locally without a connection string, set the "hostName" setting, and the "userName" and "password" settings if applicable, in the "rabbitMQ" section of *host.json*:
```json {
azure-functions Functions Bindings Rabbitmq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq.md
Title: Azure RabbitMQ bindings for Azure Functions description: Learn to send Azure RabbitMQ triggers and bindings in Azure Functions. - ms.assetid: Previously updated : 12/17/2020 Last updated : 11/15/2021
+zone_pivot_groups: programming-languages-set-functions-lang-workers
+ # RabbitMQ bindings for Azure Functions overview > [!NOTE]
-> The RabbitMQ bindings are only fully supported on **Premium and Dedicated** plans. Consumption is not supported.
-
+> The RabbitMQ bindings are only fully supported on [Premium](functions-premium-plan.md) and [Dedicated App Service](dedicated-plan.md) plans. Consumption plans aren't supported.
+> RabbitMQ bindings are only supported for Azure Functions version 3.x and later versions.
+
Azure Functions integrates with [RabbitMQ](https://www.rabbitmq.com/) via [triggers and bindings](./functions-triggers-bindings.md). The Azure Functions RabbitMQ extension allows you to send and receive messages using the RabbitMQ API with Functions. | Action | Type |
Azure Functions integrates with [RabbitMQ](https://www.rabbitmq.com/) via [trigg
| Run a function when a RabbitMQ message comes through the queue | [Trigger](./functions-bindings-rabbitmq-trigger.md) | | Send RabbitMQ messages |[Output binding](./functions-bindings-rabbitmq-output.md) |
-## Add to your Functions app
+## Prerequisites
-To get started with developing with this extension, make sure you first [set up a RabbitMQ endpoint](https://github.com/Azure/azure-functions-rabbitmq-extension/wiki/Setting-up-a-RabbitMQ-Endpoint). To learn more about RabbitMQ, check out their [getting started page](https://www.rabbitmq.com/getstarted.html).
+Before working with the RabbitMQ extension, you must [set up your RabbitMQ endpoint](https://github.com/Azure/azure-functions-rabbitmq-extension/wiki/Setting-up-a-RabbitMQ-Endpoint). To learn more about RabbitMQ, see the [getting started page](https://www.rabbitmq.com/getstarted.html).
-### Functions 3.x and higher
-Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+## Install extension
-| Language | Add by... | Remarks
-|-||-|
-| C# | Installing the [NuGet package], version 4.x | |
-| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension] is recommended to use with Visual Studio Code. |
-| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+The extension NuGet package you install depends on the C# mode you're using in your function app:
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.RabbitMQ
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[Update your extensions]: ./functions-bindings-register.md
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+# [In-process](#tab/in-process)
+
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.RabbitMQ).
+
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Rabbitmq).
+
+# [C# script](#tab/csharp-script)
-### Functions 1.x and 2.x
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
-RabbitMQ Binding extensions are not supported for Functions 1.x and 2.x. Please use Functions 3.x and higher.
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x, or a later version.
+++++
+## Install bundle
+
+The RabbitMQ extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets version 3.x or later, you should already have this bundle installed. To learn more, see [extension bundle].
+ ## Next steps - [Run a function when a RabbitMQ message is created (Trigger)](./functions-bindings-rabbitmq-trigger.md)-- [Send RabbitMQ messages from Azure Functions (Output binding)](./functions-bindings-rabbitmq-output.md)
+- [Send RabbitMQ messages from Azure Functions (Output binding)](./functions-bindings-rabbitmq-output.md)
+
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
Title: Register Azure Functions binding extensions description: Learn to register an Azure Functions binding extension based on your environment.-- Last updated 09/14/2020- # Register Azure Functions binding extensions
The following table lists the currently available versions of the default *Micro
| | | | | 1.x | `[1.*, 2.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v1.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle | | 2.x | `[2.*, 3.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v2.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle |
-| 3.x | `[3.3.0, 4.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v3.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle<sup>1</sup> |
+| 3.x | `[3.3.0, 4.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/4f5934a18989353e36d771d0a964f14e6cd17ac3/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle<sup>1</sup> |
<sup>1</sup> Version 3.x of the extension bundle currently does not include the [Table Storage bindings](./functions-bindings-storage-table.md). If your app requires Table Storage, you will need to continue using the 2.x version for now.
In **Visual Studio**, you can install packages from the Package Manager Console
Install-Package Microsoft.Azure.WebJobs.Extensions.ServiceBus -Version <TARGET_VERSION> ```
-The name of the package used for a given binding is provided in the reference article for that binding. For an example, see the [Packages section of the Service Bus binding reference article](functions-bindings-service-bus.md#functions-1x).
+The name of the package used for a given binding is provided in the reference article for that binding.
Replace `<TARGET_VERSION>` in the example with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to Functions runtime 1.x or 2.x are specified in the reference article for the binding.
azure-functions Functions Bindings Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-return-value.md
Title: Using return value from an Azure Function description: Learn to manage return values for Azure Functions-- ms.devlang: csharp, fsharp, java, javascript, powershell, python Last updated 01/14/2019- # Using the Azure Function return value
azure-functions Functions Bindings Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md
Title: Azure Functions SendGrid bindings description: Azure Functions SendGrid bindings reference.- ms.devlang: csharp, java, javascript, python Previously updated : 11/29/2017-- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Functions SendGrid bindings
This article explains how to send email by using [SendGrid](https://sendgrid.com
[!INCLUDE [intro](../../includes/functions-bindings-intro.md)]
-## Packages - Functions 1.x
+
+## Install extension
+
+The extension NuGet package you install depends on the C# mode you're using in your function app:
+
+# [In-process](#tab/in-process)
+
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+++
+The functionality of the extension varies depending on the extension version:
+
+# [Functions v2.x+](#tab/functionsv2/in-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SendGrid), version 3.x.
+
+# [Functions v1.x](#tab/functionsv1/in-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SendGrid), version 2.x.
+
+# [Functions v2.x+](#tab/functionsv2/isolated-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SendGrid), version 3.x.
+
+# [Functions v1.x](#tab/functionsv1/isolated-process)
+
+Functions 1.x doesn't support running in an isolated process.
+
+# [Functions v2.x+](#tab/functionsv2/csharp-script)
+
+This version of the extension should already be available to your function app with [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1/csharp-script)
+
+You can add the extension to your project by explicitly installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SendGrid), version 2.x. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
+++
-The SendGrid bindings are provided in the [Microsoft.Azure.WebJobs.Extensions.SendGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SendGrid) NuGet package, version 2.x. Source code for the package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.SendGrid/) GitHub repository.
+## Install bundle
+Starting with Functions version 2.x, the SendGrid extension is part of an [extension bundle], which is specified in your host.json project file. To learn more, see [extension bundle].
-## Packages - Functions 2.x and higher
+# [Bundle v2.x](#tab/functionsv2)
-The SendGrid bindings are provided in the [Microsoft.Azure.WebJobs.Extensions.SendGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SendGrid) NuGet package, version 3.x. Source code for the package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.SendGrid/) GitHub repository.
+This version of the extension should already be available to your function app with [extension bundle], version 2.x.
+# [Functions 1.x](#tab/functionsv1)
+
+You can add the extension to your project by explicitly installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SendGrid), version 2.x. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
+++ ## Example
-# [C#](#tab/csharp)
-The following example shows a [C# function](functions-dotnet-class-library.md) that uses a Service Bus queue trigger and a SendGrid output binding.
+# [In-process](#tab/in-process)
-### Synchronous
+The following example shows a [C# function](functions-dotnet-class-library.md) that uses a Service Bus queue trigger and a SendGrid output binding.
+
+The following example is a synchronous execution:
```cs using SendGrid.Helpers.Mail;
public class OutgoingEmail
} ```
-### Asynchronous
+This example shows asynchronous execution:
+ ```cs using SendGrid.Helpers.Mail;
public class OutgoingEmail
You can omit setting the attribute's `ApiKey` property if you have your API key in an app setting named "AzureWebJobsSendGridApiKey".
+# [Isolated process](#tab/isolated-process)
+
+We don't currently have an example for using the SendGrid binding in a function app running in an isolated process.
+ # [C# Script](#tab/csharp-script) The following example shows a SendGrid output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding.
public class Message
public string Content { get; set; } } ```+
-# [JavaScript](#tab/javascript)
- The following example shows a SendGrid output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. Here's the binding data in the *function.json* file:
module.exports = function (context, input) {
}; ```
-# [Python](#tab/python)
+
+Complete PowerShell examples aren't currently available for SendGrid bindings.
The following example shows an HTTP-triggered function that sends an email using the SendGrid binding. You can provide default values in the binding configuration. For instance, the *from* email address is configured in *function.json*.
def main(req: func.HttpRequest, sendGridMessage: func.Out[str]) -> func.HttpResp
return func.HttpResponse(f"Sent") ```-
-# [Java](#tab/java)
The following example uses the `@SendGridOutput` annotation from the [Java functions runtime library](/java/api/overview/azure/functions/runtime) to send an email using the SendGrid output binding.
public class HttpTriggerSendGrid {
} ``` -
+## Attributes
-## Attributes and annotations
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [SendGrid](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.SendGrid/SendGridAttribute.cs) attribute.
+In [in-process](functions-dotnet-class-library.md) function apps, use the [SendGridAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.SendGrid/SendGridAttribute.cs), which supports the following parameters.
-For information about attribute properties that you can configure, see [Configuration](#configuration). Here's a `SendGrid` attribute example in a method signature:
+| Attribute/annotation property | Description |
+|-|-|
+| **ApiKey** | The name of an app setting that contains your API key. If not set, the default app setting name is `AzureWebJobsSendGridApiKey`.|
+| **To** | (Optional) The recipient's email address. |
+| **From** | (Optional) The sender's email address. |
+| **Subject** | (Optional) The subject of the email. |
+| **Text** | (Optional) The email content. |
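Here's a sketch of the attribute in a method signature; the queue name and app setting names are illustrative:

```csharp
[FunctionName("SendEmail")]
public static void Run(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] OutgoingEmail email,
    [SendGrid(ApiKey = "CustomSendGridKeyAppSettingName")] out SendGridMessage message)
{
    // Build the SendGridMessage from the incoming queue item here.
}
```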
-```csharp
-[FunctionName("SendEmail")]
-public static void Run(
- [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] OutgoingEmail email,
- [SendGrid(ApiKey = "CustomSendGridKeyAppSettingName")] out SendGridMessage message)
-{
- ...
-}
-```
+# [Isolated process](#tab/isolated-process)
-For a complete example, see [C# example](#example).
+In [isolated process](dotnet-isolated-process-guide.md) function apps, the `SendGridOutputAttribute` supports the following parameters:
-# [C# Script](#tab/csharp-script)
+| Attribute/annotation property | Description |
+|-|-|
+| **ApiKey** | The name of an app setting that contains your API key. If not set, the default app setting name is `AzureWebJobsSendGridApiKey`.|
+| **To** | (Optional) The recipient's email address. |
+| **From** | (Optional) The sender's email address. |
+| **Subject** | (Optional) The subject of the email. |
+| **Text** | (Optional) The email content. |
-Attributes are not supported by C# Script.
+# [C# Script](#tab/csharp-script)
-# [JavaScript](#tab/javascript)
+The following table explains the binding configuration properties that you set in the *function.json* file:
-Attributes are not supported by JavaScript.
+| *function.json* property | Description |
+|--|--|
+| **type** | Must be set to `sendGrid`.|
+| **direction** | Must be set to `out`.|
+| **name** | The variable name used in function code for the request or request body. This value is `$return` when there is only one return value. |
+| **apiKey** | The name of an app setting that contains your API key. If not set, the default app setting name is *AzureWebJobsSendGridApiKey*.|
+| **to**| (Optional) The recipient's email address. |
+| **from**| (Optional) The sender's email address. |
+| **subject**| (Optional) The subject of the email. |
+| **text**| (Optional) The email content. |
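Putting these properties together, a *function.json* binding definition might look like this sketch (the binding name, app setting name, and sender address are placeholders):

```json
{
  "bindings": [
    {
      "type": "sendGrid",
      "direction": "out",
      "name": "message",
      "apiKey": "CustomSendGridKeyAppSettingName",
      "from": "sender@contoso.com"
    }
  ]
}
```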
-# [Python](#tab/python)
+
-Attributes are not supported by Python.
+## Annotations
-# [Java](#tab/java)
+The [SendGridOutput](/java/api/com.microsoft.azure.functions.annotation.sendgridoutput) annotation allows you to declaratively configure the SendGrid binding by providing the following configuration values.
-The [SendGridOutput](https://github.com/Azure/azure-functions-java-library/blob/master/src/main/java/com/microsoft/azure/functions/annotation/SendGridOutput.java) annotation allows you to declaratively configure the SendGrid binding by providing configuration values. See the [example](#example) and [configuration](#configuration) sections for more detail.
++ [apiKey](/java/api/com.microsoft.azure.functions.annotation.sendgridoutput.apikey)++ [dataType](/java/api/com.microsoft.azure.functions.annotation.sendgridoutput.datatype)++ [name](/java/api/com.microsoft.azure.functions.annotation.sendgridoutput.name)++ [to](/java/api/com.microsoft.azure.functions.annotation.sendgridoutput.to)++ [from](/java/api/com.microsoft.azure.functions.annotation.sendgridoutput.from)++ [subject](/java/api/com.microsoft.azure.functions.annotation.sendgridoutput.subject)++ [text](/java/api/com.microsoft.azure.functions.annotation.sendgridoutput.text) - ## Configuration The following table lists the binding configuration properties available in the *function.json* file and the `SendGrid` attribute/annotation.
-| *function.json* property | Attribute/annotation property | Description | Optional |
-|--|-|-|-|
-| type |n/a| Must be set to `sendGrid`.| No |
-| direction |n/a| Must be set to `out`.| No |
-| name |n/a| The variable name used in function code for the request or request body. This value is `$return` when there is only one return value. | No |
-| apiKey | ApiKey | The name of an app setting that contains your API key. If not set, the default app setting name is *AzureWebJobsSendGridApiKey*.| No |
-| to| To | The recipient's email address. | Yes |
-| from| From | The sender's email address. | Yes |
-| subject| Subject | The subject of the email. | Yes |
-| text| Text | The email content. | Yes |
+| *function.json* property | Description |
+|--|--|
+| **type** | Must be set to `sendGrid`.|
+| **direction** | Must be set to `out`.|
+| **name** | The variable name used in function code for the request or request body. This value is `$return` when there is only one return value. |
+| **apiKey** | The name of an app setting that contains your API key. If not set, the default app setting name is *AzureWebJobsSendGridApiKey*.|
+| **to**| (Optional) The recipient's email address. |
+| **from**| (Optional) The sender's email address. |
+| **subject**| (Optional) The subject of the email. |
+| **text**| (Optional) The email content. |
Optional properties may have default values defined in the binding; these values can also be set or overridden programmatically. + [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] <a name="host-json"></a>
Optional properties may have default values defined in the binding and either ad
|Property |Default | Description | ||||
-|from|n/a|The sender's email address across all functions.|
+|**from**|n/a|The sender's email address across all functions.|
## Next steps > [!div class="nextstepaction"] > [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)+
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
Title: Azure Service Bus output bindings for Azure Functions description: Learn to send Azure Service Bus messages from Azure Functions.-- ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 02/19/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python -
+zone_pivot_groups: programming-languages-set-functions-lang-workers
+ # Azure Service Bus output binding for Azure Functions Use Azure Service Bus output binding to send queue or topic messages.
For information on setup and configuration details, see the [overview](functions
## Example
-# [C#](#tab/csharp)
++
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that sends a Service Bus queue message:
public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log)
return input.Text; } ```
+# [Isolated process](#tab/isolated-process)
+
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to a different Service Bus queue:
+ # [C# Script](#tab/csharp-script)
public static async Task Run(TimerInfo myTimer, ILogger log, IAsyncCollector<str
await outputSbQueue.AddAsync("2 " + message); } ```+
-# [Java](#tab/java)
The following example shows a Java function that sends a message to a Service Bus queue `myqueue` when triggered by an HTTP request.
Java functions can also write to a Service Bus topic. The following example uses
} ```
-# [JavaScript](#tab/javascript)
The following example shows a Service Bus output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function uses a timer trigger to send a queue message every 15 seconds.
module.exports = async function (context, myTimer) {
}; ```
-# [PowerShell](#tab/powershell)
The following example shows a Service Bus output binding in a *function.json* file and a [PowerShell function](functions-reference-powershell.md) that uses the binding.
Here's the binding data in the *function.json* file:
Here's the PowerShell that creates a message as the function's output. ```powershell
-param($QueueItem,ΓÇ»$TriggerMetadata)
+param($QueueItem, $TriggerMetadata)
-Push-OutputBinding -Name outputSbMsg -Value @{
-    name = $QueueItem.name
-    employeeId = $QueueItem.employeeId
-    address = $QueueItem.address
+Push-OutputBinding -Name outputSbMsg -Value @{
+ name = $QueueItem.name
+ employeeId = $QueueItem.employeeId
+ address = $QueueItem.address
} ```
-# [Python](#tab/python)
The following example demonstrates how to write out to a Service Bus queue in Python.
def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
-## Attributes and annotations
+## Attributes
-# [C#](#tab/csharp)
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+
+# [In-process](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), use the [ServiceBusAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusAttribute.cs).
-The attribute's constructor takes the name of the queue or the topic and subscription. You can also specify the connection's access rights. How to choose the access rights setting is explained in the [Output - configuration](#configuration) section. Here's an example that shows the attribute applied to the return value of the function:
+The following table explains the properties you can set using the attribute:
+
+| Property |Description|
+| | |
+|**QueueName**|Name of the queue. Set only if sending queue messages, not for a topic. |
+|**TopicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
+|**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
+|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `Connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `Access` to `listen`. Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
+
+Here's an example that shows the attribute applied to the return value of the function:
```csharp [FunctionName("ServiceBusOutput")]
public static string Run([HttpTrigger] dynamic input, ILogger log)
} ```
-For a complete example, see [Output - example](#example).
+For a complete example, see [Example](#example).
-You can use the `ServiceBusAccount` attribute to specify the Service Bus account to use at class, method, or parameter level. For more information, see [Trigger - attributes](functions-bindings-service-bus-trigger.md#attributes-and-annotations).
+You can use the `ServiceBusAccount` attribute to specify the Service Bus account to use at class, method, or parameter level. For more information, see [Attributes](functions-bindings-service-bus-trigger.md#attributes) in the trigger reference.
-# [C# Script](#tab/csharp-script)
+# [Isolated process](#tab/isolated-process)
-Attributes are not supported by C# Script.
+In [C# class libraries](dotnet-isolated-process-guide.md), use the [ServiceBusOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.ServiceBus/src/ServiceBusOutputAttribute.cs) to define the queue or topic that the output message is sent to.
-# [Java](#tab/java)
+The following table explains the properties you can set using the attribute:
-The `ServiceBusQueueOutput` and `ServiceBusTopicOutput` annotations are available to write a message as a function output. The parameter decorated with these annotations must be declared as an `OutputBinding<T>` where `T` is the type corresponding to the message's type.
+| Property |Description|
+| | |
+|**EntityType**|Sets the entity type as either `Queue` for sending messages to a queue or `Topic` when sending messages to a topic. |
+|**QueueOrTopicName**|Name of the topic or queue to send messages to. Use `EntityType` to set the destination type.|
+|**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
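As an illustrative sketch (not taken from an official sample), a minimal isolated-process function using these attribute properties might look like the following; the queue name `myqueue` and the `ServiceBusConnection` app setting are placeholder assumptions:

```csharp
// Sketch: isolated-process output binding applied to the return value.
// "myqueue" and "ServiceBusConnection" are placeholder names.
[Function("ServiceBusOutputSketch")]
[ServiceBusOutput("myqueue", Connection = "ServiceBusConnection")]
public static string Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
{
    // The returned string becomes the body of the queue message.
    return "Message created by ServiceBusOutputSketch";
}
```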
-# [JavaScript](#tab/javascript)
+# [C# script](#tab/csharp-script)
-Attributes are not supported by JavaScript.
+C# script uses a *function.json* file for configuration instead of attributes. The following table explains the binding configuration properties that you set in the *function.json* file.
-# [PowerShell](#tab/powershell)
+|function.json property | Description|
+||-|
+|**type** |Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. |
+|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.|
+|**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
+|**connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
+|**accessRights** (v1 only)|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
-Attributes are not supported by PowerShell.
+
-# [Python](#tab/python)
+## Annotations
-Attributes are not supported by Python.
+The `ServiceBusQueueOutput` and `ServiceBusTopicOutput` annotations are available to write a message as a function output. The parameter decorated with these annotations must be declared as an `OutputBinding<T>` where `T` is the type corresponding to the message's type.
- ## Configuration The following table explains the binding configuration properties that you set in the *function.json* file and the `ServiceBus` attribute.
-|function.json property | Attribute property |Description|
+|function.json property | Description|
|||-|
-|**type** | n/a | Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | n/a | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | n/a | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. |
-|**queueName**|**QueueName**|Name of the queue. Set only if sending queue messages, not for a topic.
-|**topicName**|**TopicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
-|**connection**|**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
-|**accessRights** (v1 only)|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
+|**type** |Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. |
+|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.|
+|**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
+|**connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
+|**accessRights** (v1 only)|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
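Assembled from the properties in the preceding table, a *function.json* output binding might look like this sketch; the queue name, binding name, and connection setting name are placeholder assumptions:

```json
{
  "bindings": [
    {
      "type": "serviceBus",
      "direction": "out",
      "name": "outputSbMsg",
      "queueName": "myqueue",
      "connection": "ServiceBusConnection"
    }
  ]
}
```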
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] +
+See the [Example section](#example) for complete examples.
## Usage
-In Azure Functions 1.x, the runtime creates the queue if it doesn't exist and you have set `accessRights` to `manage`. In Functions version 2.x and higher, the queue or topic must already exist; if you specify a queue or topic that doesn't exist, the function will fail.
-# [C#](#tab/csharp)
+The following output parameter types are supported by all C# modalities and extension versions:
-Use the following parameter types for the output binding:
+| Type | Description |
+| | |
+| **[System.String](/dotnet/api/system.string)** | Use when the message to write is simple text. If the parameter value is null when the function exits, Functions doesn't create a message.|
+| **byte[]** | Use for writing binary data messages. If the parameter value is null when the function exits, Functions doesn't create a message. |
+| **Object** | When a message contains JSON, Functions serializes the object into a JSON message payload. If the parameter value is null when the function exits, Functions creates a message with a null object.|
-* `out T paramName` - `T` can be any JSON-serializable type. If the parameter value is null when the function exits, Functions creates the message with a null object.
-* `out string` - If the parameter value is null when the function exits, Functions does not create a message.
-* `out byte[]` - If the parameter value is null when the function exits, Functions does not create a message.
-* `out BrokeredMessage` - If the parameter value is null when the function exits, Functions does not create a message (for Functions 1.x)
-* `out Message` - If the parameter value is null when the function exits, Functions does not create a message (for Functions 2.x and higher)
-* `ICollector<T>` or `IAsyncCollector<T>` (for async methods) - For creating multiple messages. A message is created when you call the `Add` method.
+Messaging-specific parameter types contain additional message metadata. The specific types supported by the Service Bus output binding depend on the Functions runtime version, the extension package version, and the C# modality used.
-When working with C# functions:
+# [Extension v5.x](#tab/extensionv5/in-process)
-* Async functions need a return value or `IAsyncCollector` instead of an `out` parameter.
+Use the [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) type when sending messages with metadata. Parameters are defined as `return` type attributes. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
-* To access the session ID, bind to a [`Message`](/dotnet/api/microsoft.azure.servicebus.message) type and use the `sessionId` property.
+If the parameter value is null when the function exits, Functions doesn't create a message.
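The multiple-message pattern described above might be sketched as follows; the queue name and connection setting are placeholder assumptions:

```csharp
// Sketch: write several messages with IAsyncCollector<ServiceBusMessage>.
// "myqueue" and "ServiceBusConnection" are placeholder names.
[FunctionName("ServiceBusMultiOutput")]
public static async Task Run(
    [HttpTrigger] dynamic input,
    [ServiceBus("myqueue", Connection = "ServiceBusConnection")]
    IAsyncCollector<ServiceBusMessage> collector)
{
    // Each AddAsync call creates one queue message.
    await collector.AddAsync(new ServiceBusMessage("first message"));
    await collector.AddAsync(new ServiceBusMessage("second message"));
}
```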
-### Additional types
-Apps using the 5.0.0 or higher version of the Service Bus extension use the `ServiceBusMessage` type in [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus.servicebusmessage) instead of the one in the [Microsoft.Azure.ServiceBus](/dotnet/api/microsoft.azure.servicebus.message) namespace. This version drops support for the legacy `Message` type in favor of the following types:
+# [Functions 2.x and higher](#tab/functionsv2/in-process)
-- [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage)
+Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type when sending messages with metadata. Parameters are defined as `return` type attributes. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
-# [C# Script](#tab/csharp-script)
+If the parameter value is null when the function exits, Functions doesn't create a message.
-Use the following parameter types for the output binding:
-* `out T paramName` - `T` can be any JSON-serializable type. If the parameter value is null when the function exits, Functions creates the message with a null object.
-* `out string` - If the parameter value is null when the function exits, Functions does not create a message.
-* `out byte[]` - If the parameter value is null when the function exits, Functions does not create a message.
-* `out BrokeredMessage` - If the parameter value is null when the function exits, Functions does not create a message (for Functions 1.x)
-* `out Message` - If the parameter value is null when the function exits, Functions does not create a message (for Functions 2.x and higher)
-* `ICollector<T>` or `IAsyncCollector<T>` - For creating multiple messages. A message is created when you call the `Add` method.
+# [Functions 1.x](#tab/functionsv1/in-process)
-When working with C# functions:
+Use the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) type when sending messages with metadata. Parameters are defined as `return` type attributes. If the parameter value is null when the function exits, Functions doesn't create a message.
-* Async functions need a return value or `IAsyncCollector` instead of an `out` parameter.
-* To access the session ID, bind to a [`Message`](/dotnet/api/microsoft.azure.servicebus.message) type and use the `sessionId` property.
+# [Extension 5.x and higher](#tab/extensionv5/isolated-process)
-### Additional types
-Apps using the 5.0.0 or higher version of the Service Bus extension use the `ServiceBusMessage` type in [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus.servicebusmessage) instead of the one in the[Microsoft.Azure.ServiceBus](/dotnet/api/microsoft.azure.servicebus.message) namespace. This version drops support for the legacy `Message` type in favor of the following types:
+Messaging-specific types are not yet supported.
-- [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage)
+# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-# [Java](#tab/java)
+Messaging-specific types are not yet supported.
-Use the [Azure Service Bus SDK](../service-bus-messaging/index.yml) rather than the built-in output binding.
+# [Functions 1.x](#tab/functionsv1/isolated-process)
-# [JavaScript](#tab/javascript)
+Messaging-specific types are not yet supported.
-Access the queue or topic by using `context.bindings.<name from function.json>`. You can assign a string, a byte array, or a JavaScript object (deserialized into JSON) to `context.binding.<name>`.
+# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
-# [PowerShell](#tab/powershell)
+Use the [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) type when sending messages with metadata. Parameters are defined as `out` parameters. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
-Output to the Service Bus is available via the `Push-OutputBinding` cmdlet where you pass arguments that match the name designated by binding's name parameter in the *function.json* file.
+If the parameter value is null when the function exits, Functions doesn't create a message.
-# [Python](#tab/python)
+# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
-Use the [Azure Service Bus SDK](../service-bus-messaging/index.yml) rather than the built-in output binding.
+Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type when sending messages with metadata. Parameters are defined as `out` parameters. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
+
+If the parameter value is null when the function exits, Functions doesn't create a message.
+
+# [Functions 1.x](#tab/functionsv1/csharp-script)
+
+Use the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) type when sending messages with metadata. Parameters are defined as `out` parameters. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
+
+If the parameter value is null when the function exits, Functions doesn't create a message.
+
+In Azure Functions 1.x, the runtime creates the queue if it doesn't exist and you have set `accessRights` to `manage`. In Functions version 2.x and higher, the queue or topic must already exist; if you specify a queue or topic that doesn't exist, the function fails.
+
+<!--Any of the below pivots can be combined if the usage info is identical.-->
+Use the [Azure Service Bus SDK](../service-bus-messaging/index.yml) rather than the built-in output binding.
+Access the queue or topic by using `context.bindings.<name from function.json>`. You can assign a string, a byte array, or a JavaScript object (deserialized into JSON) to `context.binding.<name>`.
+Output to the Service Bus is available via the `Push-OutputBinding` cmdlet, where you pass arguments that match the name designated by the binding's `name` parameter in the *function.json* file.
+Use the [Azure Service Bus SDK](../service-bus-messaging/index.yml) rather than the built-in output binding.
+For a complete example, see [the examples section](#example).
+ ## Exceptions and return codes
Use the [Azure Service Bus SDK](../service-bus-messaging/index.yml) rather than
## Next steps -- [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)
+- [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
Title: Azure Service Bus trigger for Azure Functions description: Learn to run an Azure Function when Azure Service Bus messages are created.-- ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 02/19/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Service Bus trigger for Azure Functions
For information on setup and configuration details, see the [overview](functions
## Example
-# [C#](#tab/csharp)
++
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that reads [message metadata](#message-metadata) and logs a Service Bus queue message:
public static void Run(
log.LogInformation($"MessageId={messageId}"); } ```
+# [Isolated process](#tab/isolated-process)
+
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to a different Service Bus queue:
+ # [C# Script](#tab/csharp-script)
public static void Run(string myQueueItem,
log.Info($"MessageId={messageId}"); } ```+
-# [Java](#tab/java)
The following Java function uses the `@ServiceBusQueueTrigger` annotation from the [Java functions runtime library](/java/api/overview/azure/functions/runtime) to describe the configuration for a Service Bus queue trigger. The function grabs the message placed on the queue and adds it to the logs.
Java functions can also be triggered when a message is added to a Service Bus to
context.getLogger().info(message); } ```-
-# [JavaScript](#tab/javascript)
The following example shows a Service Bus trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function reads [message metadata](#message-metadata) and logs a Service Bus queue message.
module.exports = async function(context, myQueueItem) {
}; ```
-# [PowerShell](#tab/powershell)
The following example shows a Service Bus trigger binding in a *function.json* file and a [PowerShell function](functions-reference-powershell.md) that uses the binding.
param([string] $mySbMsg, $TriggerMetadata)
Write-Host "PowerShell ServiceBus queue trigger function processed message: $mySbMsg" ```
-# [Python](#tab/python)
The following example demonstrates how to read a Service Bus queue message via a trigger.
def main(msg: func.ServiceBusMessage):
logging.info(result) ```
+## Attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file.
+
+# [In-process](#tab/in-process)
+
+The following table explains the properties you can set using this trigger attribute:
+
+| Property |Description|
+| | |
+|**QueueName**|Name of the queue to monitor. Set only if monitoring a queue, not for a topic. |
+|**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
+|**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
+|**Connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
+|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
+|**IsBatched**| Messages are delivered in batches. Requires an array or collection type. |
+|**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
+|**AutoComplete**|`true` if the trigger should automatically call complete after processing; `false` if the function code will manually call complete.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed. |
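A minimal in-process trigger sketch using these properties follows; the queue name and connection setting name are placeholder assumptions:

```csharp
// Sketch: in-process Service Bus queue trigger.
// "myqueue" and "ServiceBusConnection" are placeholder names.
[FunctionName("ServiceBusQueueTriggerSketch")]
public static void Run(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")]
    string myQueueItem, ILogger log)
{
    log.LogInformation($"Processed message: {myQueueItem}");
}
```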
+
+# [Isolated process](#tab/isolated-process)
+
+The following table explains the properties you can set using this trigger attribute:
+
+| Property |Description|
+| | |
+|**QueueName**|Name of the queue to monitor. Set only if monitoring a queue, not for a topic. |
+|**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
+|**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
+|**Connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
+|**IsBatched**| Messages are delivered in batches. Requires an array or collection type. |
+|**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
+
+# [C# script](#tab/csharp-script)
+
+C# script uses a *function.json* file for configuration instead of attributes. The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `serviceBusTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue or topic message in function code. |
+|**queueName**| Name of the queue to monitor. Set only if monitoring a queue, not for a topic.|
+|**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
+|**subscriptionName**| Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
+|**connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
+|**accessRights**| Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
+|**isSessionsEnabled**| `true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
+|**autoComplete**| `true` if the trigger should automatically call complete after processing; `false` if the function code will manually call complete.<br/><br/>Setting to `false` is only supported in C#.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br/><br/>This property is available only in Azure Functions 2.x and higher. |
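Combined, a C# script trigger definition in *function.json* might look like this sketch; the binding name, queue name, and connection setting name are placeholder assumptions:

```json
{
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "direction": "in",
      "name": "mySbMsg",
      "queueName": "myqueue",
      "connection": "ServiceBusConnection"
    }
  ]
}
```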
+## Annotations
-## Attributes and annotations
-
-# [C#](#tab/csharp)
-
-In [C# class libraries](functions-dotnet-class-library.md), use the following attributes to configure a Service Bus trigger:
+The `ServiceBusQueueTrigger` annotation allows you to create a function that runs when a Service Bus queue message is created. Configuration options available include the following properties:
-* [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs)
+|Property | Description|
+||-|
+|**name** | The name of the variable that represents the queue or topic message in function code. |
+|**queueName**| Name of the queue to monitor. Set only if monitoring a queue, not for a topic.|
+|**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
+|**subscriptionName**| Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
+|**connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
- The attribute's constructor takes the name of the queue or the topic and subscription. In Azure Functions version 1.x, you can also specify the connection's access rights. If you don't specify access rights, the default is `Manage`. For more information, see the [Trigger - configuration](#configuration) section.
+The `ServiceBusTopicTrigger` annotation allows you to designate a topic and subscription to target what data triggers the function.
- Here's an example that shows the attribute used with a string parameter:
- ```csharp
- [FunctionName("ServiceBusQueueTriggerCSharp")]
- public static void Run(
- [ServiceBusTrigger("myqueue")] string myQueueItem, ILogger log)
- {
- ...
- }
- ```
+See the trigger [example](#example) for more detail.
- Since the `Connection` property isn't defined, Functions looks for an app setting named `AzureWebJobsServiceBus`, which is the default name for the Service Bus connection string. You can also set the `Connection` property to specify the name of an application setting that contains the Service Bus connection string to use, as shown in the following example:
+## Configuration
- ```csharp
- [FunctionName("ServiceBusQueueTriggerCSharp")]
- public static void Run(
- [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")]
- string myQueueItem, ILogger log)
- {
- ...
- }
- ```
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `serviceBusTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue or topic message in function code. |
+|**queueName**| Name of the queue to monitor. Set only if monitoring a queue, not for a topic.|
+|**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
+|**subscriptionName**| Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
+|**connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
+|**accessRights**| Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
+|**isSessionsEnabled**| `true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
+|**autoComplete**| Must be `true` for non-C# functions; the trigger either automatically calls complete after processing, or the function code manually calls complete.<br/><br/>When set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>An exception in the function results in the runtime calling `abandonAsync` in the background. If no exception occurs, `completeAsync` is called in the background. This property is available only in Azure Functions 2.x and higher. |
- For a complete example, see [Trigger - example](#example).
-* [ServiceBusAccountAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusAccountAttribute.cs)
- Provides another way to specify the Service Bus account to use. The constructor takes the name of an app setting that contains a Service Bus connection string. The attribute can be applied at the parameter, method, or class level. The following example shows class level and method level:
+See the [Example section](#example) for complete examples.
- ```csharp
- [ServiceBusAccount("ClassLevelServiceBusAppSetting")]
- public static class AzureFunctions
- {
- [ServiceBusAccount("MethodLevelServiceBusAppSetting")]
- [FunctionName("ServiceBusQueueTriggerCSharp")]
- public static void Run(
- [ServiceBusTrigger("myqueue", AccessRights.Manage)]
- string myQueueItem, ILogger log)
- {
- ...
- }
- ```
+## Usage
-The Service Bus account to use is determined in the following order:
+The following parameter types are supported by all C# modalities and extension versions:
-* The `ServiceBusTrigger` attribute's `Connection` property.
-* The `ServiceBusAccount` attribute applied to the same parameter as the `ServiceBusTrigger` attribute.
-* The `ServiceBusAccount` attribute applied to the function.
-* The `ServiceBusAccount` attribute applied to the class.
-* The "AzureWebJobsServiceBus" app setting.
+| Type | Description |
+| | |
+| **[System.String](/dotnet/api/system.string)** | Use when the message is simple text. |
+| **byte[]** | Use for binary data messages. |
+| **Object** | When a message contains JSON, Functions tries to deserialize the JSON data into a known plain-old CLR object (POCO) type. |
-# [C# Script](#tab/csharp-script)
+Messaging-specific parameter types contain additional message metadata. The specific types supported by the Service Bus trigger depend on the Functions runtime version, the extension package version, and the C# modality used.
-Attributes are not supported by C# Script.
+# [Extension v5.x](#tab/extensionv5/in-process)
-# [Java](#tab/java)
+Use the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) type to receive message metadata from Service Bus Queues and Subscriptions. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md).
-The `ServiceBusQueueTrigger` annotation allows you to create a function that runs when a Service Bus queue message is created. Configuration options available include queue name and connection string name.
+In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue or the topic and subscription.
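As a sketch, binding to `ServiceBusReceivedMessage` to read metadata might look like the following; the queue name and connection setting name are placeholder assumptions:

```csharp
// Sketch: receive message metadata with ServiceBusReceivedMessage (extension 5.x).
// "myqueue" and "ServiceBusConnection" are placeholder names.
[FunctionName("ServiceBusMetadataTrigger")]
public static void Run(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")]
    ServiceBusReceivedMessage message, ILogger log)
{
    log.LogInformation($"MessageId={message.MessageId} Subject={message.Subject}");
}
```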
-The `ServiceBusTopicTrigger` annotation allows you to designate a topic and subscription to target what data triggers the function.
-See the trigger [example](#example) for more detail.
+# [Functions 2.x and higher](#tab/functionsv2/in-process)
-# [JavaScript](#tab/javascript)
+Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type to receive messages with metadata. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md).
-Attributes are not supported by JavaScript.
+In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue or the topic and subscription.
-# [PowerShell](#tab/powershell)
-Attributes are not supported by PowerShell.
+# [Functions 1.x](#tab/functionsv1/in-process)
-# [Python](#tab/python)
+The following parameter types are available for the queue or topic message:
-Attributes are not supported by Python.
+* [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method.
+* [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container, which is required when `autoComplete` is set to `false`.
-
+In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue or the topic and subscription. In Azure Functions version 1.x, you can also specify the connection's access rights. If you don't specify access rights, the default is `Manage`.
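
A hedged Functions 1.x sketch follows; `myqueue` is a placeholder queue name, and `AccessRights.Listen` is shown only to illustrate overriding the default `Manage` rights:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.ServiceBus.Messaging;

public static class QueueTriggerV1Sample
{
    // Functions 1.x sketch: queue name and access rights are illustrative.
    public static void Run(
        [ServiceBusTrigger("myqueue", AccessRights.Listen)] BrokeredMessage message,
        TraceWriter log)
    {
        // GetBody<T>() deserializes the message payload.
        log.Info($"Received: {message.GetBody<string>()}");
    }
}
```
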
-## Configuration
-The following table explains the binding configuration properties that you set in the *function.json* file and the `ServiceBusTrigger` attribute.
-
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to "serviceBusTrigger". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | n/a | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | n/a | The name of the variable that represents the queue or topic message in function code. |
-|**queueName**|**QueueName**|Name of the queue to monitor. Set only if monitoring a queue, not for a topic.
-|**topicName**|**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
-|**subscriptionName**|**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
-|**connection**|**Connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
-|**accessRights**|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
-|**isSessionsEnabled**|**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
-|**autoComplete**|**AutoComplete**|`true` Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br><br>Setting to `false` is only supported in C#.<br><br>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, exceptions in the function results in the runtime calls `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. This property is available only in Azure Functions 2.x and higher. |
+# [Extension 5.x and higher](#tab/extensionv5/isolated-process)
+Messaging-specific types are not yet supported.
+# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-## Usage
+Messaging-specific types are not yet supported.
-# [C#](#tab/csharp)
+# [Functions 1.x](#tab/functionsv1/isolated-process)
-The following parameter types are available for the queue or topic message:
+Messaging-specific types are not yet supported.
-* `string` - If the message is text.
-* `byte[]` - Useful for binary data.
-* A custom type - If the message contains JSON, Azure Functions tries to deserialize the JSON data.
-* `BrokeredMessage` - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1)
- method.
-* [`MessageReceiver`](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container (required when [`autoComplete`](functions-bindings-service-bus.md#hostjson-settings) is set to `false`)
+# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
-These parameter types are for Azure Functions version 1.x; for 2.x and higher, use [`Message`](/dotnet/api/microsoft.azure.servicebus.message) instead of `BrokeredMessage`.
+Use the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) type to receive message metadata from Service Bus Queues and Subscriptions. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md).
-### Additional types
-Apps using the 5.0.0 or higher version of the Service Bus extension use the `ServiceBusReceivedMessage` type in [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) instead of the one in the [Microsoft.Azure.ServiceBus](/dotnet/api/microsoft.azure.servicebus.message) namespace. This version drops support for the legacy `Message` type in favor of the following types:
+# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
-- [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage)
+Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type to receive messages with metadata. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md).
-# [C# Script](#tab/csharp-script)
+# [Functions 1.x](#tab/functionsv1/csharp-script)
The following parameter types are available for the queue or topic message:
-* `string` - If the message is text.
-* `byte[]` - Useful for binary data.
-* A custom type - If the message contains JSON, Azure Functions tries to deserialize the JSON data.
-* `BrokeredMessage` - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1)
- method.
-
-These parameters are for Azure Functions version 1.x; for 2.x and higher, use [`Message`](/dotnet/api/microsoft.azure.servicebus.message) instead of `BrokeredMessage`.
+* [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method.
+* [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container, which is required when `autoComplete` is set to `false`.
-### Additional types
-Apps using the 5.0.0 or higher version of the Service Bus extension use the `ServiceBusReceivedMessage` type in [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) instead of the one in the [Microsoft.Azure.ServiceBus](/dotnet/api/microsoft.azure.servicebus.message) namespace. This version drops support for the legacy `Message` type in favor of the following types:
+ -- [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage)
+When the `Connection` property isn't defined, Functions looks for an app setting named `AzureWebJobsServiceBus`, which is the default name for the Service Bus connection string. You can also set the `Connection` property to specify the name of an application setting that contains the Service Bus connection string to use.
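
For example, in a *function.json*-based app, the connection setting name goes in the `connection` property; this fragment assumes a hypothetical app setting named `ServiceBusConnection` and a queue named `myqueue`:

```json
{
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "direction": "in",
      "name": "myQueueItem",
      "queueName": "myqueue",
      "connection": "ServiceBusConnection"
    }
  ]
}
```
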
-# [Java](#tab/java)
The incoming Service Bus message is available via a `ServiceBusQueueMessage` or `ServiceBusTopicMessage` parameter.
-[See the example for details](#example).
-
-# [JavaScript](#tab/javascript)
- Access the queue or topic message by using `context.bindings.<name from function.json>`. The Service Bus message is passed into the function as either a string or JSON object.
-# [PowerShell](#tab/powershell)
- The Service Bus instance is available via the parameter configured in the *function.json* file's name property.
-# [Python](#tab/python)
- The queue message is available to the function via a parameter typed as `func.ServiceBusMessage`. The Service Bus message is passed into the function as either a string or JSON object.
+For a complete example, see [the examples section](#example).
- ## Poison messages
The Functions runtime receives a message in [PeekLock mode](../service-bus-messa
The `maxAutoRenewDuration` is configurable in *host.json*, which maps to [OnMessageOptions.MaxAutoRenewDuration](/dotnet/api/microsoft.azure.servicebus.messagehandleroptions.maxautorenewduration). The maximum allowed for this setting is 5 minutes according to the Service Bus documentation, whereas you can increase the Functions time limit from the default of 5 minutes to 10 minutes. You wouldn't want to do that for Service Bus functions, because you'd exceed the Service Bus renewal limit.

## Message metadata
-The Service Bus trigger provides several [metadata properties](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties can be used as part of binding expressions in other bindings or as parameters in your code. These properties are members of the [Message](/dotnet/api/microsoft.azure.servicebus.message) class.
+Messaging-specific types let you easily retrieve [metadata as properties of the object](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties depend on the Functions runtime version, the extension package version, and the C# modality used.
+
+# [Extension v5.x](#tab/extensionv5/in-process)
+
+These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) class.
+
+|Property|Type|Description|
+|--|-|--|
+|`ApplicationProperties`|`ApplicationProperties`|Properties set by the sender.|
+|`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
+|`CorrelationId`|`string`|The correlation ID.|
+|`DeliveryCount`|`Int32`|The number of deliveries.|
+|`EnqueuedTime`|`DateTime`|The enqueued time in UTC.|
+|`ScheduledEnqueueTime`|`DateTime`|The scheduled enqueued time in UTC.|
+|`ExpiresAt`|`DateTime`|The expiration time in UTC.|
+|`MessageId`|`string`|A user-defined value that Service Bus can use to identify duplicate messages, if enabled.|
+|`ReplyTo`|`string`|The reply to queue address.|
+|`Subject`|`string`|The application-specific label, which can be used in place of the `Label` metadata property.|
+|`To`|`string`|The send to address.|
+
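
The table above can be read directly off the bound message object. As a sketch, assuming placeholder queue and connection names:

```csharp
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MetadataSample
{
    // Sketch only: "myqueue" and "ServiceBusConnection" are placeholder names.
    [FunctionName("MetadataSample")]
    public static void Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")]
        ServiceBusReceivedMessage message,
        ILogger log)
    {
        log.LogInformation("EnqueuedTime: {t}", message.EnqueuedTime);
        log.LogInformation("DeliveryCount: {c}", message.DeliveryCount);

        // ApplicationProperties holds custom properties set by the sender.
        foreach (var prop in message.ApplicationProperties)
        {
            log.LogInformation("{key} = {value}", prop.Key, prop.Value);
        }
    }
}
```
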
+# [Functions 2.x and higher](#tab/functionsv2/in-process)
+
+These properties are members of the [Message](/dotnet/api/microsoft.azure.servicebus.message) class.
+
+|Property|Type|Description|
+|--|-|--|
+|`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
+|`CorrelationId`|`string`|The correlation ID.|
+|`DeliveryCount`|`Int32`|The number of deliveries.|
+|`ScheduledEnqueueTimeUtc`|`DateTime`|The scheduled enqueued time in UTC.|
+|`ExpiresAtUtc`|`DateTime`|The expiration time in UTC.|
+|`Label`|`string`|The application-specific label.|
+|`MessageId`|`string`|A user-defined value that Service Bus can use to identify duplicate messages, if enabled.|
+|`ReplyTo`|`string`|The reply to queue address.|
+|`To`|`string`|The send to address.|
+|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. |
+
+# [Functions 1.x](#tab/functionsv1/in-process)
+
+These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) and [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) classes.
|Property|Type|Description|
|--|-|--|
The Service Bus trigger provides several [metadata properties](./functions-bindi
|`ReplyTo`|`string`|The reply to queue address.|
|`SequenceNumber`|`long`|The unique number assigned to a message by the Service Bus.|
|`To`|`string`|The send to address.|
-|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. (For version 5.x+ of the extension this is not supported, please use `ApplicationProperties`.)|
+|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. |
+
+# [Extension 5.x and higher](#tab/extensionv5/isolated-process)
+
+Messaging-specific types are not yet supported.
+
+# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-See [code examples](#example) that use these properties earlier in this article.
+Messaging-specific types are not yet supported.
-### Additional message metadata
+# [Functions 1.x](#tab/functionsv1/isolated-process)
-The below metadata properties are supported for apps using 5.0.0 of the extension or higher. These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) class.
+Messaging-specific types are not yet supported.
+
+# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
+
+These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) class.
|Property|Type|Description|
|--|-|--|
-|`ApplicationProperties`|`ApplicationProperties`|Properties set by the sender. Use this in place of the `UserProperties` metadata property.|
+|`ApplicationProperties`|`ApplicationProperties`|Properties set by the sender.|
+|`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
+|`CorrelationId`|`string`|The correlation ID.|
+|`DeliveryCount`|`Int32`|The number of deliveries.|
+|`EnqueuedTime`|`DateTime`|The enqueued time in UTC.|
+|`ScheduledEnqueueTime`|`DateTime`|The scheduled enqueued time in UTC.|
+|`ExpiresAt`|`DateTime`|The expiration time in UTC.|
+|`MessageId`|`string`|A user-defined value that Service Bus can use to identify duplicate messages, if enabled.|
+|`ReplyTo`|`string`|The reply to queue address.|
|`Subject`|`string`|The application-specific label, which can be used in place of the `Label` metadata property.|
-|`MessageActions`|`ServiceBusMessageActions`|The set of actions which can be performed on a `ServiceBusReceivedMessage`. This can be used in place of the `MessageReceiver` metadata property.
-|`SessionActions`|`ServiceBusSessionMessageActions`|The set of actions that can be performed on a session and a `ServiceBusReceivedMessage`. This can be used in place of the `MessageSession` metadata property.|
+|`To`|`string`|The send to address.|
+
+# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
+
+These properties are members of the [Message](/dotnet/api/microsoft.azure.servicebus.message) class.
+
+|Property|Type|Description|
+|--|-|--|
+|`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
+|`CorrelationId`|`string`|The correlation ID.|
+|`DeliveryCount`|`Int32`|The number of deliveries.|
+|`ScheduledEnqueueTimeUtc`|`DateTime`|The scheduled enqueued time in UTC.|
+|`ExpiresAtUtc`|`DateTime`|The expiration time in UTC.|
+|`Label`|`string`|The application-specific label.|
+|`MessageId`|`string`|A user-defined value that Service Bus can use to identify duplicate messages, if enabled.|
+|`ReplyTo`|`string`|The reply to queue address.|
+|`To`|`string`|The send to address.|
+|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. |
+
+# [Functions 1.x](#tab/functionsv1/csharp-script)
+
+These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) and [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) classes.
+
+|Property|Type|Description|
+|--|-|--|
+|`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
+|`CorrelationId`|`string`|The correlation ID.|
+|`DeadLetterSource`|`string`|The dead letter source.|
+|`DeliveryCount`|`Int32`|The number of deliveries.|
+|`EnqueuedTimeUtc`|`DateTime`|The enqueued time in UTC.|
+|`ExpiresAtUtc`|`DateTime`|The expiration time in UTC.|
+|`Label`|`string`|The application-specific label.|
+|`MessageId`|`string`|A user-defined value that Service Bus can use to identify duplicate messages, if enabled.|
+|`MessageReceiver`|`MessageReceiver`|Service Bus message receiver. Can be used to abandon, complete, or deadletter the message.|
+|`MessageSession`|`MessageSession`|A message receiver specifically for session-enabled queues and topics.|
+|`ReplyTo`|`string`|The reply to queue address.|
+|`SequenceNumber`|`long`|The unique number assigned to a message by the Service Bus.|
+|`To`|`string`|The send to address.|
+|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. |
---

## Next steps

- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)
+[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
Title: Azure Service Bus bindings for Azure Functions description: Learn to send Azure Service Bus triggers and bindings in Azure Functions.-- ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 02/19/2020- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Service Bus bindings for Azure Functions
Azure Functions integrates with [Azure Service Bus](https://azure.microsoft.com/
| Run a function when a Service Bus queue or topic message is created | [Trigger](./functions-bindings-service-bus-trigger.md) |
| Send Azure Service Bus messages |[Output binding](./functions-bindings-service-bus-output.md) |
-## Add to your Functions app
-### Functions 2.x and higher
+## Install extension
-Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+The extension NuGet package you install depends on the C# mode you're using in your function app:
-| Language | Add by... | Remarks
-|-||-|
-| C# | Installing the [NuGet package], version 4.x | |
-| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension] is recommended to use with Visual Studio Code. |
-| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+# [In-process](#tab/in-process)
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus/
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[Update your extensions]: ./functions-bindings-register.md
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus).
+
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.ServiceBus).
+
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x, or a later version.
+++
+The functionality of the extension varies depending on the extension version:
+
+# [Extension 5.x+](#tab/extensionv5/in-process)
+
+Version 5.x of the Service Bus bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). This extension version also changes the types that you can bind to, replacing the types from `Microsoft.ServiceBus.Messaging` and `Microsoft.Azure.ServiceBus` with newer types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus).
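
With identity-based connections, the connection name becomes a shared prefix for several settings rather than pointing to a single connection string. As a sketch, a hypothetical connection named `ServiceBusConnection` might be configured in *local.settings.json* like this:

```json
{
  "IsEncrypted": false,
  "Values": {
    "ServiceBusConnection__fullyQualifiedNamespace": "<service-bus-namespace>.servicebus.windows.net"
  }
}
```

Here `<service-bus-namespace>` is a placeholder for your own namespace; see the linked identity-based connections guidance for the full set of supported properties.
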
+
+This extension version is available by installing the [NuGet package], version 5.x or later.
+
+# [Functions 2.x+](#tab/functionsv2/in-process)
+
+Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [NuGet package], using a version earlier than 5.x.
+
+# [Functions 1.x](#tab/functionsv1/in-process)
+
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+
+# [Extension 5.x+](#tab/extensionv5/isolated-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.ServiceBus), version 5.x.
+
+# [Functions 2.x+](#tab/functionsv2/isolated-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.ServiceBus), version 4.x.
+
+# [Functions 1.x](#tab/functionsv1/isolated-process)
+
+Functions version 1.x doesn't support isolated process.
-#### Service Bus extension 5.x and higher
+# [Extension 5.x+](#tab/extensionv5/csharp-script)
-A new version of the Service Bus bindings extension is now available. It introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). For .NET applications, the new extension version also changes the types that you can bind to, replacing the types from `Microsoft.ServiceBus.Messaging` and `Microsoft.Azure.ServiceBus` with newer types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus).
+Version 5.x of the Service Bus bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). This extension version also changes the types that you can bind to, replacing the types from `Microsoft.ServiceBus.Messaging` and `Microsoft.Azure.ServiceBus` with newer types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus).
-This extension version is available by installing the [NuGet package], version 5.x, or it can be added from the extension bundle v3 by adding the following in your `host.json` file:
+This extension version is available from the extension bundle v3 by adding the following lines in your `host.json` file:
```json {
This extension version is available by installing the [NuGet package], version 5
} ``` - To learn more, see [Update your extensions].
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus
-[Update your extensions]: ./functions-bindings-register.md
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+# [Functions 2.x+](#tab/functionsv2/csharp-script)
-### Functions 1.x
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1/csharp-script)
Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
-<a name="host-json"></a>
-## host.json settings
+## Install bundle
+
+The Service Bus binding is part of an [extension bundle], which is specified in your host.json project file. You may need to modify this bundle to change the version of the binding, or to add bundles if they aren't already installed. To learn more, see [extension bundle].
+# [Bundle v3.x](#tab/extensionv3)
-> [!NOTE]
-> For a reference of host.json in Functions 1.x, see [host.json reference for Azure Functions 1.x](functions-host-json-v1.md).
+Version 3.x of the extension bundle contains version 5.x of the Service Bus bindings extension, which introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
+
+You can add this version of the extension from the preview extension bundle v3 by adding or replacing the following code in your `host.json` file:
```json {
- "version": "2.0",
- "extensions": {
- "serviceBus": {
- "prefetchCount": 100,
- "messageHandlerOptions": {
- "autoComplete": true,
- "maxConcurrentCalls": 32,
- "maxAutoRenewDuration": "00:05:00"
- },
- "sessionHandlerOptions": {
- "autoComplete": false,
- "messageWaitTimeout": "00:00:30",
- "maxAutoRenewDuration": "00:55:00",
- "maxConcurrentSessions": 16
- },
- "batchOptions": {
- "maxMessageCount": 1000,
- "operationTimeout": "00:01:00",
- "autoComplete": true
- }
- }
- }
+ "version": "3.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.0, 4.0.0)"
+ }
} ```
-If you have `isSessionsEnabled` set to `true`, the `sessionHandlerOptions` is honored. If you have `isSessionsEnabled` set to `false`, the `messageHandlerOptions` is honored.
+To learn more, see [Update your extensions].
-|Property |Default | Description |
-||||
-|prefetchCount|0|Gets or sets the number of messages that the message receiver can simultaneously request.|
-|messageHandlerOptions.maxAutoRenewDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically.|
-|messageHandlerOptions.autoComplete|true|Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br><br>Setting to `false` is only supported in C#.<br><br>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, exceptions in the function results in the runtime calls `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
-|messageHandlerOptions.maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
-|sessionHandlerOptions.maxConcurrentSessions|2000|The maximum number of sessions that can be handled concurrently per scaled instance.|
-|batchOptions.maxMessageCount|1000| The maximum number of messages sent to the function when triggered. |
-|batchOptions.operationTimeout|00:01:00| A time span value expressed in `hh:mm:ss`. |
-|batchOptions.autoComplete|true| See the above description for `messageHandlerOptions.autoComplete`. |
+# [Bundle v2.x](#tab/extensionv2)
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functions1)
-### Additional settings for version 5.x+
+Functions 1.x apps automatically have a reference to the extension.
-The example host.json file below contains only the settings for version 5.0.0 and higher of the Service Bus extension.
+++
+<a name="host-json"></a>
+
+## host.json settings
+
+This section describes the configuration settings available for this binding, which depends on the runtime and extension version.
+
+# [Extension 5.x+](#tab/extensionv5)
```json {
The example host.json file below contains only the settings for version 5.0.0 an
} ```
-When using service bus extension version 5.x and higher, the following global configuration settings are supported in addition to the 2.x settings in `ServiceBusOptions`.
+When the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) is set to `true`, the `sessionHandlerOptions` settings are honored; when it's set to `false`, the `messageHandlerOptions` settings are honored.
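
As a sketch, a session-aware trigger (to which `sessionHandlerOptions` would apply) might look like the following; the queue and connection setting names are placeholders:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class SessionQueueSample
{
    // Sketch only: "mysessionqueue" and "ServiceBusConnection" are placeholders.
    [FunctionName("SessionQueueSample")]
    public static void Run(
        [ServiceBusTrigger("mysessionqueue",
            Connection = "ServiceBusConnection",
            IsSessionsEnabled = true)]
        string myQueueItem,
        ILogger log)
    {
        // Messages from the same session are delivered to this function in order.
        log.LogInformation("Session message: {body}", myQueueItem);
    }
}
```
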
|Property |Default | Description |
||||
-|prefetchCount|0|Gets or sets the number of messages that the message receiver can simultaneously request.|
-| transportType| amqpTcp | The protocol and transport that is used for communicating with Service Bus. Available options: `amqpTcp`, `amqpWebSockets`|
-| webProxy| n/a | The proxy to use for communicating with Service Bus over web sockets. A proxy cannot be used with the `amqpTcp` transport. |
-|autoCompleteMessages|true|Determines whether or not to automatically complete messages after successful execution of the function and should be used in place of the `autoComplete` configuration setting.|
-|maxAutoLockRenewalDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically. This setting only applies for functions that receive a single message at a time.|
-|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the should be initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting only applies for functions that receive a single message at a time.|
-|maxConcurrentSessions|8|The maximum number of sessions that can be handled concurrently per scaled instance. This setting only applies for functions that receive a single message at a time.|
-|maxMessages|1000|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|
-|sessionIdleTimeout|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session. This setting only applies for functions that receive a single message at a time.|
-|enableCrossEntityTransactions|false|Whether or not to enable transactions that span multiple entities on a Service Bus namespace.|
+|**mode**|`Exponential`|The approach to use for calculating retry delays. The default exponential mode retries attempts with a delay based on a back-off strategy where each attempt increases the wait duration before retrying. The `Fixed` mode retries attempts at fixed intervals with each delay having a consistent duration.|
+|**tryTimeout**|`00:01:00`|The maximum duration to wait for an operation per attempt.|
+|**delay**|`00:00:00.80`|The delay or back-off factor to apply between retry attempts.|
+|**maxDelay**|`00:01:00`|The maximum delay to allow between retry attempts.|
+|**maxRetries**|`3`|The maximum number of retry attempts before considering the associated operation to have failed.|
+|**prefetchCount**|`0`|Gets or sets the number of messages that the message receiver can simultaneously request.|
+| **transportType**| amqpTcp | The protocol and transport that is used for communicating with Service Bus. Available options: `amqpTcp`, `amqpWebSockets`|
+| **webProxy**| n/a | The proxy to use for communicating with Service Bus over web sockets. A proxy cannot be used with the `amqpTcp` transport. |
+|**autoCompleteMessages**|`true`|Determines whether or not to automatically complete messages after successful execution of the function and should be used in place of the `autoComplete` configuration setting.|
+|**maxAutoLockRenewalDuration**|`00:05:00`|The maximum duration within which the message lock will be renewed automatically. This setting only applies for functions that receive a single message at a time.|
+|**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting only applies for functions that receive a single message at a time.|
+|**maxConcurrentSessions**|`8`|The maximum number of sessions that can be handled concurrently per scaled instance. This setting only applies for functions that receive a single message at a time.|
+|**maxMessages**|`1000`|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|
+|**sessionIdleTimeout**|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session. This setting only applies for functions that receive a single message at a time.|
+|**enableCrossEntityTransactions**|`false`|Whether or not to enable transactions that span multiple entities on a Service Bus namespace.|
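As an illustration of the retry settings above (`mode`, `delay`, `maxDelay`, `maxRetries`), the following sketch computes the delay sequence they imply. This is a hypothetical helper, not the Service Bus SDK's implementation; the actual exponential back-off also applies jitter.

```python
from datetime import timedelta

def retry_delays(mode="Exponential", delay=timedelta(milliseconds=800),
                 max_delay=timedelta(minutes=1), max_retries=3):
    """Sketch of the delay sequence implied by the retry settings above."""
    delays = []
    for attempt in range(1, max_retries + 1):
        if mode == "Fixed":
            d = delay  # consistent duration between attempts
        else:
            # Exponential: each attempt doubles the wait before retrying
            d = delay * (2 ** (attempt - 1))
        delays.append(min(d, max_delay))  # never exceed maxDelay
    return delays

# With the defaults, the exponential delays are 0.8s, 1.6s, 3.2s.
print(retry_delays())
```

With `mode` set to `Fixed`, every entry in the sequence equals `delay`; with the default `Exponential` mode, each delay doubles until it is capped at `maxDelay`.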
+
+# [Functions 2.x+](#tab/functionsv2)
-### Retry settings
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "serviceBus": {
+ "prefetchCount": 100,
+ "messageHandlerOptions": {
+ "autoComplete": true,
+ "maxConcurrentCalls": 32,
+ "maxAutoRenewDuration": "00:05:00"
+ },
+ "sessionHandlerOptions": {
+ "autoComplete": false,
+ "messageWaitTimeout": "00:00:30",
+ "maxAutoRenewDuration": "00:55:00",
+ "maxConcurrentSessions": 16
+ },
+ "batchOptions": {
+ "maxMessageCount": 1000,
+ "operationTimeout": "00:01:00",
+ "autoComplete": true
+ }
+ }
+ }
+}
+```
-In addition to the above configuration properties when using version 5.x and higher of the service bus extension, you can also configure `RetryOptions` from within the `ServiceBusOptions`. These settings determine whether a failed operation should be retried, and, if so, the amount of time to wait between retry attempts. The options also control the amount of time allowed for receiving messages and other interactions with the Service Bus service.
+When you set the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) to `true`, the `sessionHandlerOptions` is honored. When you set the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) to `false`, the `messageHandlerOptions` is honored.
|Property |Default | Description |
|---------|--------|-------------|
-|mode|Exponential|The approach to use for calculating retry delays. The default exponential mode will retry attempts with a delay based on a back-off strategy where each attempt will increase the duration that it waits before retrying. The `Fixed` mode will retry attempts at fixed intervals with each delay having a consistent duration.|
-|tryTimeout|00:01:00|The maximum duration to wait for an operation per attempt.|
-|delay|00:00:00.80|The delay or back-off factor to apply between retry attempts.|
-|maxDelay|00:01:00|The maximum delay to allow between retry attempts|
-|maxRetries|3|The maximum number of retry attempts before considering the associated operation to have failed.|
+|**prefetchCount**|`0`|Gets or sets the number of messages that the message receiver can simultaneously request.|
+|**maxAutoRenewDuration**|`00:05:00`|The maximum duration within which the message lock will be renewed automatically.|
+|**autoComplete**|`true`|Whether the trigger should automatically call complete after processing, or if the function code manually calls complete.<br><br>Setting to `false` is only supported in C#.<br><br>When set to `true`, the trigger completes the message, session, or batch automatically when the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message, session, or batch. When an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, an exception in the function results in the runtime calling `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
+|**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
+|**maxConcurrentSessions**|`2000`|The maximum number of sessions that can be handled concurrently per scaled instance.|
+|**maxMessageCount**|`1000`| The maximum number of messages sent to the function when triggered. |
+|**operationTimeout**|`00:01:00`| A time span value expressed in `hh:mm:ss`. |
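The `autoComplete` behavior described above can be summarized as a small decision function. This is an illustrative sketch with hypothetical names, not the Functions runtime's actual code:

```python
def settle(auto_complete: bool, execution_succeeded: bool):
    """Which settlement action the runtime takes for a message,
    per the autoComplete semantics described above."""
    if not auto_complete:
        # Function code is responsible for calling complete/abandon/dead-letter;
        # if it never does, the lock expires and the message is re-queued
        # with DeliveryCount incremented.
        return None
    return "complete" if execution_succeeded else "abandon"

print(settle(True, True))   # complete
print(settle(True, False))  # abandon
```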
+
+# [Functions 1.x](#tab/functionsv1)
+
+For a reference of host.json in Functions 1.x, see [host.json reference for Azure Functions 1.x](functions-host-json-v1.md).
In addition to the above configuration properties when using version 5.x and hig
- [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md) - [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)+
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus/
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
Title: Azure Functions SignalR Service input binding description: Learn to return a SignalR service endpoint URL and access token in Azure Functions.- ms.devlang: csharp, java, javascript, python Previously updated : 02/20/2020-- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# SignalR Service input binding for Azure Functions
For information on setup and configuration details, see the [overview](functions
## Example
-# [C#](#tab/csharp)
++
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that acquires SignalR connection information using the input binding and returns it over HTTP.
public static SignalRConnectionInfo Negotiate(
} ```
+# [Isolated process](#tab/isolated-process)
+
+The following example shows a SignalR trigger that reads a message string from one hub using a SignalR trigger and writes it to a second hub using an output binding. The data required to connect to the output binding is obtained as a `MyConnectionInfo` object from an input binding defined using a `SignalRConnectionInfo` attribute.
++
+The `MyConnectionInfo` and `MyMessage` classes are defined as follows:
++ # [C# Script](#tab/csharp-script) The following example shows a SignalR connection info input binding in a *function.json* file and a [C# Script function](functions-reference-csharp.md) that uses the binding to return the connection information.
public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo c
} ```
-# [JavaScript](#tab/javascript)
-
-The following example shows a SignalR connection info input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding to return the connection information.
+
-Here's binding data in the *function.json* file:
+The following example shows a SignalR connection info input binding in a *function.json* file and a function that uses the binding to return the connection information.
-Example function.json:
+Here's binding data for the example in the *function.json* file:
```json {
Example function.json:
} ``` + Here's the JavaScript code: ```javascript
module.exports = async function (context, req, connectionInfo) {
}; ```
-# [Python](#tab/python)
+
+Complete PowerShell examples are pending.
The following example shows a SignalR connection info input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding to return the connection information.
-Here's binding data in the *function.json* file:
-
-Example function.json:
-
-```json
-{
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "chat",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "in"
-}
-```
- Here's the Python code: ```python
def main(req: func.HttpRequest, connectionInfoJson: str) -> func.HttpResponse:
) ```
-# [Java](#tab/java)
The following example shows a [Java function](functions-reference-java.md) that acquires SignalR connection information using the input binding and returns it over HTTP.
public SignalRConnectionInfo negotiate(
} ``` -+
+## Usage
+
+### Authenticated tokens
-## Authenticated tokens
+When the function is triggered by an authenticated client, you can add a user ID claim to the generated token. You can easily add authentication to a function app using [App Service Authentication](../app-service/overview-authentication-authorization.md).
-If the function is triggered by an authenticated client, you can add a user ID claim to the generated token. You can easily add authentication to a function app using [App Service Authentication](../app-service/overview-authentication-authorization.md).
+App Service authentication sets HTTP headers named `x-ms-client-principal-id` and `x-ms-client-principal-name` that contain the authenticated user's client principal ID and name, respectively.
-App Service Authentication sets HTTP headers named `x-ms-client-principal-id` and `x-ms-client-principal-name` that contain the authenticated user's client principal ID and name, respectively.
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
You can set the `UserId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
public static SignalRConnectionInfo Negotiate(
} ```
+# [Isolated process](#tab/isolated-process)
+
+Sample code not available for isolated process.
+ # [C# Script](#tab/csharp-script) You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo c
return connectionInfo; } ```++
-# [JavaScript](#tab/javascript)
+SignalR trigger isn't currently supported for Java.
+
You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
-Example function.json:
+Here's binding data in the *function.json* file:
```json {
Example function.json:
} ``` Here's the JavaScript code: ```javascript
module.exports = async function (context, req, connectionInfo) {
}; ```
-# [Python](#tab/python)
-
-You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
-
-Example function.json:
-
-```json
-{
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "chat",
- "userId": "{headers.x-ms-client-principal-id}",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "in"
-}
-```
+
+Complete PowerShell examples are pending.
Here's the Python code:
def main(req: func.HttpRequest, connectionInfo: str) -> func.HttpResponse:
) ```
-# [Java](#tab/java)
You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
public SignalRConnectionInfo negotiate(
return connectionInfo; } ```++
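The `{headers.x-ms-client-principal-id}` and `{headers.x-ms-client-principal-name}` binding expressions above resolve against the headers of the incoming HTTP request. Conceptually, that resolution looks like the following sketch (a hypothetical resolver for illustration, not the Functions runtime):

```python
def resolve_binding_expression(expr: str, request_headers: dict) -> str:
    """Hypothetical sketch of how a {headers.<name>} binding expression
    resolves against an incoming HTTP request's headers."""
    if expr.startswith("{headers.") and expr.endswith("}"):
        header_name = expr[len("{headers."):-1]
        # Missing headers resolve to an empty value in this sketch.
        return request_headers.get(header_name, "")
    raise ValueError("unsupported expression: " + expr)

headers = {"x-ms-client-principal-id": "user-123"}
print(resolve_binding_expression("{headers.x-ms-client-principal-id}", headers))
```

Because App Service authentication populates these headers, the resolved value becomes the `userId` claim in the generated access token.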
+## Attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a *function.json* configuration file.
+
+# [In-process](#tab/in-process)
+
+The following table explains the properties of the `SignalRConnectionInfo` attribute:
+
+| Attribute property |Description|
+||-|
+|**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated. |
+|**UserId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+
+# [Isolated process](#tab/isolated-process)
+
+The following table explains the properties of the `SignalRConnectionInfoInput` attribute:
+
+| Attribute property |Description|
+||-|
+|**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated. |
+|**UserId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+
+# [C# Script](#tab/csharp-script)
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||--|
+|**type**| Must be set to `signalRConnectionInfo`.|
+|**direction**| Must be set to `in`.|
+|**name**| Variable name used in function code for connection info object. |
+|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
+|**userId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+
+## Annotations
+
+The following table explains the supported settings for the `SignalRConnectionInfoInput` annotation.
+
+|Setting | Description|
+||--|
+|**name**| Variable name used in function code for connection info object. |
+|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
+|**userId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+
+## Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||--|
+|**type**| Must be set to `signalRConnectionInfo`.|
+|**direction**| Must be set to `in`.|
+|**name**| Variable name used in function code for connection info object. |
+|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
+|**userId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
++ ## Next steps - [Handle messages from SignalR Service (Trigger binding)](./functions-bindings-signalr-service-trigger.md)
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
Title: Azure Functions SignalR Service output binding description: Learn about the SignalR Service output binding for Azure Functions.- ms.devlang: csharp, java, javascript, python Previously updated : 02/20/2020-- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# SignalR Service output binding for Azure Functions
The output binding also allows you to manage groups.
For information on setup and configuration details, see the [overview](functions-bindings-signalr-service.md).
-## Broadcast to all clients
+## Example
-The following example shows a function that sends a message using the output binding to all connected clients. The *target* is the name of the method to be invoked on each client. The *Arguments* property is an array of zero or more objects to be passed to the client method.
++
+### Broadcast to all clients
+
+# [In-process](#tab/in-process)
-# [C#](#tab/csharp)
+The following example shows a function that sends a message using the output binding to all connected clients. The *target* is the name of the method to be invoked on each client. The *Arguments* property is an array of zero or more objects to be passed to the client method.
```cs [FunctionName("SendMessage")]
public static Task SendMessage(
} ```
+# [Isolated process](#tab/isolated-process)
+
+The following example shows a SignalR trigger that reads a message string from one hub using a SignalR trigger and writes it to a second hub using an output binding. The *target* is the name of the method to be invoked on each client.
++
+The `MyConnectionInfo` and `MyMessage` classes are defined as follows:
+++ # [C# Script](#tab/csharp-script) Here's binding data in the *function.json* file:
public static Task Run(
} ```
-# [JavaScript](#tab/javascript)
+++
+### Broadcast to all clients
Here's binding data in the *function.json* file:
Example function.json:
} ``` + Here's the JavaScript code: ```javascript
module.exports = async function (context, req) {
}; ```
-# [Python](#tab/python)
-
-Here's binding data in the *function.json* file:
-
-Example function.json:
-
-```json
-{
- "type": "signalR",
- "name": "outMessage",
- "hubName": "<hub_name>",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "out"
-}
-```
+
+Complete PowerShell examples are pending.
Here's the Python code:
def main(req: func.HttpRequest, outMessage: func.Out[str]) -> func.HttpResponse:
})) ```
-# [Java](#tab/java)
+
+### Broadcast to all clients
```java @FunctionName("sendMessage")
public SignalRMessage sendMessage(
} ``` -
-## Send to a user
+### Send to a user
You can send a message only to connections that have been authenticated to a user by setting the *user ID* in the SignalR message.
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
```cs [FunctionName("SendMessage")]
public static Task SendMessage(
} ```
+# [Isolated process](#tab/isolated-process)
+
+Not supported for isolated process.
+ # [C# Script](#tab/csharp-script) Example function.json:
Example function.json:
} ```
-Here's the C# Script code:
+Here's the C# script code:
```cs #r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
public static Task Run(
} ```
-# [JavaScript](#tab/javascript)
+++
+### Send to a user
+
+You can send a message only to connections that have been authenticated to a user by setting the *user ID* in the SignalR message.
Example function.json: ```json { "type": "signalR",
- "name": "signalRMessages",
+ "name": "outMessages",
"hubName": "<hub_name>", "connectionStringSetting": "<name of setting containing SignalR Service connection string>", "direction": "out" } ``` Here's the JavaScript code: ```javascript module.exports = async function (context, req) {
- context.bindings.signalRMessages = [{
+ context.bindings.outMessages = [{
// message will only be sent to this user ID "userId": "userId1", "target": "newMessage",
module.exports = async function (context, req) {
}; ```
-# [Python](#tab/python)
-
-Here's binding data in the *function.json* file:
-
-Example function.json:
-
-```json
-{
- "type": "signalR",
- "name": "outMessage",
- "hubName": "<hub_name>",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "out"
-}
-```
+
+Complete PowerShell examples are pending.
Here's the Python code: ```python
-def main(req: func.HttpRequest, outMessage: func.Out[str]) -> func.HttpResponse:
+def main(req: func.HttpRequest, outMessage: func.Out[str]) -> func.HttpResponse:
message = req.get_json() outMessage.set(json.dumps({ #message will only be sent to this user ID
def main(req: func.HttpRequest, outMessage: func.Out[str]) -> func.HttpResponse:
})) ```
-# [Java](#tab/java)
+### Send to a user
+
+You can send a message only to connections that have been authenticated to a user by setting the *user ID* in the SignalR message.
```java @FunctionName("sendMessage")
public SignalRMessage sendMessage(
} ``` --
-## Send to a group
+### Send to a group
You can send a message only to connections that have been added to a group by setting the *group name* in the SignalR message.
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
```cs [FunctionName("SendMessage")]
public static Task SendMessage(
}); } ```
+# [Isolated process](#tab/isolated-process)
+
+Example not available for isolated process.
# [C# Script](#tab/csharp-script)
public static Task Run(
} ```
-# [JavaScript](#tab/javascript)
++
+### Send to a group
+
+You can send a message only to connections that have been added to a group by setting the *group name* in the SignalR message.
Example function.json:
Example function.json:
} ``` Here's the JavaScript code: ```javascript
module.exports = async function (context, req) {
}; ```
-# [Python](#tab/python)
-
-Here's binding data in the *function.json* file:
-
-Example function.json:
-
-```json
-{
- "type": "signalR",
- "name": "outMessage",
- "hubName": "<hub_name>",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "out"
-}
-```
+
+Complete PowerShell examples are pending.
Here's the Python code:
def main(req: func.HttpRequest, outMessage: func.Out[str]) -> func.HttpResponse:
})) ```
-# [Java](#tab/java)
+
+### Send to a group
+
+You can send a message only to connections that have been added to a group by setting the *group name* in the SignalR message.
```java @FunctionName("sendMessage")
public SignalRMessage sendMessage(
} ``` --
-## Group management
-
-SignalR Service allows users to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage a user's group membership.
+### Group management
-# [C#](#tab/csharp)
-
-### Add user to a group
-
-The following example adds a user to a group.
+SignalR Service allows users to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage a user's group membership. The following example adds a user to a group.
+# [In-process](#tab/in-process)
```csharp [FunctionName("addToGroup")] public static Task AddToGroup(
public static Task AddToGroup(
} ```
-### Remove user from a group
- The following example removes a user from a group. ```csharp
public static Task RemoveFromGroup(
} ```
-> [!NOTE]
-> In order to get the `ClaimsPrincipal` correctly bound, you must have configured the authentication settings in Azure Functions.
+# [Isolated process](#tab/isolated-process)
-# [C# Script](#tab/csharp-script)
+Example not available for isolated process.
-### Add user to a group
+# [C# Script](#tab/csharp-script)
The following example adds a user to a group.
public static Task Run(
} ```
-### Remove user from a group
- The following example removes a user from a group. Example *function.json*
public static Task Run(
} ``` ++ > [!NOTE] > In order to get the `ClaimsPrincipal` correctly bound, you must have configured the authentication settings in Azure Functions.
-# [JavaScript](#tab/javascript)
-### Add user to a group
+### Group management
-The following example adds a user to a group.
+SignalR Service allows users to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage a user's group membership. The following example adds a user to a group.
-Example *function.json*
+Example *function.json* that defines the output binding:
```json {
Example *function.json*
} ```
-*index.js*
+
+The following example adds a user to a group.
```javascript module.exports = async function (context, req) {
module.exports = async function (context, req) {
}; ```
-### Remove user from a group
- The following example removes a user from a group.
-Example *function.json*
-
-```json
-{
- "type": "signalR",
- "name": "signalRGroupActions",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "hubName": "chat",
- "direction": "out"
-}
-```
-
-*index.js*
- ```javascript module.exports = async function (context, req) { context.bindings.signalRGroupActions = [{
module.exports = async function (context, req) {
}; ```
-# [Python](#tab/python)
-
-### Add user to a group
+
+Complete PowerShell examples are pending.
The following example adds a user to a group.
-Example *function.json*
-
-```json
-{
- "type": "signalR",
- "name": "action",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "hubName": "chat",
- "direction": "out"
-}
-```
-
-*\_\_init.py__*
- ```python def main(req: func.HttpRequest, action: func.Out[str]) -> func.HttpResponse: action.set(json.dumps({
def main(req: func.HttpRequest, action: func.Out[str]) -> func.HttpResponse:
})) ```
-### Remove user from a group
- The following example removes a user from a group.
-Example *function.json*
-
-```json
-{
- "type": "signalR",
- "name": "action",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "hubName": "chat",
- "direction": "out"
-}
-```
-
-*\_\_init.py__*
- ```python def main(req: func.HttpRequest, action: func.Out[str]) -> func.HttpResponse: action.set(json.dumps({
def main(req: func.HttpRequest, action: func.Out[str]) -> func.HttpResponse:
})) ```
-# [Java](#tab/java)
+
+### Group management
-### Add user to a group
+SignalR Service allows users to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage a user's group membership.
The following example adds a user to a group.
public SignalRGroupAction addToGroup(
} ```
-### Remove user from a group
- The following example removes a user from a group. ```java
public SignalRGroupAction removeFromGroup(
return action; } ```+
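Across languages, the group-management examples above send the same payload shape to the `SignalR` output binding: a `userId`, a `groupName`, and an `action` of either `add` or `remove`. A minimal sketch of building that payload, mirroring the Python examples above (helper name is hypothetical):

```python
import json

def group_action(user_id: str, group_name: str, action: str) -> str:
    """Build the group-action payload used by the SignalR output
    binding examples above (add/remove a user to/from a group)."""
    if action not in ("add", "remove"):
        raise ValueError("action must be 'add' or 'remove'")
    return json.dumps({
        "userId": user_id,
        "groupName": group_name,
        "action": action,
    })

print(group_action("userId1", "myGroup", "add"))
```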
+## Attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a *function.json* configuration file.
+
+# [In-process](#tab/in-process)
+
+The following table explains the properties of the `SignalR` output attribute.
+
+| Attribute property |Description|
+||-|
+|**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
+|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
++
+# [Isolated process](#tab/isolated-process)
+
+The following table explains the properties of the `SignalROutput` attribute.
+
+| Attribute property |Description|
+||-|
+|**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
+|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+
+# [C# Script](#tab/csharp-script)
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type**| Must be set to `signalR`.|
+|**direction**|Must be set to `out`.|
+|**name**| Variable name used in function code for connection info object. |
+|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
+|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
-## Configuration
+
+## Annotations
-### SignalRConnectionInfo
+The following table explains the supported settings for the `SignalROutput` annotation.
-The following table explains the binding configuration properties that you set in the *function.json* file and the `SignalRConnectionInfo` attribute.
+|Setting | Description|
+||--|
+|**name**| Variable name used in function code for connection info object. |
+|**hubName**|This value must be set to the name of the SignalR hub for which the connection information is generated.|
+|**connectionStringSetting**|The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+
+## Configuration
-|function.json property | Attribute property |Description|
-|||-|
-|**type**| n/a | Must be set to `signalRConnectionInfo`.|
-|**direction**| n/a | Must be set to `in`.|
-|**name**| n/a | Variable name used in function code for connection info object. |
-|**hubName**|**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
-|**userId**|**UserId**| Optional: The value of the user identifier claim to be set in the access key token. |
-|**connectionStringSetting**|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string (defaults to "AzureSignalRConnectionString") |
+The following table explains the binding configuration properties that you set in the *function.json* file.
-### SignalR
+|function.json property | Description|
+||-|
+|**type**| Must be set to `signalR`.|
+|**direction**|Must be set to `out`.|
+|**name**| Variable name used in function code for connection info object. |
+|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
+|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
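As an illustrative sketch, a *function.json* definition using these properties might look like the following (the hub name `chat` and variable name `signalRMessages` are assumptions for illustration):

```json
{
  "type": "signalR",
  "direction": "out",
  "name": "signalRMessages",
  "hubName": "chat",
  "connectionStringSetting": "AzureSignalRConnectionString"
}
```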
-The following table explains the binding configuration properties that you set in the *function.json* file and the `SignalR` attribute.
-|function.json property | Attribute property |Description|
-|||-|
-|**type**| n/a | Must be set to `signalR`.|
-|**direction**| n/a | Must be set to `out`.|
-|**name**| n/a | Variable name used in function code for connection info object. |
-|**hubName**|**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
-|**connectionStringSetting**|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string (defaults to "AzureSignalRConnectionString") |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
ms.devlang: csharp, javascript, python Previously updated : 05/11/2020 Last updated : 11/29/2021 -
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# SignalR Service trigger binding for Azure Functions
Use the *SignalR* trigger binding to respond to messages sent from Azure SignalR Service. When the function is triggered, messages passed to the function are parsed as a JSON object. In SignalR Service serverless mode, SignalR Service uses the [Upstream](../azure-signalr/concept-upstream.md) feature to send messages from clients to the function app, and the function app uses the SignalR Service trigger binding to handle these messages. The general architecture is shown below:

:::image type="content" source="media/functions-bindings-signalr-service/signalr-trigger.png" alt-text="SignalR Trigger Architecture":::

For information on setup and configuration details, see the [overview](functions-bindings-signalr-service.md).

## Example
-The following example shows a function that receives a message using the trigger binding and log the message.
++
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
-SignalR Service trigger binding for C# has two programming models. Class based model and traditional model. Class based model can provide a consistent SignalR server-side programming experience. And traditional model provides more flexibility and similar with other function bindings.
+The SignalR Service trigger binding for C# has two programming models: the class-based model and the traditional model. The class-based model provides a consistent SignalR server-side programming experience, while the traditional model provides more flexibility and is similar to other function bindings.
-### With Class based model
+### With class-based model
See [Class based model](../azure-signalr/signalr-concept-serverless-development-config.md#class-based-model) for details.
public class SignalRTestHub : ServerlessHub
} ```
-### With Traditional model
+### With a traditional model
The traditional model follows the conventions of Azure Functions developed in C#. If you're not familiar with them, see the [C# class library documentation](./functions-dotnet-class-library.md).
public static async Task Run([SignalRTrigger("SignalRTest", "messages", "SendMes
} ```
-#### Use attribute `[SignalRParameter]` to simplify `ParameterNames`
-
-As it's bit cumbersome to use `ParameterNames`, `SignalRParameter` is provided to achieve the same purpose.
+Because it can be cumbersome to use `ParameterNames` in the trigger, the following example shows how to use the `SignalRParameter` attribute to define the `message` parameter.
```cs [FunctionName("SignalRTest")]
public static async Task Run([SignalRTrigger("SignalRTest", "messages", "SendMes
} ```
-# [C# Script](#tab/csharp-script)
-Here's binding data in the *function.json* file:
+# [Isolated process](#tab/isolated-process)
+
+The following example shows a SignalR trigger that reads a message string from one hub using a SignalR trigger and writes it to a second hub using an output binding. The data required to connect to the output binding is obtained as a `MyConnectionInfo` object from an input binding defined using a `SignalRConnectionInfo` attribute.
++
+The `MyConnectionInfo` and `MyMessage` classes are defined as follows:
++
+# [C# Script](#tab/csharp-script)
-Example function.json:
+Here's example binding data in the *function.json* file:
```json {
Example function.json:
} ```
-Here's the C# Script code:
+And, here's the code:
```cs #r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
public static void Run(InvocationContext invocation, string message, ILogger log
} ```
-# [JavaScript](#tab/javascript)
+
-Here's binding data in the *function.json* file:
-Example function.json:
+SignalR trigger isn't currently supported for Java.
+
+
+Here's binding data in the *function.json* file:
```json {
Example function.json:
} ``` + Here's the JavaScript code: ```javascript
module.exports = async function (context, invocation) {
context.log(`Receive ${context.bindingData.message} from ${invocation.ConnectionId}.`) }; ```-
-# [Python](#tab/python)
-
-Here's binding data in the *function.json* file:
-
-Example function.json:
-
-```json
-{
- "type": "signalRTrigger",
- "name": "invocation",
- "hubName": "SignalRTest",
- "category": "messages",
- "event": "SendMessage",
- "parameterNames": [
- "message"
- ],
- "direction": "in"
-}
-```
+
+Complete PowerShell examples are pending.
Here's the Python code:
def main(invocation) -> None:
invocation_json = json.loads(invocation) logging.info("Receive {0} from {1}".format(invocation_json['Arguments'][0], invocation_json['ConnectionId'])) ```
+
+
+## Attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `SignalRTrigger` attribute to define the function. C# script instead uses a function.json configuration file.
+
+# [In-process](#tab/in-process)
+
+The following table explains the properties of the `SignalRTrigger` attribute.
+
+| Attribute property |Description|
+||-|
+|**HubName**| This value must be set to the name of the SignalR hub for the function to be triggered.|
+|**Category**| This value must be set as the category of messages for the function to be triggered. The category can be one of the following values: <ul><li>**connections**: Including *connected* and *disconnected* events</li><li>**messages**: Including all other events except those in *connections* category</li></ul> |
+|**Event**| This value must be set as the event of messages for the function to be triggered. For the *messages* category, the event is the *target* in the [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding) that clients send. For the *connections* category, only *connected* and *disconnected* are used. |
+|**ParameterNames**| (Optional) A list of names that binds to the parameters. |
+|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+
+# [Isolated process](#tab/isolated-process)
+
+The following table explains the properties of the `SignalRTrigger` attribute.
+
+| Attribute property |Description|
+||-|
+|**HubName**| This value must be set to the name of the SignalR hub for the function to be triggered.|
+|**Category**| This value must be set as the category of messages for the function to be triggered. The category can be one of the following values: <ul><li>**connections**: Including *connected* and *disconnected* events</li><li>**messages**: Including all other events except those in *connections* category</li></ul> |
+|**Event**| This value must be set as the event of messages for the function to be triggered. For the *messages* category, the event is the *target* in the [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding) that clients send. For the *connections* category, only *connected* and *disconnected* are used. |
+|**ParameterNames**| (Optional) A list of names that binds to the parameters. |
+|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+
+# [C# script](#tab/csharp-script)
+
+C# script uses a function.json file for configuration instead of attributes.
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property |Description|
+||--|
+|**type**| Must be set to `SignalRTrigger`.|
+|**direction**| Must be set to `in`.|
+|**name**| Variable name used in function code for trigger invocation context object. |
+|**hubName**| This value must be set to the name of the SignalR hub for the function to be triggered.|
+|**category**| This value must be set as the category of messages for the function to be triggered. The category can be one of the following values: <ul><li>**connections**: Including *connected* and *disconnected* events</li><li>**messages**: Including all other events except those in *connections* category</li></ul> |
+|**event**| This value must be set as the event of messages for the function to be triggered. For the *messages* category, the event is the *target* in the [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding) that clients send. For the *connections* category, only *connected* and *disconnected* are used. |
+|**parameterNames**| (Optional) A list of names that binds to the parameters. |
+|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+
+## Annotations
+
+There isn't currently a supported Java annotation for a SignalR trigger.
+
## Configuration
-### SignalRTrigger
+The following table explains the binding configuration properties that you set in the *function.json* file.
-The following table explains the binding configuration properties that you set in the *function.json* file and the `SignalRTrigger` attribute.
+|function.json property |Description|
+||--|
+|**type**| Must be set to `SignalRTrigger`.|
+|**direction**| Must be set to `in`.|
+|**name**| Variable name used in function code for trigger invocation context object. |
+|**hubName**| This value must be set to the name of the SignalR hub for the function to be triggered.|
+|**category**| This value must be set as the category of messages for the function to be triggered. The category can be one of the following values: <ul><li>**connections**: Including *connected* and *disconnected* events</li><li>**messages**: Including all other events except those in *connections* category</li></ul> |
+|**event**| This value must be set as the event of messages for the function to be triggered. For the *messages* category, the event is the *target* in the [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding) that clients send. For the *connections* category, only *connected* and *disconnected* are used. |
+|**parameterNames**| (Optional) A list of names that binds to the parameters. |
+|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
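Pulling these properties together, a trigger definition in *function.json* might look like the following sketch (the values mirror the earlier examples in this article):

```json
{
  "type": "signalRTrigger",
  "direction": "in",
  "name": "invocation",
  "hubName": "SignalRTest",
  "category": "messages",
  "event": "SendMessage",
  "parameterNames": [
    "message"
  ],
  "connectionStringSetting": "AzureSignalRConnectionString"
}
```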
-|function.json property | Attribute property |Description|
-|||-|
-|**type**| n/a | Must be set to `SignalRTrigger`.|
-|**direction**| n/a | Must be set to `in`.|
-|**name**| n/a | Variable name used in function code for trigger invocation context object. |
-|**hubName**|**HubName**| This value must be set to the name of the SignalR hub for the function to be triggered.|
-|**category**|**Category**| This value must be set as the category of messages for the function to be triggered. The category can be one of the following values: <ul><li>**connections**: Including *connected* and *disconnected* events</li><li>**messages**: Including all other events except those in *connections* category</li></ul> |
-|**event**|**Event**| This value must be set as the event of messages for the function to be triggered. For *messages* category, event is the *target* in [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding) that clients send. For *connections* category, only *connected* and *disconnected* is used. |
-|**parameterNames**|**ParameterNames**| (Optional) A list of names that binds to the parameters. |
-|**connectionStringSetting**|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string (defaults to "AzureSignalRConnectionString") |
-## Payload
+See the [Example section](#example) for complete examples.
+
+## Usage
+
+### Payloads
The trigger input type is declared as either `InvocationContext` or a custom type. If you choose `InvocationContext` you get full access to the request content. For a custom type, the runtime tries to parse the JSON request body to set the object properties. ### InvocationContext
-InvocationContext contains all the content in the message send from SignalR Service.
+`InvocationContext` contains all the content in the message sent from SignalR Service, which includes the following properties:
-|Property in InvocationContext | Description|
+|Property | Description|
||| |Arguments| Available for *messages* category. Contains *arguments* in [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding)| |Error| Available for *disconnected* event. It can be Empty if the connection closed with no error, or it contains the error messages.|
InvocationContext contains all the content in the message send from SignalR Serv
|Query| The query of the request when clients connect to the service.| |Claims| The claims of the client.|
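To illustrate the shape of this object, the following sketch parses a hypothetical serialized invocation payload the way the earlier Python trigger example does. All field values here are invented for illustration; only the property names come from the table above.

```python
import json

# Hypothetical serialized payload for the *messages* category; field names
# mirror the InvocationContext properties above, values are made up.
raw = json.dumps({
    "Arguments": ["Hello, world"],
    "ConnectionId": "conn-123",
    "Hub": "SignalRTest",
    "Category": "messages",
    "Event": "SendMessage",
})

# Parse it the way the earlier Python trigger example does.
invocation = json.loads(raw)
message = invocation["Arguments"][0]
connection_id = invocation["ConnectionId"]
print("Receive {0} from {1}.".format(message, connection_id))
```

For the *messages* category, `Arguments` holds the positional arguments of the client's invocation; for *connections* events, `Error` may carry a close reason instead.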
-## Using `ParameterNames`
+### Using `ParameterNames`
-The property `ParameterNames` in `SignalRTrigger` allows you to bind arguments of invocation messages to the parameters of functions. The name you defined can be used as part of [binding expressions](../azure-functions/functions-bindings-expressions-patterns.md) in other binding or as parameters in your code. That gives you a more convenient way to access arguments of `InvocationContext`.
+The property `ParameterNames` in `SignalRTrigger` lets you bind arguments of invocation messages to the parameters of functions. You can use the name you defined as part of [binding expressions](../azure-functions/functions-bindings-expressions-patterns.md) in other binding or as parameters in your code. That gives you a more convenient way to access arguments of `InvocationContext`.
Say you have a JavaScript SignalR client trying to invoke method `broadcast` in Azure Function with two arguments `message1`, `message2`.
Say you have a JavaScript SignalR client trying to invoke method `broadcast` in
await connection.invoke("broadcast", message1, message2); ```
-After you set `parameterNames`, the name you defined will respectively correspond to the arguments sent on the client side.
+After you set `parameterNames`, the names you defined correspond to the arguments sent on the client side.
```cs [SignalRTrigger(parameterNames: new string[] {"arg1", "arg2"})]
After you set `parameterNames`, the name you defined will respectively correspon
Then `arg1` contains the content of `message1`, and `arg2` contains the content of `message2`.
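Conceptually, the runtime pairs the client's positional arguments with the configured names in order. A minimal Python sketch of that mapping (the names and string values are illustrative, not part of any SDK):

```python
# Names configured in parameterNames, in order.
parameter_names = ["arg1", "arg2"]

# Positional arguments as sent by
# connection.invoke("broadcast", message1, message2) on the client.
arguments = ["contents of message1", "contents of message2"]

# Each configured name is paired with the argument at the same
# position; the order, not the names, determines the binding.
bound = dict(zip(parameter_names, arguments))
print(bound["arg1"])  # contents of message1
print(bound["arg2"])  # contents of message2
```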
+### `ParameterNames` considerations
-### Remarks
-
-For the parameter binding, the order matters. If you are using `ParameterNames`, the order in `ParameterNames` matches the order of the arguments you invoke in the client. If you are using attribute `[SignalRParameter]` in C#, the order of arguments in Azure Function methods matches the order of arguments in clients.
+For the parameter binding, the order matters. If you're using `ParameterNames`, the order in `ParameterNames` matches the order of the arguments you invoke in the client. If you're using attribute `[SignalRParameter]` in C#, the order of arguments in Azure Function methods matches the order of arguments in clients.
`ParameterNames` and the `[SignalRParameter]` attribute **cannot** be used at the same time; doing so results in an exception.
-## SignalR Service integration
+### SignalR Service integration
SignalR Service needs a URL to access Function App when you're using SignalR Service trigger binding. The URL should be configured in **Upstream Settings** on the SignalR Service side.
The `Function_App_URL` can be found on Function App's Overview page and The `API
If you want to use more than one Function App together with one SignalR Service, upstream can also support complex routing rules. Find more details at [Upstream settings](../azure-signalr/concept-upstream.md).
-## Step by step sample
+### Step by step sample
You can follow the sample in GitHub to deploy a chat room on Function App with SignalR Service trigger binding and upstream feature: [Bidirectional chat room sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat)
azure-functions Functions Bindings Signalr Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service.md
Title: Azure Functions SignalR Service bindings description: Understand how to use SignalR Service bindings with Azure Functions.-- Previously updated : 02/28/2019- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# SignalR Service bindings for Azure Functions
-This set of articles explains how to authenticate and send real-time messages to clients connected to [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) by using SignalR Service bindings in Azure Functions. Azure Functions supports input and output bindings for SignalR Service.
+This set of articles explains how to authenticate and send real-time messages to clients connected to [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) by using SignalR Service bindings in Azure Functions. Azure Functions runtime version 2.x and higher supports input and output bindings for SignalR Service.
| Action | Type | |||
This set of articles explains how to authenticate and send real-time messages to
| Return the service endpoint URL and access token | [Input binding](./functions-bindings-signalr-service-input.md) | | Send SignalR Service messages |[Output binding](./functions-bindings-signalr-service-output.md) |
-## Add to your Functions app
-### Functions 2.x and higher
+## Install extension
-Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+The extension NuGet package you install depends on the C# mode you're using in your function app:
-| Language | Add by... | Remarks
-|-||-|
-| C# | Installing the [NuGet package], version 3.x | |
-| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension] is recommended to use with Visual Studio Code. |
-| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+# [In-process](#tab/in-process)
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SignalRService
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[Update your extensions]: ./functions-bindings-register.md
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-For details on how to configure and use SignalR Service and Azure Functions together, refer to [Azure Functions development and configuration with Azure SignalR Service](../azure-signalr/signalr-concept-serverless-development-config.md).
+Add the extension to your project by installing this [NuGet package].
+
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SignalRService/).
+
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x, or a later version.
+++++
+## Install bundle
-### Annotations library (Java only)
+The SignalR Service extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets version 3.x or later, you should already have this bundle installed. To learn more, see [extension bundle].
++
+## Add dependency
To use the SignalR Service annotations in Java functions, you need to add a dependency to the *azure-functions-java-library-signalr* artifact (version 1.0 or higher) to your *pom.xml* file.
To use the SignalR Service annotations in Java functions, you need to add a depe
<version>1.0.0</version>
</dependency>
```

## Connection string settings

Add the `AzureSignalRConnectionString` key to the _host.json_ file that points to the application setting with your connection string. For local development, this value may exist in the _local.settings.json_ file.
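For local development, a minimal _local.settings.json_ carrying this setting might look like the following sketch (the connection string value is a placeholder, and `AzureWebJobsStorage` is shown only as a typical companion setting):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureSignalRConnectionString": "<your-signalr-connection-string>"
  }
}
```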
+For details on how to configure and use SignalR Service and Azure Functions together, refer to [Azure Functions development and configuration with Azure SignalR Service](../azure-signalr/signalr-concept-serverless-development-config.md).
+ ## Next steps - [Handle messages from SignalR Service (Trigger binding)](./functions-bindings-signalr-service-trigger.md) - [Return the service endpoint URL and access token (Input binding)](./functions-bindings-signalr-service-input.md)-- [Send SignalR Service messages (Output binding)](./functions-bindings-signalr-service-output.md)
+- [Send SignalR Service messages (Output binding)](./functions-bindings-signalr-service-output.md)
+
+[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SignalRService
+[core tools]: ./functions-run-local.md
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[Update your extensions]: ./functions-bindings-register.md
+[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
Title: Azure Blob storage input binding for Azure Functions description: Learn how to provide Azure Blob storage input binding data to an Azure Function.- Previously updated : 02/13/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Blob storage input binding for Azure Functions
For information on setup and configuration details, see the [overview](./functio
## Example
-# [C#](#tab/csharp)
-The following example is a [C# function](functions-dotnet-class-library.md) that uses a queue trigger and an input blob binding. The queue message contains the name of the blob, and the function logs the size of the blob.
+
+# [In-process](#tab/in-process)
+The following example is a [C# function](functions-dotnet-class-library.md) that uses a queue trigger and an input blob binding. The queue message contains the name of the blob, and the function logs the size of the blob.

```csharp
[FunctionName("BlobInput")]
public static void Run(
    [QueueTrigger("myqueue-items")] string myQueueItem,
    [Blob("samples-workitems/{queueTrigger}", FileAccess.Read)] Stream myBlob,
    ILogger log)
{
    log.LogInformation($"BlobInput processed blob\n Name:{myQueueItem} \n Size: {myBlob.Length} bytes");
}
```
-# [C# Script](#tab/csharp-script)
+# [Isolated process](#tab/isolated-process)
-<!--Same example for input and output. -->
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
++
+# [C# Script](#tab/csharp-script)
The following example shows blob input and output bindings in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
public static void Run(string myQueueItem, string myInputBlob, out string myOutp
myOutputBlob = myInputBlob; } ```+
-# [Java](#tab/java)
This section contains the following examples:
This section contains the following examples:
In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@BlobInput` annotation on parameters whose value would come from a blob. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
-# [JavaScript](#tab/javascript)
-
-<!--Same example for input and output. -->
The following example shows blob input and output bindings in a *function.json* file and [JavaScript code](functions-reference-node.md) that uses the bindings. The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
module.exports = async function(context) {
}; ```
-# [PowerShell](#tab/powershell)
-The following example shows a blob input binding, defined in the _function.json_ file, which makes the incoming blob data available to the [PowerShell](functions-reference-powershell.md) function.
+The following example shows a blob input binding, defined in the _function.json_ file, which makes the incoming blob data available to the [PowerShell](functions-reference-powershell.md) function.
Here's the json configuration:
param([byte[]] $InputBlob, $TriggerMetadata)
Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($InputBlob.Length) bytes" ```
-# [Python](#tab/python)
-
-<!--Same example for input and output. -->
The following example shows blob input and output bindings in a *function.json* file and [Python code](functions-reference-python.md) that uses the bindings. The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
return inputblob ``` -
+## Attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
-## Attributes and annotations
+# [In-process](#tab/in-process)
-# [C#](#tab/csharp)
+In [C# class libraries](functions-dotnet-class-library.md), use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute), which takes the following parameters:
-In [C# class libraries](functions-dotnet-class-library.md), use the [BlobAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Extensions.Storage/Blobs/BlobAttribute.cs).
+|Parameter | Description|
+||-|
+|**BlobPath** | The path to the blob.|
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
+|**Access** | Indicates whether you will be reading or writing.|
-The attribute's constructor takes the path to the blob and a `FileAccess` parameter indicating read or write, as shown in the following example:
+The following example shows how the attribute's constructor takes the path to the blob and a `FileAccess` parameter indicating read for the input binding:
```csharp [FunctionName("BlobInput")]
public static void Run(
```
-You can set the `Connection` property to specify the storage account to use, as shown in the following example:
-
-```csharp
-[FunctionName("BlobInput")]
-public static void Run(
- [QueueTrigger("myqueue-items")] string myQueueItem,
- [Blob("samples-workitems/{queueTrigger}", FileAccess.Read, Connection = "StorageConnectionAppSetting")] Stream myBlob,
- ILogger log)
-{
- log.LogInformation($"BlobInput processed blob\n Name:{myQueueItem} \n Size: {myBlob.Length} bytes");
-}
-```
-
-You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see [Trigger - attributes and annotations](./functions-bindings-storage-blob-trigger.md#attributes-and-annotations).
-# [C# Script](#tab/csharp-script)
+# [Isolated process](#tab/isolated-process)
-Attributes are not supported by C# Script.
+Isolated process defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
-# [Java](#tab/java)
+|Parameter | Description|
+||-|
+|**BlobPath** | The path to the blob.|
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-The `@BlobInput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [input example](#example) for details.
+# [C# script](#tab/csharp-script)
-# [JavaScript](#tab/javascript)
+C# script uses a function.json file for configuration instead of attributes.
-Attributes are not supported by JavaScript.
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-# [PowerShell](#tab/powershell)
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blob`. |
+|**direction** | Must be set to `in`. Exceptions are noted in the [usage](#usage) section. |
+|**name** | The name of the variable that represents the blob in function code.|
+|**path** | The path to the blob. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
+|**dataType**| For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
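
As a sketch, a *function.json* input binding built from these properties could look like the following (the container path and app setting name are placeholders):

```json
{
  "bindings": [
    {
      "type": "blob",
      "direction": "in",
      "name": "myInputBlob",
      "path": "samples-workitems/{queueTrigger}",
      "connection": "MyStorageConnectionAppSetting"
    }
  ]
}
```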
-Attributes are not supported by PowerShell.
+
-# [Python](#tab/python)
-Attributes are not supported by Python.
+## Annotations
-
+The `@BlobInput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [input example](#example) for details.
## Configuration
-The following table explains the binding configuration properties that you set in the *function.json* file and the `Blob` attribute.
+The following table explains the binding configuration properties that you set in the *function.json* file.
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to `blob`. |
-|**direction** | n/a | Must be set to `in`. Exceptions are noted in the [usage](#usage) section. |
-|**name** | n/a | The name of the variable that represents the blob in function code.|
-|**path** |**BlobPath** | The path to the blob. This property supports [binding expressions](./functions-bindings-expressions-patterns.md). |
-|**connection** |**Connection**| The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-|**dataType**| n/a | For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
-|n/a | **Access** | Indicates whether you will be reading or writing. |
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blob`. |
+|**direction** | Must be set to `in`. Exceptions are noted in the [usage](#usage) section. |
+|**name** | The name of the variable that represents the blob in function code.|
+|**path** | The path to the blob. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
+|**dataType**| For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
+See the [Example section](#example) for complete examples.
## Usage
-# [C#](#tab/csharp)
+The usage of the Blob input binding depends on the extension package version and the C# modality used in your function app, which can be one of the following:
-# [C# Script](#tab/csharp-script)
--
-# [Java](#tab/java)
The `@BlobInput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [input example](#example) for details.-
-# [JavaScript](#tab/javascript)
- Access blob data using `context.bindings.<NAME>` where `<NAME>` matches the value defined in *function.json*.-
-# [PowerShell](#tab/powershell)
- Access the blob data via a parameter that matches the name designated by the binding's name parameter in the _function.json_ file.-
-# [Python](#tab/python)
- Access blob data via the parameter typed as [InputStream](/python/api/azure-functions/azure.functions.inputstream). Refer to the [input example](#example) for details. - ## Next steps
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
Title: Azure Blob storage output binding for Azure Functions description: Learn how to provide Azure Blob storage output binding data to an Azure Function.- Previously updated : 02/13/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Blob storage output binding for Azure Functions
For information on setup and configuration details, see the [overview](./functio
## Example
-# [C#](#tab/csharp)
-The following example is a [C# function](functions-dotnet-class-library.md) that uses a blob trigger and two output blob bindings. The function is triggered by the creation of an image blob in the *sample-images* container. It creates small and medium size copies of the image blob.
+
+# [In-process](#tab/in-process)
+
+The following example is a [C# function](functions-dotnet-class-library.md) that runs in-process and uses a blob trigger and two output blob bindings. The function is triggered by the creation of an image blob in the *sample-images* container. It creates small and medium size copies of the image blob.
```csharp using System.Collections.Generic;
public class ResizeImages
} ```
-# [C# Script](#tab/csharp-script)
+# [Isolated process](#tab/isolated-process)
-<!--Same example for input and output. -->
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
++
+# [C# Script](#tab/csharp-script)
The following example shows blob input and output bindings in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
public static void Run(string myQueueItem, string myInputBlob, out string myOutp
} ```
-# [Java](#tab/java)
++ This section contains the following examples:
-* [HTTP trigger, using OutputBinding](#http-trigger-using-outputbinding-java)
-* [Queue trigger, using function return value](#queue-trigger-using-function-return-value-java)
+- [HTTP trigger, using OutputBinding](#http-trigger-using-outputbinding-java)
+- [Queue trigger, using function return value](#queue-trigger-using-function-return-value-java)
#### HTTP trigger, using OutputBinding (Java)
This section contains the following examples:
In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@BlobOutput` annotation on function parameters whose value would be written to an object in blob storage. The parameter type should be `OutputBinding<T>`, where T is any native Java type or a POJO.
-# [JavaScript](#tab/javascript)
<!--Same example for input and output. -->
module.exports = async function(context) {
}; ```
-# [PowerShell](#tab/powershell)
The following example demonstrates how to create a copy of an incoming blob as the output from a [PowerShell function](functions-reference-powershell.md).
Write-Host "PowerShell Blob trigger function Processed blob Name: $($TriggerMeta
Push-OutputBinding -Name myOutputBlob -Value $myInputBlob ```
-# [Python](#tab/python)
+ <!--Same example for input and output. -->
def main(queuemsg: func.QueueMessage, inputblob: bytes, outputblob: func.Out[byt
-## Attributes and annotations
+## Attributes
-# [C#](#tab/csharp)
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
-In [C# class libraries](functions-dotnet-class-library.md), use the [BlobAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Extensions.Storage/Blobs/BlobAttribute.cs).
+# [In-process](#tab/in-process)
-The attribute's constructor takes the path to the blob and a `FileAccess` parameter indicating read or write, as shown in the following example:
+The [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute's constructor takes the following parameters:
-```csharp
-[FunctionName("ResizeImage")]
-public static void Run(
- [BlobTrigger("sample-images/{name}")] Stream image,
- [Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall)
-{
- ...
-}
-```
+|Parameter | Description|
+||-|
+|**BlobPath** | The path to the blob.|
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
+|**Access** | Indicates whether you will be reading or writing.|
-You can set the `Connection` property to specify the storage account to use, as shown in the following example:
+The following example sets the path to the blob and a `FileAccess` parameter indicating write for an output binding:
```csharp [FunctionName("ResizeImage")] public static void Run( [BlobTrigger("sample-images/{name}")] Stream image,
- [Blob("sample-images-md/{name}", FileAccess.Write, Connection = "StorageConnectionAppSetting")] Stream imageSmall)
+ [Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall)
{ ... } ```
-# [C# Script](#tab/csharp-script)
-Attributes are not supported by C# Script.
+# [Isolated process](#tab/isolated-process)
-# [Java](#tab/java)
+The `BlobOutputAttribute` constructor takes the following parameters:
-The `@BlobOutput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [output example](#example) for details.
+|Parameter | Description|
+||-|
+|**BlobPath** | The path to the blob.|
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
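
A minimal isolated-process sketch (the container names and function name are illustrative placeholders) might apply the attribute to the return value, which is then written to the output blob:

```csharp
using Microsoft.Azure.Functions.Worker;

public static class BlobOutputExample
{
    // Hypothetical sketch: the returned string is written to a new blob in
    // the test-samples-output container, named after the triggering blob.
    [Function("BlobOutputExample")]
    [BlobOutput("test-samples-output/{name}-output.txt")]
    public static string Run(
        [BlobTrigger("test-samples-trigger/{name}")] string myTriggerItem)
    {
        return $"Copy of: {myTriggerItem}";
    }
}
```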
-# [JavaScript](#tab/javascript)
-Attributes are not supported by JavaScript.
+# [C# script](#tab/csharp-script)
-# [PowerShell](#tab/powershell)
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-Attributes are not supported by PowerShell.
-
-# [Python](#tab/python)
-
-Attributes are not supported by Python.
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blob`. |
+|**direction** | Must be set to `out` for an output binding. Exceptions are noted in the [usage](#usage) section. |
+|**name** | The name of the variable that represents the blob in function code.|
+|**path** | The path to the blob. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
+|**dataType**| For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
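
As a sketch, a *function.json* output binding built from these properties could look like the following (the container path and app setting name are placeholders):

```json
{
  "bindings": [
    {
      "type": "blob",
      "direction": "out",
      "name": "myOutputBlob",
      "path": "samples-workitems/{queueTrigger}-Copy",
      "connection": "MyStorageConnectionAppSetting"
    }
  ]
}
```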
-For a complete example, see [Output example](#example).
-You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see [Trigger - attributes](./functions-bindings-storage-blob-trigger.md#attributes-and-annotations).
+## Annotations
+The `@BlobOutput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [output example](#example) for details.
## Configuration
-The following table explains the binding configuration properties that you set in the *function.json* file and the `Blob` attribute.
+The following table explains the binding configuration properties that you set in the *function.json* file.
|function.json property | Description| ||-|
-|**type** | n/a | Must be set to `blob`. |
-|**direction** | n/a | Must be set to `out` for an output binding. Exceptions are noted in the [usage](#usage) section. |
-|**name** | n/a | The name of the variable that represents the blob in function code. Set to `$return` to reference the function return value.|
-|**path** |**BlobPath** | The path to the blob container. |
-|**connection** |**Connection**| The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-|**Access** | n/a | Indicates whether you will be reading or writing. |
+|**type** | Must be set to `blob`. |
+|**direction** | Must be set to `out` for an output binding. Exceptions are noted in the [usage](#usage) section. |
+|**name** | The name of the variable that represents the blob in function code. Set to `$return` to reference the function return value.|
+|**path** | The path to the blob container. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
+See the [Example section](#example) for complete examples.
## Usage
-# [C#](#tab/csharp)
+The usage of the Blob output binding depends on the extension package version and the C# modality used in your function app, which can be one of the following:
-
-# [C# Script](#tab/csharp-script)
-# [Java](#tab/java)
+<!--Any of the below pivots can be combined if the usage info is identical.-->
The `@BlobOutput` attribute gives you access to the blob that triggered the function. If you use a byte array with the attribute, set `dataType` to `binary`. Refer to the [output example](#example) for details.
-# [JavaScript](#tab/javascript)
-
+
Access the blob data using `context.bindings.<BINDING_NAME>`, where the binding name is defined in the _function.json_ file.
-# [PowerShell](#tab/powershell)
 Access the blob data via a parameter that matches the name designated by the binding's name parameter in the _function.json_ file.
-# [Python](#tab/python)
You can declare function parameters as the following types to write out to blob storage:
-* Strings as `func.Out[str]`
-* Streams as `func.Out[func.InputStream]`
+- Strings as `func.Out[str]`
+- Streams as `func.Out[func.InputStream]`
Refer to the [output example](#example) for details. -+ ## Exceptions and return codes
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
Title: Azure Blob storage trigger for Azure Functions description: Learn how to run an Azure Function as Azure Blob storage data changes.- Previously updated : 02/13/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Blob storage trigger for Azure Functions
The Azure Blob storage trigger requires a general-purpose storage account. Stora
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
-## Polling
-
-Polling works as a hybrid between inspecting logs and running periodic container scans. Blobs are scanned in groups of 10,000 at a time with a continuation token used between intervals.
-
-> [!WARNING]
-> In addition, [storage logs are created on a "best effort"](/rest/api/storageservices/About-Storage-Analytics-Logging) basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed.
->
-> If you require faster or more reliable blob processing, consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob. Another option is to use Event Grid; see the tutorial [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md).
->
-
-## Alternatives
-
-### Event Grid trigger
-
-> [!NOTE]
-> When using Storage Extensions 5.x and higher, the Blob trigger has built-in support for an Event Grid based Blob trigger. For more information, see the [Storage extension 5.x and higher](#storage-extension-5x-and-higher) section below.
-
-The [Event Grid trigger](functions-bindings-event-grid.md) also has built-in support for [blob events](../storage/blobs/storage-blob-event-overview.md). Use Event Grid instead of the Blob storage trigger for the following scenarios:
--- **Blob-only storage accounts**: [Blob-only storage accounts](../storage/common/storage-account-overview.md#types-of-storage-accounts) are supported for blob input and output bindings but not for blob triggers.--- **High-scale**: High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.--- **Existing Blobs**: The blob trigger will process all existing blobs in the container when you set up the trigger. If you have a container with many existing blobs and only want to trigger for new blobs, use the Event Grid trigger.--- **Minimizing latency**: If your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs if a function app has gone idle. To avoid this latency, you can switch to an App Service plan with Always On enabled. You can also use an [Event Grid trigger](functions-bindings-event-grid.md) with your Blob storage account. For an example, see the [Event Grid tutorial](../event-grid/resize-images-on-storage-blob-upload-event.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json).-
-See the [Image resize with Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md) tutorial of an Event Grid example.
-
-#### Storage Extension 5.x and higher
-
-When using the preview storage extension, there is built-in support for Event Grid in the Blob trigger which requires setting the `source` parameter to Event Grid in your existing Blob trigger.
-
-For more information on how to use the Blob Trigger based on Event Grid, refer to the [Event Grid Blob Trigger guide](./functions-event-grid-blob-trigger.md).
+## Example
-### Queue storage trigger
-Another approach to processing blobs is to write queue messages that correspond to blobs being created or modified and then use a [Queue storage trigger](./functions-bindings-storage-queue.md) to begin processing.
-## Example
-
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that writes a log when a blob is added or updated in the `samples-workitems` container.
public static void Run([BlobTrigger("samples-workitems/{name}")] Stream myBlob,
The string `{name}` in the blob trigger path `samples-workitems/{name}` creates a [binding expression](./functions-bindings-expressions-patterns.md) that you can use in function code to access the file name of the triggering blob. For more information, see [Blob name patterns](#blob-name-patterns) later in this article.
-For more information about the `BlobTrigger` attribute, see [attributes and annotations](#attributes-and-annotations).
+For more information about the `BlobTrigger` attribute, see [Attributes](#attributes).
+
+# [Isolated process](#tab/isolated-process)
+
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
+ # [C# Script](#tab/csharp-script)
public static void Run(CloudBlockBlob myBlob, string name, ILogger log)
} ```
-# [Java](#tab/java)
++ This function writes a log when a blob is added or updated in the `myblob` container.
public void run(
} ```
-# [JavaScript](#tab/javascript)
The following example shows a blob trigger binding in a *function.json* file and [JavaScript code](functions-reference-node.md) that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` container.
module.exports = async function(context) {
}; ```
-# [PowerShell](#tab/powershell)
The following example demonstrates how to create a function that runs when a file is added to `source` blob storage container.
param([byte[]] $InputBlob, $TriggerMetadata)
Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($InputBlob.Length) bytes" ```
-# [Python](#tab/python)
The following example shows a blob trigger binding in a *function.json* file and [Python code](functions-reference-python.md) that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
def main(myblob: func.InputStream):
logging.info('Python Blob trigger function processed %s', myblob.name) ``` -
+## Attributes
-## Attributes and annotations
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [BlobTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.blobtriggerattribute) attribute to define the function. C# script instead uses a function.json configuration file.
-# [C#](#tab/csharp)
+The attribute's constructor takes the following parameters:
-In [C# class libraries](functions-dotnet-class-library.md), use the following attributes to configure a blob trigger:
+|Parameter | Description|
+||-|
+|**BlobPath** | The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#blob-name-patterns).|
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-* [BlobTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Blobs/BlobTriggerAttribute.cs)
+# [In-process](#tab/in-process)
- The attribute's constructor takes a path string that indicates the container to watch and optionally a [blob name pattern](#blob-name-patterns). Here's an example:
+In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes a path string that indicates the container to watch and optionally a [blob name pattern](#blob-name-patterns). Here's an example:
- ```csharp
- [FunctionName("ResizeImage")]
- public static void Run(
- [BlobTrigger("sample-images/{name}")] Stream image,
- [Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall)
- {
- ....
- }
- ```
+```csharp
+[FunctionName("ResizeImage")]
+public static void Run(
+ [BlobTrigger("sample-images/{name}")] Stream image,
+ [Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall)
+{
+ ....
+}
+```
- You can set the `Connection` property to specify the storage account to use, as shown in the following example:
- ```csharp
- [FunctionName("ResizeImage")]
- public static void Run(
- [BlobTrigger("sample-images/{name}", Connection = "StorageConnectionAppSetting")] Stream image,
- [Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall)
- {
- ....
- }
- ```
+# [Isolated process](#tab/isolated-process)
- For a complete example, see [Trigger example](#example).
+Here's a `BlobTrigger` attribute in a method signature:
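
As a hedged sketch (the container name and function name are illustrative placeholders), the signature for an isolated-process blob-triggered function could look like this:

```csharp
// Hypothetical sketch: runs whenever a blob is added or updated in the
// samples-workitems container; {name} binds to the blob's file name.
[Function("BlobTriggerExample")]
public static void Run(
    [BlobTrigger("samples-workitems/{name}")] string myTriggerItem,
    FunctionContext context)
{
    var logger = context.GetLogger("BlobTriggerExample");
    logger.LogInformation("Triggered by blob content: {Content}", myTriggerItem);
}
```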
-* [StorageAccountAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs)
- Provides another way to specify the storage account to use. The constructor takes the name of an app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or class level. The following example shows class level and method level:
- ```csharp
- [StorageAccount("ClassLevelStorageAppSetting")]
- public static class AzureFunctions
- {
- [FunctionName("BlobTrigger")]
- [StorageAccount("FunctionLevelStorageAppSetting")]
- public static void Run( //...
- {
- ....
- }
- ```
+# [C# script](#tab/csharp-script)
-The storage account to use is determined in the following order:
+C# script uses a *function.json* file for configuration instead of attributes.
-* The `BlobTrigger` attribute's `Connection` property.
-* The `StorageAccount` attribute applied to the same parameter as the `BlobTrigger` attribute.
-* The `StorageAccount` attribute applied to the function.
-* The `StorageAccount` attribute applied to the class.
-* The default storage account for the function app ("AzureWebJobsStorage" app setting).
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blobTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the [usage](#usage) section. |
+|**name** | The name of the variable that represents the blob in function code. |
+|**path** | The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#blob-name-patterns). |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
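
As a sketch, a *function.json* trigger definition built from these properties could look like the following (the container path and app setting name are placeholders):

```json
{
  "bindings": [
    {
      "type": "blobTrigger",
      "direction": "in",
      "name": "myBlob",
      "path": "samples-workitems/{name}",
      "connection": "MyStorageConnectionAppSetting"
    }
  ]
}
```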
-# [C# Script](#tab/csharp-script)
+
-Attributes are not supported by C# Script.
-# [Java](#tab/java)
+## Annotations
The `@BlobTrigger` attribute is used to give you access to the blob that triggered the function. Refer to the [trigger example](#example) for details.
+## Configuration
-# [JavaScript](#tab/javascript)
+The following table explains the binding configuration properties that you set in the *function.json* file.
-Attributes are not supported by JavaScript.
+|function.json property |Description|
+||-|
+|**type** | Must be set to `blobTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the [usage](#usage) section. |
+|**name** | The name of the variable that represents the blob in function code. |
+|**path** | The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#blob-name-patterns). |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-# [PowerShell](#tab/powershell)
-Attributes are not supported by PowerShell.
+See the [Example section](#example) for complete examples.
-# [Python](#tab/python)
+## Metadata
-Attributes are not supported by Python.
+The blob trigger provides several metadata properties. These properties can be used as part of binding expressions in other bindings or as parameters in your code. These values have the same semantics as the [CloudBlob](/dotnet/api/microsoft.azure.storage.blob.cloudblob) type.
-
+|Property |Type |Description |
+|-|||
+|`BlobTrigger`|`string`|The path to the triggering blob.|
+|`Uri`|`System.Uri`|The blob's URI for the primary location.|
+|`Properties` |[BlobProperties](/dotnet/api/microsoft.azure.storage.blob.blobproperties)|The blob's system properties. |
+|`Metadata` |`IDictionary<string,string>`|The user-defined metadata for the blob.|
-## Configuration
+The following example logs the path to the triggering blob, including the container:
-The following table explains the binding configuration properties that you set in the *function.json* file and the `BlobTrigger` attribute.
-
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to `blobTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | n/a | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the [usage](#usage) section. |
-|**name** | n/a | The name of the variable that represents the blob in function code. |
-|**path** | **BlobPath** |The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#blob-name-patterns). |
-|**connection** | **Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
+```csharp
+public static void Run(string myBlob, string blobTrigger, ILogger log)
+{
+ log.LogInformation($"Full blob path: {blobTrigger}");
+}
+```
+## Metadata
+The blob trigger provides several metadata properties. These properties can be used as part of binding expressions in other bindings or as parameters in your code.
-## Usage
+|Property |Description |
+|-|--|
+|`blobTrigger`|The path to the triggering blob.|
+|`uri` |The blob's URI for the primary location.|
+|`properties` |The blob's system properties. |
+|`metadata` |The user-defined metadata for the blob.|
-# [C#](#tab/csharp)
+Metadata can be obtained from the `bindingData` property of the supplied `context` object, as shown in the following example, which logs the path to the triggering blob (`blobTrigger`), including the container:
+```javascript
+module.exports = async function (context, myBlob) {
+ context.log("Full blob path:", context.bindingData.blobTrigger);
+};
+```
-# [C# Script](#tab/csharp-script)
+## Metadata
+Metadata is available through the `$TriggerMetadata` parameter.
-# [Java](#tab/java)
+## Usage
-The `@BlobTrigger` attribute is used to give you access to the blob that triggered the function. Refer to the [trigger example](#example) for details.
-# [JavaScript](#tab/javascript)
+The usage of the Blob trigger depends on the extension package version and the C# modality used in your function app, which can be one of the following:
-Access blob data using `context.bindings.<NAME>` where `<NAME>` matches the value defined in *function.json*.
-# [PowerShell](#tab/powershell)
+The `@BlobTrigger` attribute is used to give you access to the blob that triggered the function. Refer to the [trigger example](#example) for details.
+Access blob data using `context.bindings.<NAME>` where `<NAME>` matches the value defined in *function.json*.
 Access the blob data via a parameter that matches the name designated by the binding's name parameter in the _function.json_ file.-
-# [Python](#tab/python)
- Access blob data via the parameter typed as [InputStream](/python/api/azure-functions/azure.functions.inputstream). Refer to the [trigger example](#example) for details. - ## Blob name patterns
To look for curly braces in file names, escape the braces by using two braces. T
If the blob is named *{20140101}-soundfile.mp3*, the `name` variable value in the function code is *soundfile.mp3*.
-## Metadata
-# [C#](#tab/csharp)
+## Polling
-# [C# Script](#tab/csharp-script)
+Polling works as a hybrid between inspecting logs and running periodic container scans. Blobs are scanned in groups of 10,000 at a time with a continuation token used between intervals.
+> [!WARNING]
+> In addition, [storage logs are created on a "best effort"](/rest/api/storageservices/About-Storage-Analytics-Logging) basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed.
+>
+> If you require faster or more reliable blob processing, consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob. Another option is to use Event Grid; see the tutorial [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md).
+>
-# [Java](#tab/java)
+## Alternatives
-Metadata is not available in Java.
+### Event Grid trigger
-# [JavaScript](#tab/javascript)
+> [!NOTE]
+> When using Storage Extensions 5.x and higher, the Blob trigger has built-in support for an Event Grid based Blob trigger. For more information, see the [Storage extension 5.x and higher](#storage-extension-5x-and-higher) section below.
-```javascript
-module.exports = async function (context, myBlob) {
- context.log("Full blob path:", context.bindingData.blobTrigger);
-};
-```
+The [Event Grid trigger](functions-bindings-event-grid.md) also has built-in support for [blob events](../storage/blobs/storage-blob-event-overview.md). Use Event Grid instead of the Blob storage trigger for the following scenarios:
-# [PowerShell](#tab/powershell)
+- **Blob-only storage accounts**: [Blob-only storage accounts](../storage/common/storage-account-overview.md#types-of-storage-accounts) are supported for blob input and output bindings but not for blob triggers.
-Metadata is available through the `$TriggerMetadata` parameter.
+- **High-scale**: High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.
-# [Python](#tab/python)
+- **Existing Blobs**: The blob trigger will process all existing blobs in the container when you set up the trigger. If you have a container with many existing blobs and only want to trigger for new blobs, use the Event Grid trigger.
-Metadata is not available in Python.
+- **Minimizing latency**: If your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs if a function app has gone idle. To avoid this latency, you can switch to an App Service plan with Always On enabled. You can also use an [Event Grid trigger](functions-bindings-event-grid.md) with your Blob storage account. For an example, see the [Event Grid tutorial](../event-grid/resize-images-on-storage-blob-upload-event.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json).
-
+See the [Image resize with Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md) tutorial for an Event Grid example.
+
+#### Storage Extension 5.x and higher
+
+When using the preview storage extension, the Blob trigger has built-in support for Event Grid; using it requires setting the `source` parameter to `EventGrid` in your existing Blob trigger definition.
+
+For more information on how to use the Blob Trigger based on Event Grid, refer to the [Event Grid Blob Trigger guide](./functions-event-grid-blob-trigger.md).
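As a sketch of that configuration (the `path` and `connection` values here are placeholders), an existing Blob trigger's *function.json* opts in to Event Grid with the `source` property:

```json
{
  "bindings": [
    {
      "type": "blobTrigger",
      "direction": "in",
      "name": "myBlob",
      "path": "samples-workitems/{name}",
      "source": "EventGrid",
      "connection": "MyStorageAccountConnection"
    }
  ]
}
```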
+
+### Queue storage trigger
+
+Another approach to processing blobs is to write queue messages that correspond to blobs being created or modified and then use a [Queue storage trigger](./functions-bindings-storage-queue.md) to begin processing.
## Blob receipts
The Azure Functions runtime ensures that no blob trigger function gets called more than once for the same new or updated blob.
Azure Functions stores blob receipts in a container named *azure-webjobs-hosts* in the Azure storage account for your function app (defined by the app setting `AzureWebJobsStorage`). A blob receipt has the following information:
-* The triggered function (`<FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>`, for example: `MyFunctionApp.Functions.CopyBlob`)
-* The container name
-* The blob type (`BlockBlob` or `PageBlob`)
-* The blob name
-* The ETag (a blob version identifier, for example: `0x8D1DC6E70A277EF`)
+- The triggered function (`<FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>`, for example: `MyFunctionApp.Functions.CopyBlob`)
+- The container name
+- The blob type (`BlockBlob` or `PageBlob`)
+- The blob name
+- The ETag (a blob version identifier, for example: `0x8D1DC6E70A277EF`)
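Conceptually, these fields form a dedupe key: if a receipt with the same function, blob, and ETag already exists, the blob isn't triggered again. A minimal Python sketch of that idea (illustrative only; the host actually stores receipts as blobs in *azure-webjobs-hosts*, not in memory):

```python
def receipt_key(function_id, container, blob_type, blob_name, etag):
    """Compose the logical identity of a blob receipt from the fields above."""
    return "|".join([function_id, container, blob_type, blob_name, etag])

receipts = set()

def should_process(key):
    """Skip the blob when a receipt for this exact version (ETag) exists."""
    if key in receipts:
        return False
    receipts.add(key)  # write the receipt after triggering
    return True

key = receipt_key("MyFunctionApp.Functions.CopyBlob", "samples-workitems",
                  "BlockBlob", "report.csv", "0x8D1DC6E70A277EF")
```

Because the ETag is part of the key, updating a blob produces a new key and triggers the function again, while re-scanning an unchanged blob does not.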
To force reprocessing of a blob, delete the blob receipt for that blob from the *azure-webjobs-hosts* container manually. While reprocessing might not occur immediately, it's guaranteed to occur at a later point in time. To reprocess immediately, the *scaninfo* blob in *azure-webjobs-hosts/blobscaninfo* can be updated. Any blobs with a last modified timestamp after the `LatestScan` property will be scanned again.

## Poison blobs
-When a blob trigger function fails for a given blob, Azure Functions retries that function a total of 5 times by default.
+When a blob trigger function fails for a given blob, Azure Functions retries that function a total of five times by default.
If all five tries fail, Azure Functions adds a message to a Storage queue named *webjobs-blobtrigger-poison*. The maximum number of retries is configurable; the same `MaxDequeueCount` setting is used for poison blob handling and poison queue message handling. The queue message for poison blobs is a JSON object that contains the following properties:
-* FunctionId (in the format `<FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>`)
-* BlobType (`BlockBlob` or `PageBlob`)
-* ContainerName
-* BlobName
-* ETag (a blob version identifier, for example: `0x8D1DC6E70A277EF`)
+- FunctionId (in the format `<FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>`)
+- BlobType (`BlockBlob` or `PageBlob`)
+- ContainerName
+- BlobName
+- ETag (a blob version identifier, for example: `0x8D1DC6E70A277EF`)
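A short Python sketch of consuming such a message (the handler and the values are hypothetical; only the property names come from the list above):

```python
import json

# A poison-queue message with the properties listed above (values are examples).
raw = json.dumps({
    "FunctionId": "MyFunctionApp.Functions.CopyBlob",
    "BlobType": "BlockBlob",
    "ContainerName": "samples-workitems",
    "BlobName": "broken.csv",
    "ETag": "0x8D1DC6E70A277EF",
})

def describe_poison_blob(message):
    """Summarize which blob exhausted its retries, for alerting or triage."""
    msg = json.loads(message)
    return (f"{msg['FunctionId']} gave up on "
            f"{msg['ContainerName']}/{msg['BlobName']} (ETag {msg['ETag']})")
```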
## Concurrency and memory usage

The blob trigger uses a queue internally, so the maximum number of concurrent function invocations is controlled by the [queues configuration in host.json](functions-host-json.md#queues). The default settings limit concurrency to 24 invocations. This limit applies separately to each function that uses a blob trigger.

> [!NOTE]
-> For apps using the [5.0.0 or higher version of the Storage extension](functions-bindings-storage-blob.md#storage-extension-5x-and-higher), the queues configuration in host.json only applies to queue triggers. The blob trigger concurrency is instead controlled by [blobs configuration in host.json](functions-host-json.md#blobs).
+> For apps using the 5.0.0 or higher version of the Storage extension, the queues configuration in host.json only applies to queue triggers. The blob trigger concurrency is instead controlled by [blobs configuration in host.json](functions-host-json.md#blobs).
[The Consumption plan](event-driven-scaling.md) limits a function app on one virtual machine (VM) to 1.5 GB of memory. Memory is used by each concurrently executing function instance and by the Functions runtime itself. If a blob-triggered function loads the entire blob into memory, the maximum memory used by that function just for blobs is 24 * maximum blob size. For example, a function app with three blob-triggered functions and the default settings would have a maximum per-VM concurrency of 3*24 = 72 function invocations.
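The worst-case arithmetic can be made concrete with a quick sketch (the 50 MB blob size is an assumed example):

```python
def worst_case_blob_memory(concurrency, max_blob_bytes):
    # Worst case: every concurrent invocation holds one full blob in memory.
    return concurrency * max_blob_bytes

# Default host.json queue settings allow 24 concurrent invocations per function.
per_function = worst_case_blob_memory(24, 50 * 1024 * 1024)  # 24 blobs of 50 MB

# Three blob-triggered functions sharing one Consumption-plan VM:
total_invocations = 3 * 24
```

With 50 MB blobs, a single function could hold roughly 1.2 GB of blob data, most of the 1.5 GB Consumption-plan limit, which is why streaming rather than loading whole blobs matters at scale.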
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Title: Azure Blob storage trigger and bindings for Azure Functions description: Learn to use the Azure Blob storage trigger and bindings in Azure Functions.- Previously updated : 02/13/2020- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Blob storage bindings for Azure Functions overview
Azure Functions integrates with [Azure Storage](../storage/index.yml) via [trigg
| Read blob storage data in a function | [Input binding](./functions-bindings-storage-blob-input.md) | | Allow a function to write blob storage data |[Output binding](./functions-bindings-storage-blob-output.md) |
-## Add to your Functions app
-### Functions 2.x and higher
+## Install extension
-Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+The extension NuGet package you install depends on the C# mode you're using in your function app:
-| Language | Add by... | Remarks
-|-||-|
-| C# | Installing the [NuGet package], version 3.x | |
-| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) is recommended to use with Visual Studio Code. |
-| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+# [In-process](#tab/in-process)
-#### Storage extension 5.x and higher
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-A new version of the Storage bindings extension is now available. It introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the tutorial [creating a function app with identity-based connections](./functions-identity-based-connections-tutorial.md). For .NET applications, the new extension version also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs). Learn more about these new types are different and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md).
+# [Isolated process](#tab/isolated-process)
-This extension version is available by installing the [NuGet package], version 5.x, or it can be added from the extension bundle v3 by adding the following in your `host.json` file:
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+++
+The functionality of the extension varies depending on the extension version:
+
+# [Extension 5.x and higher](#tab/extensionv5/in-process)
+
+A Blob-specific version of the Storage bindings extension is available. With this version, you can [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the tutorial [creating a function app with identity-based connections](./functions-identity-based-connections-tutorial.md). For .NET applications, the new extension version also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs). Learn how these new types are different and how to migrate to them in the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md).
+
+This extension version is available by installing the [NuGet package], version 5.x.
+
+# [Functions 2.x and higher](#tab/functionsv2/in-process)
+
+Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [NuGet package], version 3.x. The package is used for .NET class libraries, while the extension bundle is used for all other application types.
+
+# [Functions 1.x](#tab/functionsv1/in-process)
+
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
++
+# [Extension 5.x and higher](#tab/extensionv5/isolated-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage), version 5.x.
+
+# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
+
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage), version 3.x.
+
+# [Functions 1.x](#tab/functionsv1/isolated-process)
+
+Functions version 1.x doesn't support isolated process.
+
+# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
+
+This extension version is available from the extension bundle v3 by adding the following lines in your `host.json` file:
```json
{
This extension version is available by installing the [NuGet package], version 5
}
```
+To learn more, see [Update your extensions].
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 3.x.
+
+# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1/csharp-script)
+
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+++++
+## Install bundle
+
+The Blob storage binding is part of an [extension bundle], which is specified in your host.json project file. You may need to modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn more, see [extension bundle].
+
+# [Bundle v3.x](#tab/extensionv3)
+
+You can add this version of the extension from the preview extension bundle v3 by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "3.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.0, 4.0.0)"
+ }
+}
+```
To learn more, see [Update your extensions].
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
-[Update your extensions]: ./functions-bindings-register.md
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+# [Bundle v2.x](#tab/extensionv2)
-### Functions 1.x
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
-Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+# [Functions 1.x](#tab/functions1)
+Functions 1.x apps automatically have a reference to the extension.
++++

## host.json settings
-This section describes the function app configuration settings available for functions that this binding. These settings only apply when using [extension version 5.0.0 and higher](#storage-extension-5x-and-higher). The example host.json file below contains only the version 2.x+ settings for this binding. For more information about function app configuration settings in versions 2.x and later versions, see [host.json reference for Azure Functions](functions-host-json.md).
+This section describes the function app configuration settings available for functions that use this binding. These settings only apply when using extension version 5.0.0 and higher. The example host.json file below contains only the version 2.x+ settings for this binding. For more information about function app configuration settings in versions 2.x and later, see [host.json reference for Azure Functions](functions-host-json.md).
> [!NOTE] > This section doesn't apply to extension versions before 5.0.0. For those earlier versions, there aren't any function app-wide configuration settings for blobs.
This section describes the function app configuration settings available for fun
  "version": "2.0",
  "extensions": {
    "blobs": {
- "maxDegreeOfParallelism": "4"
+ "maxDegreeOfParallelism": 4
    }
  }
}
This section describes the function app configuration settings available for fun
- [Run a function when blob storage data changes](./functions-bindings-storage-blob-trigger.md)
- [Read blob storage data when a function runs](./functions-bindings-storage-blob-input.md)
- [Write blob storage data from a function](./functions-bindings-storage-blob-output.md)
+[core tools]: ./functions-run-local.md
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
+[Update your extensions]: ./functions-bindings-register.md
+[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
Title: Azure Queue storage output binding for Azure Functions description: Learn to create Azure Queue storage messages in Azure Functions.-- Previously updated : 02/18/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Queue storage output bindings for Azure Functions
For information on setup and configuration details, see the [overview](./functio
## Example
-# [C#](#tab/csharp)
++
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that creates a queue message for each HTTP request received.
public static class QueueFunctions
}
```
+# [Isolated process](#tab/isolated-process)
++

# [C# Script](#tab/csharp-script)

The following example shows an HTTP trigger binding in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the binding. The function creates a queue item with a **CustomQueueMessage** object payload for each HTTP request received.
public static void Run(
}
```
-# [Java](#tab/java)
+
- The following example shows a Java function that creates a queue message for when triggered by an HTTP request.
+
+The following example shows a Java function that creates a queue message when triggered by an HTTP request.
```java
@FunctionName("httpToQueue")
public static void Run(
In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@QueueOutput` annotation on parameters whose value would be written to Queue storage. The parameter type should be `OutputBinding<T>`, where `T` is any native Java type or a POJO.
-# [JavaScript](#tab/javascript)
The following example shows an HTTP trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function creates a queue item for each HTTP request received.
module.exports = async function(context) {
};
```
-# [PowerShell](#tab/powershell)
The following code examples demonstrate how to output a queue message from an HTTP-triggered function. The configuration section with the `type` of `queue` defines the output binding.

```json
{
-  "bindings": [
-    {
-      "authLevel": "anonymous",
-      "type": "httpTrigger",
-      "direction": "in",
-      "name": "Request",
-      "methods": [
-        "get",
-        "post"
-      ]
-    },
-    {
-      "type": "http",
-      "direction": "out",
-      "name": "Response"
-    },
-    {
-      "type": "queue",
-      "direction": "out",
-      "name": "Msg",
-      "queueName": "outqueue",
-      "connection": "MyStorageConnectionAppSetting"
-    }
-  ]
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "Request",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "Response"
+ },
+ {
+ "type": "queue",
+ "direction": "out",
+ "name": "Msg",
+ "queueName": "outqueue",
+ "connection": "MyStorageConnectionAppSetting"
+ }
+ ]
}
```

Using this binding configuration, a PowerShell function can create a queue message using `Push-OutputBinding`. In this example, a message is created from a query string or body parameter.

```powershell
-using namespace System.Net
+using namespace System.Net
-# Input bindings are passed in via param block.
-param($Request,ΓÇ»$TriggerMetadata)
+# Input bindings are passed in via param block.
+param($Request, $TriggerMetadata)
-# Write to the Azure Functions log stream.
-Write-Host "PowerShell HTTP trigger function processed a request."
+# Write to the Azure Functions log stream.
+Write-Host "PowerShell HTTP trigger function processed a request."
-# Interact with query parameters or the body of the request.
-$messageΓÇ»=ΓÇ»$Request.Query.Message
-Push-OutputBinding -Name Msg -Value $message
-Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
-    StatusCode = 200
-    Body = "OK"
+# Interact with query parameters or the body of the request.
+$message = $Request.Query.Message
+Push-OutputBinding -Name Msg -Value $message
+Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
+ StatusCode = 200
+ Body = "OK"
})
```

To send multiple messages at once, define a message array and use `Push-OutputBinding` to send messages to the Queue output binding.

```powershell
-using namespace System.Net
+using namespace System.Net
-# Input bindings are passed in via param block.
-param($Request,ΓÇ»$TriggerMetadata)
+# Input bindings are passed in via param block.
+param($Request, $TriggerMetadata)
-# Write to the Azure Functions log stream.
-Write-Host "PowerShell HTTP trigger function processed a request."
+# Write to the Azure Functions log stream.
+Write-Host "PowerShell HTTP trigger function processed a request."
-# Interact with query parameters or the body of the request.
-$messageΓÇ»=ΓÇ»@("message1", "message2")
-Push-OutputBinding -Name Msg -Value $message
-Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
-    StatusCode = 200
-    Body = "OK"
+# Interact with query parameters or the body of the request.
+$message = @("message1", "message2")
+Push-OutputBinding -Name Msg -Value $message
+Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
+ StatusCode = 200
+ Body = "OK"
})
```
-# [Python](#tab/python)
The following example demonstrates how to output single and multiple values to storage queues. The configuration needed for *function.json* is the same either way.
def main(req: func.HttpRequest, msg: func.Out[typing.List[str]]) -> func.HttpRes
    return 'OK'
```
+## Attributes
+
+The attribute that defines an output binding in C# libraries depends on the mode in which the C# class library runs. C# script instead uses a function.json configuration file.
-## Attributes and annotations
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [QueueAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueAttribute.cs).
+In [C# class libraries](functions-dotnet-class-library.md), use the [QueueAttribute](/dotnet/api/microsoft.azure.webjobs.queueattribute).
The attribute applies to an `out` parameter or the return value of the function. The attribute's constructor takes the name of the queue, as shown in the following example:
public static string Run([HttpTrigger] dynamic input, ILogger log)
}
```
-For a complete example, see [Output example](#example).
- You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see Trigger - attributes.
-# [C# Script](#tab/csharp-script)
+# [Isolated process](#tab/isolated-process)
-Attributes are not supported by C# Script.
+When running in an isolated process, you use the [QueueOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Storage.Queues/src/QueueOutputAttribute.cs), which takes the name of the queue, as shown in the following example:
-# [Java](#tab/java)
-The `QueueOutput` annotation allows you to write a message as the output of a function. The following example shows an HTTP-triggered function that creates a queue message.
+Only returned variables are supported when running in an isolated process. Output parameters can't be used.
+
+# [C# script](#tab/csharp-script)
+
+C# script uses a function.json file for configuration instead of attributes.
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** |Must be set to `queue`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue in function code. Set to `$return` to reference the function return value.|
+|**queueName** | The name of the queue. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
+
+## Annotations
+
+The [QueueOutput](/java/api/com.microsoft.azure.functions.annotation.queueoutput) annotation allows you to write a message as the output of a function. The following example shows an HTTP-triggered function that creates a queue message.
```java
package com.function;
public class HttpTriggerQueueOutput {
|`queueName` | Declares the queue name in the storage account. | |`connection` | Points to the storage account connection string. |
-The parameter associated with the `QueueOutput` annotation is typed as an [OutputBinding\<T\>](https://github.com/Azure/azure-functions-java-library/blob/master/src/main/java/com/microsoft/azure/functions/OutputBinding.java) instance.
+The parameter associated with the [QueueOutput](/java/api/com.microsoft.azure.functions.annotation.queueoutput) annotation is typed as an [OutputBinding\<T\>](/java/api/com.microsoft.azure.functions.outputbinding) instance.
+## Configuration
-# [JavaScript](#tab/javascript)
+The following table explains the binding configuration properties that you set in the *function.json* file.
-Attributes are not supported by JavaScript.
+|function.json property | Description|
+||--|
+|**type** |Must be set to `queue`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue in function code. Set to `$return` to reference the function return value.|
+|**queueName** | The name of the queue. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
-# [PowerShell](#tab/powershell)
-Attributes are not supported by PowerShell.
-# [Python](#tab/python)
+See the [Example section](#example) for complete examples.
-Attributes are not supported by Python.
+## Usage
+
+The usage of the Queue output binding depends on the extension package version and the C# modality used in your function app, which can be one of the following:
+
+# [In-process class library](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
+# [Isolated process](#tab/isolated-process)
+
+An isolated process class library is a compiled C# function that runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+
+# [C# script](#tab/csharp-script)
+
+C# script is used primarily when creating C# functions in the Azure portal.
-## Configuration
+Choose a version to see usage details for the mode and version.
-The following table explains the binding configuration properties that you set in the *function.json* file and the `Queue` attribute.
+# [Extension 5.x+](#tab/extensionv5/in-process)
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to `queue`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | n/a | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | n/a | The name of the variable that represents the queue in function code. Set to `$return` to reference the function return value.|
-|**queueName** |**QueueName** | The name of the queue. |
-|**connection** | **Connection** |The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
+Write a single queue message by using a method parameter such as `out T paramName`. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types:
+* An object serializable as JSON
+* `string`
+* `byte[]`
+* [QueueMessage]
+For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
-## Usage
+You can write multiple messages to the queue by using one of the following types:
-# [C#](#tab/csharp)
+* `ICollector<T>` or `IAsyncCollector<T>`
+* [QueueClient]
+
+For examples using [QueueMessage] and [QueueClient], see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
+
-### Default
+# [Extension 2.x+](#tab/extensionv2/in-process)
Write a single queue message by using a method parameter such as `out T paramName`. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types:
Write a single queue message by using a method parameter such as `out T paramNam
* `byte[]` * [CloudQueueMessage]
-If you try to bind to `CloudQueueMessage` and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
+If you try to bind to [CloudQueueMessage] and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
-In C# and C# script, write multiple queue messages by using one of the following types:
+You can write multiple messages to the queue by using one of the following types:
* `ICollector<T>` or `IAsyncCollector<T>` * [CloudQueue](/dotnet/api/microsoft.azure.storage.queue.cloudqueue)
-### Additional types
-Apps using the [5.0.0 or higher version of the Storage extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) may also use types from the [Azure SDK for .NET](/dotnet/api/overview/azure/storage.queues-readme). This version drops support for the legacy `CloudQueue` and `CloudQueueMessage` types in favor of the following types:
+# [Extension 5.x+](#tab/extensionv5/isolated-process)
-- [QueueMessage](/dotnet/api/azure.storage.queues.models.queuemessage)-- [QueueClient](/dotnet/api/azure.storage.queues.queueclient) for writing multiple queue messages
+Isolated process currently only supports binding to string parameters.
-For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
+# [Extension 2.x+](#tab/extensionv2/isolated-process)
-# [C# Script](#tab/csharp-script)
+Isolated process currently only supports binding to string parameters.
-### Default
+# [Extension 5.x+](#tab/extensionv5/csharp-script)
-Write a single queue message by using a method parameter such as `out T paramName`. The `paramName` is the value specified in the `name` property of *function.json*. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types:
+Write a single queue message by using a method parameter such as `out T paramName`. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types:
* An object serializable as JSON
* `string`
* `byte[]`
-* [CloudQueueMessage]
+* [QueueMessage]
-If you try to bind to `CloudQueueMessage` and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
+For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
-In C# and C# script, write multiple queue messages by using one of the following types:
+You can write multiple messages to the queue by using one of the following types:
* `ICollector<T>` or `IAsyncCollector<T>`
-* [CloudQueue](/dotnet/api/microsoft.azure.storage.queue.cloudqueue)
+* [QueueClient]
-### Additional types
+For examples using [QueueMessage] and [QueueClient], see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
-Apps using the [5.0.0 or higher version of the Storage extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) may also use types from the [Azure SDK for .NET](/dotnet/api/overview/azure/storage.queues-readme). This version drops support for the legacy `CloudQueue` and `CloudQueueMessage` types in favor of the following types:
+# [Extension 2.x+](#tab/extensionv2/csharp-script)
-- [QueueMessage](/dotnet/api/azure.storage.queues.models.queuemessage)
-- [QueueClient](/dotnet/api/azure.storage.queues.queueclient) for writing multiple queue messages
+Write a single queue message by using a method parameter such as `out T paramName`. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types:
-For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
+* An object serializable as JSON
+* `string`
+* `byte[]`
+* [CloudQueueMessage]
-# [Java](#tab/java)
+If you try to bind to [CloudQueueMessage] and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
-There are two options for outputting an Queue message from a function by using the [QueueOutput](/java/api/com.microsoft.azure.functions.annotation.queueoutput) annotation:
+You can write multiple messages to the queue by using one of the following types:
-- **Return value**: By applying the annotation to the function itself, the return value of the function is persisted as an Queue message.
+* `ICollector<T>` or `IAsyncCollector<T>`
+* [CloudQueue](/dotnet/api/microsoft.azure.storage.queue.cloudqueue)
-- **Imperative**: To explicitly set the message value, apply the annotation to a specific parameter of the type [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.outputbinding), where `T` is a POJO or any native Java type. With this configuration, passing a value to the `setValue` method persists the value as an Queue message.+
-# [JavaScript](#tab/javascript)
+<!--Any of the below pivots can be combined if the usage info is identical.-->
+There are two options for writing to a queue from a function by using the [QueueOutput](/java/api/com.microsoft.azure.functions.annotation.queueoutput) annotation:
-The output queue item is available via `context.bindings.<NAME>` where `<NAME>` matches the name defined in *function.json*. You can use a string or a JSON-serializable object for the queue item payload.
+- **Return value**: By applying the annotation to the function itself, the return value of the function is written to the queue.
-# [PowerShell](#tab/powershell)
+- **Imperative**: To explicitly set the message value, apply the annotation to a specific parameter of the type [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.outputbinding), where `T` is a POJO or any native Java type. With this configuration, passing a value to the `setValue` method writes the value to the queue.
-Output to the queue message is available via `Push-OutputBinding` where you pass arguments that match the name designated by binding's `name` parameter in the *function.json* file.
+
+The output queue item is available via `context.bindings.<NAME>` where `<NAME>` matches the name defined in *function.json*. You can use a string or a JSON-serializable object for the queue item payload.
-# [Python](#tab/python)
+
+Output to the queue is available via `Push-OutputBinding`, where you pass arguments that match the name designated by the binding's `name` parameter in the *function.json* file.
-There are two options for outputting an Queue message from a function:
+
+There are two options for writing from your function to the configured queue:
- **Return value**: Set the `name` property in *function.json* to `$return`. With this configuration, the function's return value is persisted as a Queue storage message.
- **Imperative**: Pass a value to the [set](/python/api/azure-functions/azure.functions.out#set-val--t--none) method of the parameter declared as an [Out](/python/api/azure-functions/azure.functions.out) type. The value passed to `set` is persisted as a Queue storage message.

## Exceptions and return codes
There are two options for outputting a queue message from a function:
<!-- LINKS --> [CloudQueueMessage]: /dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage
+[QueueMessage]: /dotnet/api/azure.storage.queues.models.queuemessage
+[QueueClient]: /dotnet/api/azure.storage.queues.queueclient
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
Title: Azure Queue storage trigger for Azure Functions
description: Learn to run an Azure Function as Azure Queue storage data changes.
-Previously updated : 02/18/2020
+Last updated : 03/04/2022
ms.devlang: csharp, java, javascript, powershell, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Queue storage trigger for Azure Functions

The queue storage trigger runs a function as messages are added to Azure Queue storage.
-## Encoding
-
-Functions expect a *base64* encoded string. Any adjustments to the encoding type (in order to prepare data as a *base64* encoded string) need to be implemented in the calling service.
## Example
+
Use the queue trigger to start a function when a new item is received on a queue. The queue message is provided as input to the function.
-# [C#](#tab/csharp)
+
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that polls the `myqueue-items` queue and writes a log each time a queue item is processed.
public static class QueueFunctions
}
```
+# [Isolated process](#tab/isolated-process)
+
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that polls the `input-queue` queue and writes several messages to an output queue each time a queue item is processed.
++
# [C# Script](#tab/csharp-script)

The following example shows a queue trigger binding in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the binding. The function polls the `myqueue-items` queue and writes a log each time a queue item is processed.
public static void Run(CloudQueueMessage myQueueItem,
The [usage](#usage) section explains `myQueueItem`, which is named by the `name` property in function.json. The [message metadata section](#message-metadata) explains all of the other variables shown.
-# [Java](#tab/java)
++
The following Java example shows a storage queue trigger function, which logs the triggered message placed into queue `myqueuename`.
public void run(
}
```
-# [JavaScript](#tab/javascript)
The following example shows a queue trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function polls the `myqueue-items` queue and writes a log each time a queue item is processed.
module.exports = async function (context, message) {
The [usage](#usage) section explains `myQueueItem`, which is named by the `name` property in function.json. The [message metadata section](#message-metadata) explains all of the other variables shown.
-# [PowerShell](#tab/powershell)
The following example demonstrates how to read a queue message passed to a function via a trigger.
Write-Host "Pop receipt: $($TriggerMetadata.PopReceipt)"
Write-Host "Dequeue count: $($TriggerMetadata.DequeueCount)"
```
-# [Python](#tab/python)
The following example demonstrates how to read a queue message passed to a function via a trigger.
def main(msg: func.QueueMessage):
logging.info(result)
```
-
+## Attributes
-## Attributes and annotations
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
-# [C#](#tab/csharp)
-In [C# class libraries](functions-dotnet-class-library.md), use the following attributes to configure a queue trigger:
+# [In-process](#tab/in-process)
-* [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs)
+In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue to monitor, as shown in the following example:
- The attribute's constructor takes the name of the queue to monitor, as shown in the following example:
+```csharp
+[FunctionName("QueueTrigger")]
+public static void Run(
+ [QueueTrigger("myqueue-items")] string myQueueItem,
+ ILogger log)
+{
+ ...
+}
+```
- ```csharp
- [FunctionName("QueueTrigger")]
- public static void Run(
- [QueueTrigger("myqueue-items")] string myQueueItem,
- ILogger log)
- {
- ...
- }
- ```
+You can set the `Connection` property to specify the app setting that contains the storage account connection string to use, as shown in the following example:
- You can set the `Connection` property to specify the app setting that contains the storage account connection string to use, as shown in the following example:
+```csharp
+[FunctionName("QueueTrigger")]
+public static void Run(
+ [QueueTrigger("myqueue-items", Connection = "StorageConnectionAppSetting")] string myQueueItem,
+ ILogger log)
+{
+ ....
+}
+```
- ```csharp
- [FunctionName("QueueTrigger")]
- public static void Run(
- [QueueTrigger("myqueue-items", Connection = "StorageConnectionAppSetting")] string myQueueItem,
- ILogger log)
- {
- ....
- }
- ```
+# [Isolated process](#tab/isolated-process)
- For a complete example, see [example](#example).
+In [C# class libraries](dotnet-isolated-process-guide.md), the attribute's constructor takes the name of the queue to monitor, as shown in the following example:
-* [StorageAccountAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs)
- Provides another way to specify the storage account to use. The constructor takes the name of an app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or class level. The following example shows class level and method level:
+This example also demonstrates setting the [connection string setting](#connections) in the attribute itself.
- ```csharp
- [StorageAccount("ClassLevelStorageAppSetting")]
- public static class AzureFunctions
- {
- [FunctionName("QueueTrigger")]
- [StorageAccount("FunctionLevelStorageAppSetting")]
- public static void Run( //...
- {
- ...
- }
- ```
+# [C# script](#tab/csharp-script)
-The storage account to use is determined in the following order:
+C# script uses a function.json file for configuration instead of attributes.
-* The `QueueTrigger` attribute's `Connection` property.
-* The `StorageAccount` attribute applied to the same parameter as the `QueueTrigger` attribute.
-* The `StorageAccount` attribute applied to the function.
-* The `StorageAccount` attribute applied to the class.
-* The "AzureWebJobsStorage" app setting.
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-# [C# Script](#tab/csharp-script)
+|function.json property | Description|
+||-|
+|**type** |Must be set to `queueTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction**| In the *function.json* file only. Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that contains the queue item payload in the function code. |
+|**queueName** | The name of the queue to poll. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
-Attributes are not supported by C# Script.
+
-# [Java](#tab/java)
+## Annotations
The `QueueTrigger` annotation gives you access to the queue that triggers the function. The following example makes the queue message available to the function via the `message` parameter.
public class QueueTriggerDemo {
|`queueName` | Declares the queue name in the storage account. |
|`connection` | Points to the storage account connection string. |
-# [JavaScript](#tab/javascript)
-
-Attributes are not supported by JavaScript.
+## Configuration
-# [PowerShell](#tab/powershell)
+The following table explains the binding configuration properties that you set in the *function.json* file and the `QueueTrigger` attribute.
-Attributes are not supported by PowerShell.
+|function.json property | Description|
+|||
+|**type** | Must be set to `queueTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction**| In the *function.json* file only. Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that contains the queue item payload in the function code. |
+|**queueName** | The name of the queue to poll. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
-# [Python](#tab/python)
-Attributes are not supported by Python.
+See the [Example section](#example) for complete examples.
-
-## Configuration
+## Usage
-The following table explains the binding configuration properties that you set in the *function.json* file and the `QueueTrigger` attribute.
+<a name="encoding"></a>
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a| Must be set to `queueTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction**| n/a | In the *function.json* file only. Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | n/a |The name of the variable that contains the queue item payload in the function code. |
-|**queueName** | **QueueName**| The name of the queue to poll. |
-|**connection** | **Connection** |The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
+> [!NOTE]
+> Functions expect a *base64* encoded string. Any adjustments to the encoding type (in order to prepare data as a *base64* encoded string) need to be implemented in the calling service.
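The note above means the producer, not the function, is responsible for the encoding. A minimal stdlib-only Python sketch of what a calling service might do before enqueuing a message (the helper names are hypothetical):

```python
import base64
import json


def encode_queue_payload(payload: dict) -> str:
    """Base64-encode a JSON payload so a queue trigger can decode it.

    Illustrative stand-in for whatever the calling service does before
    invoking the Queue storage API; the function name is hypothetical.
    """
    raw = json.dumps(payload).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")


def decode_queue_payload(message_text: str) -> dict:
    """Reverse the encoding, as the runtime does when handing the message
    to the function."""
    return json.loads(base64.b64decode(message_text))


encoded = encode_queue_payload({"orderId": 42})
print(decode_queue_payload(encoded))  # round-trips to the original dict
```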
+The usage of the queue trigger depends on the extension package version and the C# modality used in your function app, which can be one of the following:
-## Usage
+# [In-process class library](#tab/in-process)
-# [C#](#tab/csharp)
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
+# [Isolated process](#tab/isolated-process)
-### Default
+An isolated process class library is a compiled C# function that runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+
+# [C# script](#tab/csharp-script)
-Access the message data by using a method parameter such as `string paramName`. You can bind to any of the following types:
+C# script is used primarily when creating C# functions in the Azure portal.
-* Object - The Functions runtime deserializes a JSON payload into an instance of an arbitrary class defined in your code.
-* `string`
-* `byte[]`
-* [CloudQueueMessage]
+
-If you try to bind to `CloudQueueMessage` and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
+Choose a version to see usage details for the mode and version.
-### Additional types
+# [Extension 5.x+](#tab/extensionv5/in-process)
-Apps using the [5.0.0 or higher version of the Storage extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) may also use types from the [Azure SDK for .NET](/dotnet/api/overview/azure/storage.queues-readme). This version drops support for the legacy `CloudQueueMessage` type in favor of the following types:
+Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs). You can bind to any of the following types:
-- [QueueMessage](/dotnet/api/azure.storage.queues.models.queuemessage)
+* Plain-old CLR object (POCO)
+* `string`
+* `byte[]`
+* [QueueMessage]
-For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
+When binding to an object, the Functions runtime tries to deserialize the JSON payload into an instance of an arbitrary class defined in your code. For examples using [QueueMessage], see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
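As a rough illustration of what "deserialize the JSON payload into an instance of an arbitrary class" means, here is a stdlib-only Python sketch; the `Order` type and `bind_message` helper are hypothetical stand-ins, not the Functions runtime:

```python
import json
from dataclasses import dataclass


@dataclass
class Order:
    """Hypothetical message type; stands in for a class defined in your code."""
    order_id: int
    sku: str


def bind_message(body: str) -> Order:
    # Mimics what the runtime does when you bind a queue message to an
    # object: parse the JSON body, then map its fields onto the type.
    data = json.loads(body)
    return Order(order_id=data["order_id"], sku=data["sku"])


order = bind_message('{"order_id": 7, "sku": "widget"}')
print(order)  # Order(order_id=7, sku='widget')
```

If the payload isn't valid JSON for the target type, binding fails; in that case bind to `string` or `byte[]` instead and parse it yourself.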
-# [C# Script](#tab/csharp-script)
-### Default
+# [Extension 2.x+](#tab/extensionv2/in-process)
-Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the `name` property of *function.json*. You can bind to any of the following types:
+Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs). You can bind to any of the following types:
-* Object - The Functions runtime deserializes a JSON payload into an instance of an arbitrary class defined in your code.
+* Plain-old CLR object (POCO)
* `string`
* `byte[]`
* [CloudQueueMessage]
-If you try to bind to `CloudQueueMessage` and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
+When binding to an object, the Functions runtime tries to deserialize the JSON payload into an instance of an arbitrary class defined in your code. If you try to bind to [CloudQueueMessage] and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md).
-### Additional types
-Apps using the [5.0.0 or higher version of the Storage extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) may also use types from the [Azure SDK for .NET](/dotnet/api/overview/azure/storage.queues-readme). This version drops support for the legacy `CloudQueueMessage` type in favor of the following types:
+# [Extension 5.x+](#tab/extensionv5/isolated-process)
-- [QueueMessage](/dotnet/api/azure.storage.queues.models.queuemessage)
+Isolated process currently only supports binding to string parameters.
-For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
+# [Extension 2.x+](#tab/extensionv2/isolated-process)
-# [Java](#tab/java)
+Isolated process currently only supports binding to string parameters.
-The [QueueTrigger](/java/api/com.microsoft.azure.functions.annotation.queuetrigger) annotation gives you access to the queue message that triggered the function.
+# [Extension 5.x+](#tab/extensionv5/csharp-script)
-# [JavaScript](#tab/javascript)
+Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the *function.json* file. You can bind to any of the following types:
-The queue item payload is available via `context.bindings.<NAME>` where `<NAME>` matches the name defined in *function.json*. If the payload is JSON, the value is deserialized into an object.
+* Plain-old CLR object (POCO)
+* `string`
+* `byte[]`
+* [QueueMessage]
-# [PowerShell](#tab/powershell)
+When binding to an object, the Functions runtime tries to deserialize the JSON payload into an instance of an arbitrary class defined in your code. For examples using [QueueMessage], see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
-Access the queue message via string parameter that matches the name designated by binding's `name` parameter in the *function.json* file.
-# [Python](#tab/python)
+# [Extension 2.x+](#tab/extensionv2/csharp-script)
-Access the queue message via the parameter typed as [QueueMessage](/python/api/azure-functions/azure.functions.queuemessage).
+Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the *function.json* file. You can bind to any of the following types:
+
+* Plain-old CLR object (POCO)
+* `string`
+* `byte[]`
+* [CloudQueueMessage]
+
+When binding to an object, the Functions runtime tries to deserialize the JSON payload into an instance of an arbitrary class defined in your code. If you try to bind to [CloudQueueMessage] and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md).
-## Message metadata
+<!--Any of the below pivots can be combined if the usage info is identical.-->
+The [QueueTrigger](/java/api/com.microsoft.azure.functions.annotation.queuetrigger) annotation gives you access to the queue message that triggered the function.
+The queue item payload is available via `context.bindings.<NAME>` where `<NAME>` matches the name defined in *function.json*. If the payload is JSON, the value is deserialized into an object.
+Access the queue message via a string parameter that matches the name designated by the binding's `name` parameter in the *function.json* file.
+Access the queue message via the parameter typed as [QueueMessage](/python/api/azure-functions/azure.functions.queuemessage).
+
+## <a name="message-metadata"></a>Metadata
-The queue trigger provides several [metadata properties](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties can be used as part of binding expressions in other bindings or as parameters in your code. The properties are members of the [CloudQueueMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage) class.
+The queue trigger provides several [metadata properties](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties can be used as part of binding expressions in other bindings or as parameters in your code.
+
+The properties are members of the [CloudQueueMessage] class.
|Property|Type|Description| |--|-|--|
The queue trigger provides several [metadata properties](./functions-bindings-ex
|`NextVisibleTime`|`DateTimeOffset`|The time that the message will next be visible.|
|`PopReceipt`|`string`|The message's pop receipt.|
+
## Poison messages

When a queue trigger function fails, Azure Functions retries the function up to five times for a given queue message, including the first try. If all five attempts fail, the functions runtime adds a message to a queue named *&lt;originalqueuename&gt;-poison*. You can write a function to process messages from the poison queue by logging them or sending a notification that manual attention is needed.
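The retry and poison-queue behavior described above can be modeled in a few lines. This Python sketch is an illustrative model of the naming and threshold rules only, not the actual runtime code:

```python
MAX_DEQUEUE_COUNT = 5  # Functions retries up to five times, including the first try


def poison_queue_name(queue_name: str) -> str:
    # The runtime appends "-poison" to the original queue name.
    return f"{queue_name}-poison"


def should_move_to_poison(dequeue_count: int) -> bool:
    # After the fifth failed attempt, the message is set aside.
    return dequeue_count >= MAX_DEQUEUE_COUNT


print(poison_queue_name("myqueue-items"))  # myqueue-items-poison
print(should_move_to_poison(5))            # True
print(should_move_to_poison(4))            # False
```

A poison-queue handler function would then trigger on `myqueue-items-poison` and log or escalate each message.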
The queue trigger automatically prevents a function from processing a queue mess
## host.json properties
-The [host.json](functions-host-json.md#queues) file contains settings that control queue trigger behavior. See the [host.json settings](functions-bindings-storage-queue.md#hostjson-settings) section for details regarding available settings.
+The host.json file contains settings that control queue trigger behavior. See the [host.json settings](functions-bindings-storage-queue.md#host-json) section for details regarding available settings.
## Next steps
The [host.json](functions-host-json.md#queues) file contains settings that contr
<!-- LINKS --> [CloudQueueMessage]: /dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage
+[QueueMessage]: /dotnet/api/azure.storage.queues.models.queuemessage
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md
Title: Azure Queue storage trigger and bindings for Azure Functions overview
description: Understand how to use the Azure Queue storage trigger and output binding in Azure Functions.
-Previously updated : 02/18/2020
+Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Queue storage trigger and bindings for Azure Functions overview
Azure Functions can run as new Azure Queue storage messages are created and can
| Action | Type |
|---|---|
| Run a function as queue storage data changes | [Trigger](./functions-bindings-storage-queue-trigger.md) |
| Write queue storage messages | [Output binding](./functions-bindings-storage-queue-output.md) |
-## Add to your Functions app
+## Install extension
+
+The extension NuGet package you install depends on the C# mode you're using in your function app:
+
+# [In-process](#tab/in-process)
+
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+++
+The functionality of the extension varies depending on the extension version:
+
+# [Extension 5.x+](#tab/extensionv5/in-process)
+
+<a name="storage-extension-5x-and-higher"></a>
+A new version of the Storage bindings extension is available in preview. It introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). For .NET applications, the new extension version also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Queues](/dotnet/api/azure.storage.queues).
+
+This extension version is available by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage.Queues), version 5.x.
++
+# [Functions 2.x+](#tab/functionsv2/in-process)
+
+<a name="functions-2x-and-higher"></a>
+Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [NuGet package], version 3.x or 4.x.
+
+# [Functions 1.x](#tab/functionsv1/in-process)
+
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
++
+# [Extension 5.x+](#tab/extensionv5/isolated-process)
-### Functions 2.x and higher
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues), version 5.x.
-Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+
+# [Functions 2.x+](#tab/functionsv2/isolated-process)
-| Language | Add by... | Remarks
-|-||-|
-| C# | Installing the [NuGet package], version 3.x | |
-| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) is recommended to use with Visual Studio Code. |
-| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage/), version 4.x.
-#### Storage extension 5.x and higher
+# [Functions 1.x](#tab/functionsv1/isolated-process)
-A new version of the Storage bindings extension is now available. It introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). For .NET applications, the new extension version also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Queues](/dotnet/api/azure.storage.queues).
+Functions version 1.x doesn't support isolated process.
-This extension version is available by installing the [NuGet package], version 5.x, or it can be added from the extension bundle v3 by adding the following in your `host.json` file:
+# [Extension 5.x+](#tab/extensionv5/csharp-script)
+
+This extension version is available from the extension bundle v3 by adding the following lines in your `host.json` file:
```json {
This extension version is available by installing the [NuGet package], version 5
} ```
+To learn more, see [Update your extensions].
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 3.x.
+ [!INCLUDE [functions-bindings-storage-extension-v5-tables-note](../../includes/functions-bindings-storage-extension-v5-tables-note.md)]
+# [Functions 2.x+](#tab/functionsv2/csharp-script)
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1/csharp-script)
+
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+++++
+## Install bundle
+
+The Queue storage binding is part of an [extension bundle], which is specified in your host.json project file. You might need to modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn more, see [extension bundle].
+
+# [Bundle v3.x](#tab/extensionv3)
+
+You can add this version of the extension from the preview extension bundle v3 by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "3.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.0, 4.0.0)"
+ }
+}
+```
+ To learn more, see [Update your extensions].
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
-[Update your extensions]: ./functions-bindings-register.md
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
-### Functions 1.x
+# [Bundle v2.x](#tab/extensionv2)
-Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+# [Functions 1.x](#tab/functions1)
+
+Functions 1.x apps automatically have a reference to the extension.
++
-<a name="host-json"></a>
-## host.json settings
+## <a name="host-json"></a>host.json settings
[!INCLUDE [functions-host-json-section-intro](../../includes/functions-host-json-section-intro.md)]
Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
- [Run a function as queue storage data changes (Trigger)](./functions-bindings-storage-queue-trigger.md) - [Write queue storage messages (Output binding)](./functions-bindings-storage-queue-output.md)
+
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
Title: Azure Tables input bindings for Azure Functions description: Understand how to use Azure Tables input bindings in Azure Functions.- Previously updated : 01/23/2022- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
For information on setup and configuration details, see the [overview](./functio
::: zone pivot="programming-language-csharp"
-The usage of the binding depends on the extension package version, and the C# modality used in your function app, which can be one of the following:
+The usage of the binding depends on the extension package version and the C# modality used in your function app, which can be one of the following:
# [In-process](#tab/in-process)
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
Title: Azure Tables output bindings for Azure Functions description: Understand how to use Azure Tables output bindings in Azure Functions.- Previously updated : 01/23/2022- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Title: Azure Tables bindings for Azure Functions description: Understand how to use Azure Tables bindings in Azure Functions.- Previously updated : 01/23/2022- Last updated : 03/04/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers -
+
# Azure Tables bindings for Azure Functions Azure Functions integrates with [Azure Tables](../cosmos-db/table/introduction.md) via [triggers and bindings](./functions-triggers-bindings.md). Integrating with Azure Tables allows you to build functions that read and write data using the Tables API for [Azure Storage](../storage/index.yml) and [Cosmos DB](../cosmos-db/introduction.md).
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
Title: Timer trigger for Azure Functions description: Understand how to use timer triggers in Azure Functions.-- ms.assetid: d2f013d1-f458-42ae-baf8-1810138118ac Previously updated : 11/18/2020- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python -
+zone_pivot_groups: programming-languages-set-functions-lang-workers
+ # Timer trigger for Azure Functions This article explains how to work with timer triggers in Azure Functions. A timer trigger lets you run a function on a schedule.
This article explains how to work with timer triggers in Azure Functions. A time
For information on how to manually run a timer-triggered function, see [Manually run a non HTTP-triggered function](./functions-manually-run-non-http.md).
-## Packages - Functions 2.x and higher
-
-The timer trigger is provided in the [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) NuGet package, version 3.x. Source code for the package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/) GitHub repository.
- [!INCLUDE [functions-package-auto](../../includes/functions-package-auto.md)]
-## Packages - Functions 1.x
+Source code for the timer extension package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/) GitHub repository.
-The timer trigger is provided in the [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) NuGet package, version 2.x. Source code for the package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions/Extensions/Timers/) GitHub repository.
+## Example
-## Example
+This example shows a C# function that executes each time the minutes have a value divisible by five. For example, when the function starts at 18:55:00, the next execution is at 19:00:00. A `TimerInfo` object is passed to the function.
-# [C#](#tab/csharp)
-The following example shows a [C# function](functions-dotnet-class-library.md) that is executed each time the minutes have a value divisible by five (eg if the function starts at 18:55:00, the next performance will be at 19:00:00). The [`TimerInfo`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerInfo.cs) object is passed into the function.
+# [In-process](#tab/in-process)
-```cs
+```csharp
[FunctionName("TimerTriggerCSharp")] public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger log) {
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger
} ```
+# [Isolated process](#tab/isolated-process)
+++ # [C# Script](#tab/csharp-script) The following example shows a timer trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence. The [`TimerInfo`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerInfo.cs) object is passed into the function.
public static void Run(TimerInfo myTimer, ILogger log)
} ```
-# [Java](#tab/java)
The following example function triggers and executes every five minutes. The `@TimerTrigger` annotation on the function defines the schedule using the same string format as [CRON expressions](https://en.wikipedia.org/wiki/Cron#CRON_expression).
public void keepAlive(
} ```
-# [JavaScript](#tab/javascript)
-The following example shows a timer trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence. A [timer object](#usage) is passed into the function.
+The following example shows a timer trigger binding in a *function.json* file and function code that uses the binding, where an instance representing the timer is passed to the function. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence.
Here's the binding data in the *function.json* file:
Here's the binding data in the *function.json* file:
} ``` + Here's the JavaScript code: ```JavaScript
module.exports = async function (context, myTimer) {
}; ```
-# [PowerShell](#tab/powershell)
-
-The following example demonstrates how to configure the *function.json* and *run.ps1* file for a timer trigger in [PowerShell](./functions-reference-powershell.md).
-```json
-{
-  "bindings": [
-    {
-      "name": "Timer",
-      "type": "timerTrigger",
-      "direction": "in",
-      "schedule": "0 */5 * * * *"
-    }
-  ]
-}
-```
+The following is the timer function code in the run.ps1 file:
```powershell # Input bindings are passed in via param block.
-param($Timer)
+param($myTimer)
# Get the current universal time in the default string format.
$currentUTCtime = (Get-Date).ToUniversalTime()

# The 'IsPastDue' property is 'true' when the current function invocation is later than scheduled.
-ifΓÇ»($Timer.IsPastDue)ΓÇ»{
+if ($myTimer.IsPastDue) {
    Write-Host "PowerShell timer is running late!"
}

# Write an information log with the current time.
Write-Host "PowerShell timer trigger function ran! TIME: $currentUTCtime"
```
-An instance of the [timer object](#usage) is passed as the first argument to the function.
-
-# [Python](#tab/python)
-
-The following example uses a timer trigger binding whose configuration is described in the *function.json* file. The actual [Python function](functions-reference-python.md) that uses the binding is described in the *__init__.py* file. The object passed into the function is of type [azure.functions.TimerRequest object](/python/api/azure-functions/azure.functions.timerrequest). The function logic writes to the logs indicating whether the current invocation is due to a missed schedule occurrence.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "mytimer",
- "type": "timerTrigger",
- "direction": "in",
- "schedule": "0 */5 * * * *"
-}
-```
-
-Here's the Python code:
+Here's the Python code, where the object passed into the function is of type [azure.functions.TimerRequest object](/python/api/azure-functions/azure.functions.timerrequest).
```python import datetime
def main(mytimer: func.TimerRequest) -> None:
logging.info('Python timer trigger function ran at %s', utc_timestamp) ``` --
-## Attributes and annotations
+## Attributes
-# [C#](#tab/csharp)
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) attribute to define the function.
-In [C# class libraries](functions-dotnet-class-library.md), use the [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs).
+C# script instead uses a function.json configuration file.
-The attribute's constructor takes a CRON expression or a `TimeSpan`. You can use `TimeSpan` only if the function app is running on an App Service plan. `TimeSpan` is not supported for Consumption or Elastic Premium Functions.
+# [In-process](#tab/in-process)
-The following example shows a CRON expression:
+|Attribute property | Description|
+||-|
+|**Schedule**| A [CRON expression](#ncrontab-expressions) or a [TimeSpan](#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as `%ScheduleAppSetting%`. |
+|**RunOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **RunOnStartup** should rarely, if ever, be set to `true`, especially in production. |
+|**UseMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
-```csharp
-[FunctionName("TimerTriggerCSharp")]
-public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger log)
-{
- if (myTimer.IsPastDue)
- {
- log.LogInformation("Timer is running late!");
- }
- log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
-}
-```
+# [Isolated process](#tab/isolated-process)
-# [C# Script](#tab/csharp-script)
-
-Attributes are not supported by C# Script.
+|Attribute property | Description|
+||-|
+|**Schedule**| A [CRON expression](#ncrontab-expressions) or a [TimeSpan](#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as `%ScheduleAppSetting%`. |
+|**RunOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **RunOnStartup** should rarely, if ever, be set to `true`, especially in production. |
+|**UseMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
-# [Java](#tab/java)
+# [C# script](#tab/csharp-script)
-The `@TimerTrigger` annotation on the function defines the schedule using the same string format as [CRON expressions](https://en.wikipedia.org/wiki/Cron#CRON_expression).
+C# script uses a function.json file for configuration instead of attributes.
-```java
-@FunctionName("keepAlive")
-public void keepAlive(
- @TimerTrigger(name = "keepAliveTrigger", schedule = "0 */5 * * * *") String timerInfo,
- ExecutionContext context
- ) {
- // timeInfo is a JSON string, you can deserialize it to an object using your favorite JSON library
- context.getLogger().info("Timer is triggered: " + timerInfo);
-}
-```
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-# [JavaScript](#tab/javascript)
+|function.json property | Description|
+||-|
+|**type** | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the timer object in function code. |
+|**schedule**| A [CRON expression](#ncrontab-expressions) or a [TimeSpan](#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
+|**runOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **runOnStartup** should rarely, if ever, be set to `true`, especially in production. |
+|**useMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
-Attributes are not supported by JavaScript.
+
-# [PowerShell](#tab/powershell)
+## Annotations
-Attributes are not supported by PowerShell.
+The `@TimerTrigger` annotation on the function defines the `schedule` using the same string format as [CRON expressions](https://en.wikipedia.org/wiki/Cron#CRON_expression). The annotation supports the following settings:
-# [Python](#tab/python)
++ [dataType](/java/api/com.microsoft.azure.functions.annotation.timertrigger.datatype)
++ [name](/java/api/com.microsoft.azure.functions.annotation.timertrigger.name)
++ [schedule](/java/api/com.microsoft.azure.functions.annotation.timertrigger.schedule)
-Attributes are not supported by Python.
+
+## Configuration
-
+The following table explains the binding configuration properties that you set in the *function.json* file.
-## Configuration
+|function.json property | Description|
+||-|
+|**type** | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the timer object in function code. |
+|**schedule**| A [CRON expression](#ncrontab-expressions) or a [TimeSpan](#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
+|**runOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **runOnStartup** should rarely, if ever, be set to `true`, especially in production. |
+|**useMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
-The following table explains the binding configuration properties that you set in the *function.json* file and the `TimerTrigger` attribute.
+<!--The following Include and Caution are from the original file and I wasn't sure if these need to be here-->
-|function.json property | Attribute property |Description|
-|||-|
-|**type** | n/a | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | n/a | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | n/a | The name of the variable that represents the timer object in function code. |
-|**schedule**|**ScheduleExpression**|A [CRON expression](#ncrontab-expressions) or a [TimeSpan](#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
-|**runOnStartup**|**RunOnStartup**|If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity. when the function app restarts due to function changes, and when the function app scales out. So **runOnStartup** should rarely if ever be set to `true`, especially in production. |
-|**useMonitor**|**UseMonitor**|Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`.
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] > [!CAUTION]
-> We recommend against setting **runOnStartup** to `true` in production. Using this setting makes code execute at highly unpredictable times. In certain production settings, these extra executions can result in significantly higher costs for apps hosted in Consumption plans. For example, with **runOnStartup** enabled the trigger is invoked whenever your function app is scaled. Make sure you fully understand the production behavior of your functions before enabling **runOnStartup** in production.
+> Don't set **runOnStartup** to `true` in production. Using this setting makes code execute at highly unpredictable times. In certain production settings, these extra executions can result in significantly higher costs for apps hosted in a Consumption plan. For example, with **runOnStartup** enabled the trigger is invoked whenever your function app is scaled. Make sure you fully understand the production behavior of your functions before enabling **runOnStartup** in production.
+
+See the [Example section](#example) for complete examples.
## Usage
When a timer trigger function is invoked, a timer object is passed into the func
The `isPastDue` property is `true` when the current function invocation is later than scheduled. For example, a function app restart might cause an invocation to be missed.
-## NCRONTAB expressions
+### NCRONTAB expressions
Azure Functions uses the [NCronTab](https://github.com/atifaziz/NCrontab) library to interpret NCRONTAB expressions. An NCRONTAB expression is similar to a CRON expression except that it includes an additional sixth field at the beginning to use for time precision in seconds:
Each field can have one of the following types of values:
[!INCLUDE [functions-cron-expressions-months-days](../../includes/functions-cron-expressions-months-days.md)]
-### NCRONTAB examples
+#### NCRONTAB examples
Here are some examples of NCRONTAB expressions you can use for the timer trigger in Azure Functions.
Here are some examples of NCRONTAB expressions you can use for the timer trigger
> [!NOTE] > NCRONTAB expression supports both **five field** and **six field** format. The sixth field position is a value for seconds which is placed at the beginning of the expression.
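As an informal illustration of the five-field and six-field formats, the following Python sketch (not part of the Functions runtime, which parses expressions with the NCronTab library) splits an expression into labeled fields; treating a missing seconds field as `0` is shown here as an assumption for clarity:

```python
# Illustrative sketch only: label the fields of an NCRONTAB expression.
FIELD_NAMES = ["second", "minute", "hour", "day", "month", "day-of-week"]

def label_ncrontab(expression: str) -> dict:
    fields = expression.split()
    if len(fields) == 5:
        # Five-field format omits the leading seconds field (assumed 0 here).
        fields = ["0"] + fields
    if len(fields) != 6:
        raise ValueError(f"expected 5 or 6 fields, got {len(fields)}")
    return dict(zip(FIELD_NAMES, fields))

print(label_ncrontab("0 */5 * * * *"))
# {'second': '0', 'minute': '*/5', 'hour': '*', 'day': '*', 'month': '*', 'day-of-week': '*'}
```

Labeling the fields this way can help when debugging why a schedule fires more or less often than expected.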
-### NCRONTAB time zones
+#### NCRONTAB time zones
The numbers in a CRON expression refer to a time and date, not a time span. For example, a 5 in the `hour` field refers to 5:00 AM, not every 5 hours. [!INCLUDE [functions-timezone](../../includes/functions-timezone.md)]
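To make the time-versus-span point concrete, this small Python sketch shows the same scheduled instant read in two time zones; the fixed UTC-5 offset (Eastern Standard Time) is an illustrative assumption, not how the runtime resolves time zones:

```python
from datetime import datetime, timedelta, timezone

# A 5 in the hour field refers to 5:00 AM in the app's time zone (UTC by
# default), not "every 5 hours". With an assumed fixed UTC-5 offset, the
# same instant reads as midnight local time.
est = timezone(timedelta(hours=-5), name="EST")
utc_fire_time = datetime(2022, 3, 9, 5, 0, tzinfo=timezone.utc)
print(utc_fire_time.astimezone(est).hour)  # 0 -- 5:00 UTC is midnight EST
```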
-## TimeSpan
+### TimeSpan
A `TimeSpan` can be used only for a function app that runs on an App Service Plan.
Expressed as a string, the `TimeSpan` format is `hh:mm:ss` when `hh` is less tha
| "25:00:00" | every 25 days |
| "1.00:00:00" | every day |
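Based on the table above, here's a rough Python sketch of how these interval strings map to durations; the helper name is hypothetical and the parsing rules are inferred from the examples given:

```python
from datetime import timedelta

def parse_functions_timespan(value: str) -> timedelta:
    # "d.hh:mm:ss" carries an explicit day component before the dot.
    if "." in value.split(":")[0]:
        days, rest = value.split(".", 1)
        h, m, s = (int(p) for p in rest.split(":"))
        return timedelta(days=int(days), hours=h, minutes=m, seconds=s)
    a, b, c = (int(p) for p in value.split(":"))
    if a >= 24:
        # First field 24 or greater: interpreted as dd:hh:mm.
        return timedelta(days=a, hours=b, minutes=c)
    # Otherwise the format is hh:mm:ss.
    return timedelta(hours=a, minutes=b, seconds=c)

print(parse_functions_timespan("25:00:00"))   # 25 days, 0:00:00
print(parse_functions_timespan("1.00:00:00")) # 1 day, 0:00:00
```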
-## Scale-out
+### Scale-out
If a function app scales out to multiple instances, only a single instance of a timer-triggered function runs across all instances. It won't trigger again while an outstanding invocation is still running.
-## Function apps sharing Storage
+### Function apps sharing Storage
If you share storage accounts across function apps that aren't deployed to App Service, you might need to explicitly assign a host ID to each app.
You can omit the identifying value or manually set each function app's identifyi
The timer trigger uses a storage lock to ensure that there is only one timer instance when a function app scales out to multiple instances. If two function apps share the same identifying configuration and each uses a timer trigger, only one timer runs.
-## Retry behavior
+### Retry behavior
Unlike the queue trigger, the timer trigger doesn't retry after a function fails. When a function fails, it isn't called again until the next time on the schedule.
-## Manually invoke a timer trigger
+### Manually invoke a timer trigger
The timer trigger for Azure Functions provides an HTTP webhook that you can invoke to manually trigger the function. This can be useful in the following scenarios.
The timer trigger for Azure Functions provides an HTTP webhook that can be invok
For details on how to manually invoke a timer-triggered function, see [Manually run a non HTTP-triggered function](./functions-manually-run-non-http.md).
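As a sketch of such a manual invocation, the following Python snippet builds the admin request described in that article. The app name, function name, and master key are placeholders, and the network call itself is left commented out:

```python
import json
import urllib.request

# Placeholder values -- replace with your function app name, function
# name, and master key before invoking.
app_name = "myfunctionapp"
function_name = "TimerTriggerCSharp"
master_key = "<APP_MASTER_KEY>"

url = f"https://{app_name}.azurewebsites.net/admin/functions/{function_name}"
request = urllib.request.Request(
    url,
    data=json.dumps({"input": ""}).encode(),
    headers={"Content-Type": "application/json", "x-functions-key": master_key},
    method="POST",
)
print(request.full_url)
# To actually invoke the function:
# urllib.request.urlopen(request)
```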
-## Troubleshooting
+### Troubleshooting
For information about what to do when the timer trigger doesn't work as expected, see [Investigating and reporting issues with timer triggered functions not firing](https://github.com/Azure/azure-functions-host/wiki/Investigating-and-reporting-issues-with-timer-triggered-functions-not-firing). + ## Next steps > [!div class="nextstepaction"] > [Go to a quickstart that uses a timer trigger](functions-create-scheduled-function.md) > [!div class="nextstepaction"]
-> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)
+> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)
azure-functions Functions Bindings Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md
Title: Azure Functions Twilio binding description: Understand how to use Twilio bindings with Azure Functions.-- Previously updated : 07/09/2018- Last updated : 03/04/2022 ms.devlang: csharp, java, javascript, python
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Twilio binding for Azure Functions
This article explains how to send text messages by using [Twilio](https://www.tw
[!INCLUDE [intro](../../includes/functions-bindings-intro.md)]
-## Packages - Functions 1.x
+
+## Install extension
+
+The extension NuGet package you install depends on the C# mode you're using in your function app:
+
+# [In-process](#tab/in-process)
+
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+++
+The functionality of the extension varies depending on the extension version:
+
+# [Functions v2.x+](#tab/functionsv2/in-process)
-The Twilio bindings are provided in the [Microsoft.Azure.WebJobs.Extensions.Twilio](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Twilio) NuGet package, version 1.x. Source code for the package is in the [azure-webjobs-sdk](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.Twilio/) GitHub repository.
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Twilio), version 3.x.
+# [Functions v1.x](#tab/functionsv1/in-process)
-## Packages - Functions 2.x and higher
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Twilio), version 1.x.
-The Twilio bindings are provided in the [Microsoft.Azure.WebJobs.Extensions.Twilio](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Twilio) NuGet package, version 3.x. Source code for the package is in the [azure-webjobs-sdk](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/) GitHub repository.
+# [Functions v2.x+](#tab/functionsv2/isolated-process)
+The Twilio binding isn't currently supported for apps running in an isolated process.
-<a id="example"></a>
+# [Functions v1.x](#tab/functionsv1/isolated-process)
-## Example - Functions 2.x and higher
+Functions 1.x doesn't support running in an isolated process.
-# [C#](#tab/csharp)
+# [Functions v2.x+](#tab/functionsv2/csharp-script)
+
+This version of the extension should already be available to your function app with [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1/csharp-script)
+
+You can add the extension to your project by explicitly installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Twilio), version 1.x. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
++++
+## Install bundle
+
+Starting with Functions version 2.x, the Twilio extension is part of an [extension bundle], which is specified in your host.json project file. To learn more, see [extension bundle].
+
+# [Bundle v2.x](#tab/functionsv2)
+
+This version of the extension should already be available to your function app with [extension bundle], version 2.x.
+
+# [Functions 1.x](#tab/functionsv1)
+
+You can add the extension to your project by explicitly installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Twilio), version 1.x. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
+++++++
+## Example
+
+Unless otherwise noted, these examples are specific to version 2.x and later versions of the Functions runtime.
++
+# [In-process](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that sends a text message when triggered by a queue message.
namespace TwilioQueueOutput
This example uses the `TwilioSms` attribute with the method return value. An alternative is to use the attribute with an `out CreateMessageOptions` parameter or an `ICollector<CreateMessageOptions>` or `IAsyncCollector<CreateMessageOptions>` parameter.
+# [Isolated process](#tab/isolated-process)
+
+The Twilio binding isn't currently supported for a function app running in an isolated process.
+# [C# Script](#tab/csharp-script)

The following example shows a Twilio output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses an `out` parameter to send a text message.
public static async Task Run(string myQueueItem, IAsyncCollector<CreateMessageOp
} ```
-# [JavaScript](#tab/javascript)
++ The following example shows a Twilio output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. Here's binding data in the *function.json* file:
module.exports = async function (context, myQueueItem) {
}; ```
-# [Python](#tab/python)
+
+Complete PowerShell examples aren't currently available for Twilio bindings.
The following example shows how to send an SMS message using the output binding as defined in the following *function.json*.
def main(req: func.HttpRequest, twilioMessage: func.Out[str]) -> func.HttpRespon
return func.HttpResponse(f"Message sent") ```
-# [Java](#tab/java)
- The following example shows how to use the [TwilioSmsOutput](/java/api/com.microsoft.azure.functions.annotation.twiliosmsoutput) annotation to send an SMS message. Values for `to`, `from`, and `body` are required in the attribute definition even if you override them programmatically. ```java
public class TwilioOutput {
} ``` -
+## Attributes
-## Attributes and annotations
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [TwilioSms](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs) attribute.
+In [in-process](functions-dotnet-class-library.md) function apps, use the [TwilioSmsAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs), which supports the following parameters.
-For information about attribute properties that you can configure, see [Configuration](#configuration). Here's a `TwilioSms` attribute example in a method signature:
+| Attribute/annotation property | Description |
+|-|-|
+| **AccountSidSetting**| This value must be set to the name of an app setting that holds your Twilio Account Sid (`TwilioAccountSid`). When not set, the default app setting name is `AzureWebJobsTwilioAccountSid`. |
+|**AuthTokenSetting**| This value must be set to the name of an app setting that holds your Twilio authentication token (`TwilioAccountAuthToken`). When not set, the default app setting name is `AzureWebJobsTwilioAuthToken`. |
+| **To**| This value is set to the phone number that the SMS text is sent to.|
+| **From**| This value is set to the phone number that the SMS text is sent from.|
+| **Body**| This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function. |
-```csharp
-[FunctionName("QueueTwilio")]
-[return: TwilioSms(AccountSidSetting = "TwilioAccountSid", AuthTokenSetting = "TwilioAuthToken", From = "+1425XXXXXXX")]
-public static CreateMessageOptions Run(
-[QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] JObject order, ILogger log)
-{
- ...
-}
-```
-For a complete example, see [C# example](#example).
+# [Isolated process](#tab/isolated-process)
+
+The Twilio binding isn't currently supported for a function app running in an isolated process.
# [C# Script](#tab/csharp-script)
-Attributes are not supported by C# Script.
+
-# [JavaScript](#tab/javascript)
+## Annotations
-Attributes are not supported by JavaScript.
+The [TwilioSmsOutput](/java/api/com.microsoft.azure.functions.annotation.twiliosmsoutput) annotation allows you to declaratively configure the Twilio output binding by providing the following configuration values:
-# [Python](#tab/python)
+ +
-Attributes are not supported by Python.
+Place the [TwilioSmsOutput](/java/api/com.microsoft.azure.functions.annotation.twiliosmsoutput) annotation on an [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.outputbinding) parameter, where `T` may be any native Java type such as `int`, `String`, `byte[]`, or a POJO type.
-# [Java](#tab/java)
+## Configuration
-Place [TwilioSmsOutput](/java/api/com.microsoft.azure.functions.annotation.twiliosmsoutput) annotation on an [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.outputbinding) parameter where `T` may be any native Java type such as `int`, `String`, `byte[]`, or a POJO type.
+The following table explains the binding configuration properties that you set in the *function.json* file, which differ by runtime version:
+
+# [Functions v2.x+](#tab/functionsv2)
+
+| function.json property | Description|
+|||
+|**type**| must be set to `twilioSms`.|
+|**direction**| must be set to `out`.|
+|**name**| Variable name used in function code for the Twilio SMS text message. |
+|**accountSidSetting**| This value must be set to the name of an app setting that holds your Twilio Account Sid (`TwilioAccountSid`). When not set, the default app setting name is `AzureWebJobsTwilioAccountSid`. |
+|**authTokenSetting**| This value must be set to the name of an app setting that holds your Twilio authentication token (`TwilioAccountAuthToken`). When not set, the default app setting name is `AzureWebJobsTwilioAuthToken`. |
+|**from** | This value is set to the phone number that the SMS text is sent from.|
+|**body** | This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function. |
+
+In version 2.x, you set the `to` value in your code.
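Putting the v2.x properties above together, a Twilio output binding entry in *function.json* might look like the following sketch (the app setting names, phone number, and message body are placeholders):

```json
{
  "type": "twilioSms",
  "name": "message",
  "direction": "out",
  "accountSidSetting": "TwilioAccountSid",
  "authTokenSetting": "TwilioAuthToken",
  "from": "+1425XXXXXXX",
  "body": "Sent from an Azure function"
}
```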
+
+# [Functions 1.x](#tab/functionsv1)
+
+| function.json property | Description|
+||--|
+|**type**|must be set to `twilioSms`.|
+|**direction**| must be set to `out`.|
+|**name**| Variable name used in function code for the Twilio SMS text message. |
+|**accountSid**| This value must be set to the name of an app setting that holds your Twilio Account Sid (`TwilioAccountSid`). When not set, the default app setting name is `AzureWebJobsTwilioAccountSid`. |
+|**authToken**|This value must be set to the name of an app setting that holds your Twilio authentication token (`TwilioAccountAuthToken`). When not set, the default app setting name is `AzureWebJobsTwilioAuthToken`. |
+|**to**| This value is set to the phone number that the SMS text is sent to.|
+|**from**| This value is set to the phone number that the SMS text is sent from.|
+|**body**| This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function. |
-## Configuration
-
-The following table explains the binding configuration properties that you set in the *function.json* file and the `TwilioSms` attribute.
-
-| v1 function.json property | v2 function.json property | Attribute property |Description|
-||||-|
-|**type**|**type**| must be set to `twilioSms`.|
-|**direction**|**direction**| must be set to `out`.|
-|**name**|**name**| Variable name used in function code for the Twilio SMS text message. |
-|**accountSid**|**accountSidSetting**| **AccountSidSetting**| This value must be set to the name of an app setting that holds your Twilio Account Sid (`TwilioAccountSid`). If not set, the default app setting name is "AzureWebJobsTwilioAccountSid". |
-|**authToken**|**authTokenSetting**|**AuthTokenSetting**| This value must be set to the name of an app setting that holds your Twilio authentication token (`TwilioAccountAuthToken`). If not set, the default app setting name is "AzureWebJobsTwilioAuthToken". |
-|**to**| N/A - specify in code | **To**| This value is set to the phone number that the SMS text is sent to.|
-|**from**|**from** | **From**| This value is set to the phone number that the SMS text is sent from.|
-|**body**|**body** | **Body**| This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
The following table explains the binding configuration properties that you set i
> [!div class="nextstepaction"] > [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)+
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
Title: Azure Functions warmup trigger description: Understand how to use the warmup trigger in Azure Functions.-- keywords: azure functions, functions, event processing, warmup, cold start, premium, dynamic compute, serverless architecture- ms.devlang: csharp, java, javascript, python Previously updated : 11/08/2019- Last updated : 03/04/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
-# Azure Functions warm-up trigger
-
-This article explains how to work with the warmup trigger in Azure Functions. A warmup trigger is invoked when an instance is added to scale a running function app. You can use a warmup trigger to pre-load custom dependencies during the [pre-warming process](./functions-premium-plan.md#pre-warmed-instances) so that your functions are ready to start processing requests immediately.
-
-> [!NOTE]
-> The warmup trigger isn't supported for function apps running in a Consumption plan.
--
-## Packages - Functions 2.x and higher
-
-The [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) NuGet package, version **3.0.5 or higher** is required. Source code for the package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/tree/main/src/WebJobs.Extensions/Extensions/Warmup) GitHub repository.
-
+# Azure Functions warmup trigger
-## Trigger
+This article explains how to work with the warmup trigger in Azure Functions. The warmup trigger lets you define a function that runs when a new instance is added to scale out your running function app. You can use a warmup trigger to pre-load custom dependencies during the pre-warming process so your functions are ready to start processing requests immediately. Typical warmup actions include opening connections, loading dependencies, or running any other custom logic before your app begins receiving traffic. To learn more, see [pre-warmed instances](./functions-premium-plan.md#pre-warmed-instances).
-The warmup trigger lets you define a function that will be run on a new instance when it is added to your running app. You can use a warmup function to open connections, load dependencies, or run any other custom logic before your app will begin receiving traffic.
+The following considerations apply when using a warmup trigger:
-The warmup trigger is intended to create shared dependencies that will be used by the other functions in your app. [See examples of shared dependencies here](./manage-connections.md#client-code-examples).
+* The warmup trigger isn't available to apps running on the [Consumption plan](./consumption-plan.md).
+* The warmup trigger isn't supported on version 1.x of the Functions runtime.
+* Support for the warmup trigger is provided by default in all development environments. You don't have to manually install the package or register the extension.
+* There can be only one warmup trigger function per function app, and it can't be invoked after the instance is already running.
+* The warmup trigger is only called during scale-out operations, not during restarts or other non-scale startups. Make sure your logic can load all required dependencies without relying on the warmup trigger. Lazy loading is a good pattern to achieve this goal.
+* Dependencies created by the warmup trigger should be shared with other functions in your app. To learn more, see [Static clients](manage-connections.md#static-clients).
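The lazy-loading pattern recommended above can be sketched in a few lines. This is a hypothetical, language-neutral illustration (shown here in Python), not Functions-specific code: the shared client is created on first use, so functions don't depend on the warmup trigger having run, and a warmup function can simply call `get_client()` to pre-warm it.

```python
# Hypothetical sketch of the lazy-loading pattern: the shared client is
# created on first use, so no function depends on the warmup trigger
# having run before it.

_client = None

def _create_client():
    # Placeholder for an expensive dependency (connection, SDK client, etc.).
    return {"connected": True}

def get_client():
    """Return the shared client, creating it on the first call."""
    global _client
    if _client is None:
        _client = _create_client()
    return _client
```

A warmup function that calls `get_client()` simply pays the creation cost early; every later call reuses the cached instance.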
-Note that the warmup trigger is only called during scale-out operations, not during restarts or other non-scale startups. You must ensure your logic can load all necessary dependencies without using the warmup trigger. Lazy loading is a good pattern to achieve this.
+## Example
-## Trigger - example
-# [C#](#tab/csharp)
+<!--Optional intro text goes here, followed by the C# modes include.-->
-The following example shows a [C# function](functions-dotnet-class-library.md) that will run on each new instance when it is added to your app. A return value attribute isn't required.
+# [In-process](#tab/in-process)
-* Your function must be named ```warmup``` (case-insensitive) and there may only be one warmup function per app.
-* To use warmup as a .NET class library function, please make sure you have a package reference to **Microsoft.Azure.WebJobs.Extensions >= 3.0.5**
- * ```<PackageReference Include="Microsoft.Azure.WebJobs.Extensions" Version="3.0.5" />```
--
-Placeholder comments show where in the application to declare and initialize shared dependencies.
-[Learn more about shared dependencies here](./manage-connections.md#client-code-examples).
+The following example shows a [C# function](functions-dotnet-class-library.md) that runs on each new instance when it's added to your app.
```cs using Microsoft.Azure.WebJobs;
namespace WarmupSample
} } ```
-# [C# Script](#tab/csharp-script)
+# [Isolated process](#tab/isolated-process)
-The following example shows a warmup trigger in a *function.json* file and a [C# script function](functions-reference-csharp.md) that will run on each new instance when it is added to your app.
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that runs on each new instance when it's added to your app.
-Your function must be named ```warmup``` (case-insensitive), and there may only be one warmup function per app.
+
+# [C# Script](#tab/csharp-script)
+
+The following example shows a warmup trigger in a *function.json* file and a [C# script function](functions-reference-csharp.md) that runs on each new instance when it's added to your app.
Here's the *function.json* file:
Here's the *function.json* file:
} ```
-The [configuration](#triggerconfiguration) section explains these properties.
+For more information, see [Attributes](#attributes).
```cs public static void Run(WarmupContext warmupContext, ILogger log)
public static void Run(WarmupContext warmupContext, ILogger log)
} ```
-# [JavaScript](#tab/javascript)
+++
+The following example shows a warmup trigger that runs when each new instance is added to your app.
+
+```java
+@FunctionName("Warmup")
+public void run(ExecutionContext context) {
+ context.getLogger().info("Function App instance is warm 🌞🌞🌞");
+}
+```
-The following example shows a warmup trigger in a *function.json* file and a [JavaScript function](functions-reference-node.md) that will run on each new instance when it is added to your app.
-Your function must be named ```warmup``` (case-insensitive) and there may only be one warmup function per app.
+The following example shows a warmup trigger in a *function.json* file and a [JavaScript function](functions-reference-node.md) that runs on each new instance when it's added to your app.
Here's the *function.json* file:
Here's the *function.json* file:
} ```
-The [configuration](#triggerconfiguration) section explains these properties.
+The [configuration](#configuration) section explains these properties.
Here's the JavaScript code:
module.exports = async function (context, warmupContext) {
}; ```
-# [Python](#tab/python)
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "warmupTrigger",
+ "direction": "in",
+ "name": "warmupContext"
+ }
+ ]
+}
+```
+PowerShell example code pending.
+
+<!--Content and samples from the PowerShell tab in ##Examples go here.-->
-The following example shows a warmup trigger in a *function.json* file and a [Python function](functions-reference-python.md) that will run on each new instance when it is added to your app.
-Your function must be named ```warmup``` (case-insensitive) and there may only be one warmup function per app.
+The following example shows a warmup trigger in a *function.json* file and a [Python function](functions-reference-python.md) that runs on each new instance when it's added to your app.
+
+Your function must be named `warmup` (case-insensitive) and there may only be one warmup function per app.
Here's the *function.json* file:
Here's the *function.json* file:
} ```
-The [configuration](#triggerconfiguration) section explains these properties.
+For more information, see [Configuration](#configuration).
Here's the Python code:
def main(warmupContext: func.Context) -> None:
logging.info('Function App instance is warm 🌞🌞🌞') ```
-# [Java](#tab/java)
+## Attributes
-The following example shows a warmup trigger that runs when each new instance is added to your app.
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTriggerAttribute` to define the function. C# script instead uses a *function.json* configuration file.
-Your function must be named `warmup` (case-insensitive) and there may only be one warmup function per app.
+# [In-process](#tab/in-process)
-```java
-@FunctionName("Warmup")
-public void run( ExecutionContext context) {
- context.getLogger().info("Function App instance is warm 🌞🌞🌞");
-}
-```
+Use the `WarmupTriggerAttribute` to define the function. This attribute has no parameters.
-
+# [Isolated process](#tab/isolated-process)
-## Trigger - attributes
+Use the `WarmupTriggerAttribute` to define the function. This attribute has no parameters.
-In [C# class libraries](functions-dotnet-class-library.md), the `WarmupTrigger` attribute is available to configure the function.
+# [C# script](#tab/csharp-script)
-# [C#](#tab/csharp)
+C# script uses a function.json file for configuration instead of attributes.
-This example demonstrates how to use the [warmup](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/dev/src/WebJobs.Extensions/Extensions/Warmup/Trigger/WarmupTriggerAttribute.cs) attribute.
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-Note that your function must be called ```Warmup``` and there can only be one warmup function per app.
+|function.json property |Description |
+||-|
+| **type** | Required - must be set to `warmupTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the name of the binding parameter, which is usually `warmupContext`. |
-```csharp
- [FunctionName("Warmup")]
- public static void Run(
- [WarmupTrigger()] WarmupContext context, ILogger log)
- {
- ...
- }
-```
+
-For a complete example, see the [trigger example](#triggerexample).
+## Annotations
-# [C# Script](#tab/csharp-script)
+Annotations aren't required by a warmup trigger. Just use a name of `warmup` (case-insensitive) for the `FunctionName` annotation.
-Attributes are not supported by C# Script.
+## Configuration
-# [JavaScript](#tab/javascript)
+The following table explains the binding configuration properties that you set in the *function.json* file.
-Attributes are not supported by JavaScript.
+|function.json property |Description|
+||-|
+| **type** | Required - must be set to `warmupTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code. A `name` of `warmupContext` is recommended for the binding parameter.|
-# [Python](#tab/python)
-Attributes are not supported by Python.
+See the [Example section](#example) for complete examples.
-# [Java](#tab/java)
+## Usage
-The warmup trigger is not supported in Java as an attribute.
+The following considerations apply to using a warmup function in C#:
-
+# [In-process](#tab/in-process)
-## Trigger - configuration
+- Your function must be named `warmup` (case-insensitive) using the `FunctionNameAttribute`.
+- A return value attribute isn't required.
+- You must be using version `3.0.5` of the `Microsoft.Azure.WebJobs.Extensions` package, or a later version.
+- You can pass a `WarmupContext` instance to the function.
-The following table explains the binding configuration properties that you set in the *function.json* file and the `WarmupTrigger` attribute.
+# [Isolated process](#tab/isolated-process)
-|function.json property | Attribute property |Description|
-|||-|
-| **type** | n/a| Required - must be set to `warmupTrigger`. |
-| **direction** | n/a| Required - must be set to `in`. |
-| **name** | n/a| Required - the variable name used in function code.|
+- Your function must be named `warmup` (case-insensitive) using the `FunctionNameAttribute`.
+- A return value attribute isn't required.
+- You can pass an object instance to the function.
-## Trigger - usage
+# [C# script](#tab/csharp-script)
-No additional information is provided to a warmup triggered function when it is invoked.
+Not supported for version 1.x of the Functions runtime.
-## Trigger - limits
+
-* The warmup trigger is not available to apps running on the [Consumption plan](./consumption-plan.md).
-* The warmup trigger is only called during scale-out operations, not during restarts or other non-scale startups. You must ensure your logic can load all necessary dependencies without using the warmup trigger. Lazy loading is a good pattern to achieve this.
-* The warmup trigger cannot be invoked once an instance is already running.
-* There can only be one warmup trigger function per function app.
+Your function must be named `warmup` (case-insensitive) using the `FunctionName` annotation.
+The function type in function.json must be set to `warmupTrigger`.
## Next steps
-[Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)
++ [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)
++ [Learn more about Premium plan](functions-premium-plan.md)
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-linux-custom-image.md
A function app on Azure manages the execution of your functions in your hosting
1. The function can now use this connection string to access the storage account. > [!NOTE]
-> If you publish your custom image to a private container account, you should use environment variables in the Dockerfile for the connection string instead. For more information, see the [ENV instruction](https://docs.docker.com/engine/reference/builder/#env). You should also set the variables `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD`. To use the values, then, you must rebuild the image, push the image to the registry, and then restart the function app on Azure.
+> If you publish your custom image to a private container registry, you should use environment variables in the Dockerfile for the connection string instead. For more information, see the [ENV instruction](https://docs.docker.com/engine/reference/builder/#env). You should also set the variables `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD`. To use the values, then, you must rebuild the image, push the image to the registry, and then restart the function app on Azure.
## Verify your functions on Azure
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md
This setting is a child of [logging](#logging). It controls the console logging
## cosmosDb
-Configuration setting can be found in [Cosmos DB triggers and bindings](functions-bindings-cosmosdb-v2-output.md#host-json).
+Configuration setting can be found in [Cosmos DB triggers and bindings](functions-bindings-cosmosdb-v2.md#hostjson-settings).
## customHandler
Configuration settings for [Host health monitor](https://github.com/Azure/azure-
## http
-Configuration settings can be found in [http triggers and bindings](functions-bindings-http-webhook-output.md#hostjson-settings).
+Configuration settings can be found in [http triggers and bindings](functions-bindings-http-webhook.md#hostjson-settings).
## logging
Configuration setting can be found in [SendGrid triggers and bindings](functions
## serviceBus
-Configuration setting can be found in [Service Bus triggers and bindings](functions-bindings-service-bus.md#host-json).
+Configuration setting can be found in [Service Bus triggers and bindings](functions-bindings-service-bus.md).
## singleton
azure-functions Functions Identity Based Connections Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-based-connections-tutorial.md
Similar to the steps you took before with the user-assigned identity and your ke
1. Click on your application. It should move down into the **Selected members** section. Click **Select**. 1. Back on the **Add role assignment** screen, click **Review + assign**. Review the configuration, and then click **Review + assign**.
+
+> [!TIP]
+> If you intend to use the function app for a blob-triggered function, you will need to repeat these steps for the **Storage Account Contributor** and **Storage Queue Data Contributor** roles over the account used by AzureWebJobsStorage. To learn more, see [Blob trigger identity-based connections](./functions-bindings-storage-blob-trigger.md#identity-based-connections).
### Edit the AzureWebJobsStorage configuration
Next you will update your function app to use its system-assigned identity when
You've removed the storage connection string requirement for AzureWebJobsStorage by configuring your app to instead connect to blobs using managed identities.
+> [!NOTE]
+> The `__accountName` syntax is unique to the AzureWebJobsStorage connection and cannot be used for other storage connections. To learn to define other connections, check the reference for each trigger and binding your app uses.
+ ## Next steps This tutorial showed how to create a function app without storing secrets in its configuration.
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
Pre-warmed instances are instances warmed as a buffer during scale and activatio
When an app has a long warm-up (like a custom container image), you may need to increase this buffer. A pre-warmed instance becomes active only after all active instances have been sufficiently used.
+You can also define a warmup trigger that runs during the pre-warming process. A warmup trigger lets you pre-load custom dependencies so your functions are ready to start processing requests immediately. To learn more, see [Azure Functions warmup trigger](functions-bindings-warmup.md).
+ Consider this example of how always-ready instances and pre-warmed instances work together. A premium function app has five always-ready instances configured, and the default of one pre-warmed instance. When the app is idle and no events are triggering, the app is provisioned and running with five instances. At this time, you aren't billed for a pre-warmed instance as the always-ready instances aren't used, and no pre-warmed instance is allocated. As soon as the first trigger comes in, the five always-ready instances become active, and a pre-warmed instance is allocated. The app is now running with six provisioned instances: the five now-active always-ready instances, and the sixth pre-warmed and inactive buffer. If the rate of executions continues to increase, the five active instances are eventually used. When the platform decides to scale beyond five instances, it scales into the pre-warmed instance. When that happens, there are now six active instances, and a seventh instance is instantly provisioned to fill the pre-warmed buffer. This sequence of scaling and pre-warming continues until the maximum instance count for the app is reached. No instances are pre-warmed or activated beyond the maximum.
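The instance accounting in the walkthrough above can be expressed as a simple model. This is an illustrative simplification of the platform behavior, not an official formula; the function name and parameters are hypothetical:

```python
def provisioned_instances(active, always_ready=5, prewarmed=1, maximum=20):
    """Simplified model of always-ready plus pre-warmed instance counts.

    While the app is idle, only the always-ready instances are provisioned
    and no pre-warmed buffer is allocated. Once the app is triggered, one
    pre-warmed instance is kept on top of the active count, capped at the
    plan's maximum instance count.
    """
    if active == 0:
        return always_ready
    return min(max(active, always_ready) + prewarmed, maximum)

# Idle app: five always-ready instances, no pre-warmed buffer (5 provisioned).
# First trigger: five active plus one pre-warmed buffer (6 provisioned).
# Scaling past five: six active plus a refilled buffer (7 provisioned).
```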
azure-functions Functions Proxies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-proxies.md
This article explains how to configure and work with Azure Functions Proxies. Wi
Standard Functions billing applies to proxy executions. For more information, see [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/). - > [!NOTE] > Proxies is available in Azure Functions [versions](./functions-versions.md) 1.x to 3.x. >
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
When the connection name resolves to a single exact value, the runtime identifie
However, a connection name can also refer to a collection of multiple configuration items, useful for configuring [identity-based connections](#configure-an-identity-based-connection). Environment variables can be treated as a collection by using a shared prefix that ends in double underscores `__`. The group can then be referenced by setting the connection name to this prefix.
-For example, the `connection` property for a Azure Blob trigger definition might be "Storage1". As long as there is no single string value configured by an environment variable named "Storage1", an environment variable named `Storage1__blobServiceUri` could be used to inform the `blobServiceUri` property of the connection. The connection properties are different for each service. Refer to the documentation for the component that uses the connection.
+For example, the `connection` property for an Azure Blob trigger definition might be "Storage1". As long as there is no single string value configured by an environment variable named "Storage1", an environment variable named `Storage1__blobServiceUri` could be used to inform the `blobServiceUri` property of the connection. The connection properties are different for each service. Refer to the documentation for the component that uses the connection.
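The resolution order described above can be sketched as follows. `resolve_connection` is a hypothetical helper illustrating the rule, not the runtime's actual implementation (the real configuration lookup is more involved):

```python
def resolve_connection(name, environ):
    """Resolve a connection name against a dict of environment variables.

    A single exact value wins and is treated as a connection string.
    Otherwise, variables sharing the '<name>__' prefix are gathered into
    a dict of connection properties, as used by identity-based connections.
    """
    if name in environ:
        return environ[name]
    prefix = name + "__"
    props = {key[len(prefix):]: value
             for key, value in environ.items()
             if key.startswith(prefix)}
    return props or None
```

For example, with only `Storage1__blobServiceUri` set, resolving `Storage1` yields a property collection containing `blobServiceUri` rather than a connection string.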
### Configure an identity-based connection
Identity-based connections are supported by the following components:
| Connection source | Plans supported | Learn more | ||--|--|
-| Azure Blob triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher) |
+| Azure Blob triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-storage-blob.md#install-extension) |
| Azure Queue triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) |
-| Azure Event Hubs triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-event-hubs.md#event-hubs-extension-5x-and-higher) |
-| Azure Service Bus triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-service-bus.md#service-bus-extension-5x-and-higher) |
-| Azure Cosmos DB triggers and bindings - Preview | Elastic Premium | [Extension version 4.0.0-preview1 or later](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) |
+| Azure Event Hubs triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-event-hubs.md?tabs=extensionv5) |
+| Azure Service Bus triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-service-bus.md) |
+| Azure Cosmos DB triggers and bindings - Preview | Elastic Premium | [Extension version 4.0.0-preview1 or later](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) |
+| Azure Tables (when using Azure Storage) - Preview | All | [Table API extension](./functions-bindings-storage-table.md#table-api-extension) |
| Host-required storage ("AzureWebJobsStorage") - Preview | All | [Connecting to host storage with an identity](#connecting-to-host-storage-with-an-identity-preview) |

> [!NOTE]
Choose a tab below to learn about permissions for each component:
[!INCLUDE [functions-cosmos-permissions](../../includes/functions-cosmos-permissions.md)]
+# [Azure Tables API extension (preview)](#tab/table)
+
# [Functions host storage (preview)](#tab/azurewebjobsstorage)
[!INCLUDE [functions-azurewebjobsstorage-permissions](../../includes/functions-azurewebjobsstorage-permissions.md)]
When running locally, the above configuration tells the runtime to use your loca
If none of these options are successful, an error will occur.
-Because this is using the your developer identity, you may already have some roles against development resources, but they may not provide data access. Management roles like [Owner](../role-based-access-control/built-in-roles.md#owner) are not sufficient. Double-check what permissions are required for connections for each component, and make sure that you have them assigned to yourself.
+Your identity may already have some role assignments against Azure resources used for development, but those roles may not provide the necessary data access. Management roles like [Owner](../role-based-access-control/built-in-roles.md#owner) are not sufficient. Double-check what permissions are required for connections for each component, and make sure that you have them assigned to yourself.
In some cases, you may wish to specify use of a different identity. You can add configuration properties for the connection that point to the alternate identity based on a client ID and client secret for an Azure Active Directory service principal. **This configuration option is not supported when hosted in the Azure Functions service.** To use an ID and secret on your local machine, define the connection with the following additional properties:
Here is an example of `local.settings.json` properties required for identity-bas
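A minimal sketch of what such properties might look like for a hypothetical connection prefix `Storage1` (the service principal values are placeholders; verify the exact property names against the identity-based connections reference for your extension version):

```json
{
  "IsEncrypted": false,
  "Values": {
    "Storage1__blobServiceUri": "https://<storage_account_name>.blob.core.windows.net",
    "Storage1__tenantId": "<tenant_id>",
    "Storage1__clientId": "<client_id>",
    "Storage1__clientSecret": "<client_secret>"
  }
}
```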
#### Connecting to host storage with an identity (Preview)
-Azure Functions by default uses the "AzureWebJobsStorage" connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This can be configured to leverage an identity as well.
+The Azure Functions host uses the "AzureWebJobsStorage" connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This can be configured to leverage an identity as well.
> [!CAUTION]
-> Other components in Functions rely on "AzureWebJobsStorage" for default behaviors. You should not move it to an identity-based connection if you are using older versions of extensions that do not support this type of connection, including triggers and bindings for Azure Blobs and Event Hubs. Similarly, `AzureWebJobsStorage` is used for deployment artifacts when using server-side build in Linux Consumption, and if you enable this, you will need to deploy via [an external deployment package](run-functions-from-deployment-package.md).
+> Other components in Functions rely on "AzureWebJobsStorage" for default behaviors. You should not move it to an identity-based connection if you are using older versions of extensions that do not support this type of connection, including triggers and bindings for Azure Blobs, Event Hubs, and Durable Functions. Similarly, `AzureWebJobsStorage` is used for deployment artifacts when using server-side build in Linux Consumption, and if you enable this, you will need to deploy via [an external deployment package](run-functions-from-deployment-package.md).
> > In addition, some apps reuse "AzureWebJobsStorage" for other storage connections in their triggers, bindings, and/or function code. Make sure that all uses of "AzureWebJobsStorage" are able to use the identity-based connection format before changing this connection from a connection string.
To use an identity-based connection for "AzureWebJobsStorage", configure the fol
| Setting | Description | Example value |
|--|--|--|
| `AzureWebJobsStorage__blobServiceUri` | The data plane URI of the blob service of the storage account, using the HTTPS scheme. | https://<storage_account_name>.blob.core.windows.net |
| `AzureWebJobsStorage__queueServiceUri` | The data plane URI of the queue service of the storage account, using the HTTPS scheme. | https://<storage_account_name>.queue.core.windows.net |
+| `AzureWebJobsStorage__tableServiceUri` | The data plane URI of a table service of the storage account, using the HTTPS scheme. | https://<storage_account_name>.table.core.windows.net |
[Common properties for identity-based connections](#common-properties-for-identity-based-connections) may also be set as well.
-If you are using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The blob and queue endpoints will be inferred for this account. This will not work if the storage account is in a sovereign cloud or has a custom DNS.
+If you are configuring "AzureWebJobsStorage" using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The blob and queue endpoints will be inferred for this account. This will not work if the storage account is in a sovereign cloud or has a custom DNS.
| Setting | Description | Example value |
|--|--|--|
-| `AzureWebJobsStorage__accountName` | The account name of a storage account, valid only if the account is not in a sovereign cloud and does not have a custom DNS. | <storage_account_name> |
+| `AzureWebJobsStorage__accountName` | The account name of a storage account, valid only if the account is not in a sovereign cloud and does not have a custom DNS. This syntax is unique to "AzureWebJobsStorage" and cannot be used for other identity-based connections. | <storage_account_name> |
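For a global-cloud storage account, the identity-based host storage configuration could then reduce to a single setting (a sketch in `local.settings.json` form; the account name is a placeholder):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage__accountName": "<storage_account_name>"
  }
}
```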
[!INCLUDE [functions-azurewebjobsstorage-permissions](../../includes/functions-azurewebjobsstorage-permissions.md)]
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
You can make GET requests from a browser passing data in the query string. For a
For all functions other than HTTP and Event Grid triggers, you can test your functions locally using REST by calling a special endpoint called an _administration endpoint_. Calling this endpoint with an HTTP POST request on the local server triggers the function.
-To test Event Grid triggered functions locally, see [Local testing with viewer web app](functions-bindings-event-grid-trigger.md#local-testing-with-viewer-web-app).
+To test Event Grid triggered functions locally, see [Local testing with viewer web app](event-grid-how-tos.md#local-testing-with-viewer-web-app).
You can optionally pass test data to the execution in the body of the POST request. This functionality is similar to the **Test** tab in the Azure portal.
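For instance, a queue-triggered function named `QueueTrigger` (an illustrative name) could be invoked locally by sending an HTTP POST to `http://localhost:7071/admin/functions/QueueTrigger`, where 7071 is the default local port. The optional request body wraps the test payload in an `input` property:

```json
{
  "input": "sample queue message body"
}
```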
azure-functions Performance Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/performance-reliability.md
For C# functions, you can change the type to a strongly-typed array. For exampl
The `host.json` file in the function app allows for configuration of host runtime and trigger behaviors. In addition to batching behaviors, you can manage concurrency for a number of triggers. Often adjusting the values in these options can help each instance scale appropriately for the demands of the invoked functions.
-Settings in the host.json file apply across all functions within the app, within a *single instance* of the function. For example, if you had a function app with two HTTP functions and [`maxConcurrentRequests`](functions-bindings-http-webhook-output.md#hostjson-settings) requests set to 25, a request to either HTTP trigger would count towards the shared 25 concurrent requests. When that function app is scaled to 10 instances, the ten functions effectively allow 250 concurrent requests (10 instances * 25 concurrent requests per instance).
+Settings in the host.json file apply across all functions within the app, within a *single instance* of the function. For example, if you had a function app with two HTTP functions and [`maxConcurrentRequests`](functions-bindings-http-webhook.md#hostjson-settings) requests set to 25, a request to either HTTP trigger would count towards the shared 25 concurrent requests. When that function app is scaled to 10 instances, the ten functions effectively allow 250 concurrent requests (10 instances * 25 concurrent requests per instance).
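The shared limit from the example above would be set in `host.json` roughly like this (a sketch for Functions 2.x and later; the value shown is illustrative):

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 25
    }
  }
}
```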
Other host configuration options are found in the [host.json configuration article](functions-host-json.md).
azure-functions Shift Expressjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/shift-expressjs.md
When migrating code to a serverless architecture, refactoring Express.js endpoin
- **Differing APIs**: The API used to process both requests and responses differs between Azure Functions and Express.js. The following example details the required changes.
-- **Default route**: By default, Azure Functions endpoints are exposed under the `api` route. Routing rules are configurable via [`routePrefix` in the _host.json_ file](./functions-bindings-http-webhook-output.md#hostjson-settings).
+- **Default route**: By default, Azure Functions endpoints are exposed under the `api` route. Routing rules are configurable via [`routePrefix` in the _host.json_ file](./functions-bindings-http-webhook.md#hostjson-settings).
- **Configuration and conventions**: A Functions app uses the _function.json_ file to define HTTP verbs, define security policies, and can configure the function's [input and output](./functions-triggers-bindings.md). By default, the folder name that contains the function files defines the endpoint name, but you can change the name via the `route` property in the [function.json](./functions-bindings-http-webhook-trigger.md#customize-the-http-endpoint) file.
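A sketch of a _function.json_ that overrides the endpoint name via `route` (the `products/{id}` template and binding names are illustrative):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "get" ],
      "route": "products/{id}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

Combined with the default `api` route prefix, this function would be served at `/api/products/{id}`.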
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md
Previously updated : 07/22/2021
+recommendations: false
Last updated : 03/07/2022

# Azure for secure worldwide public sector cloud adoption
This article addresses common data residency, security, and isolation concerns p
## Executive summary
-Microsoft Azure provides strong customer commitments regarding data residency and transfer policies. Most Azure services enable you to specify the deployment region. For those services, Microsoft will not store your data outside your specified geography. You can use extensive and robust data encryption options to help safeguard your data in Azure and control who can access it.
+Microsoft Azure provides strong customer commitments regarding data residency and transfer policies. Most Azure services enable you to specify the deployment region. For those services, Microsoft won't store your data outside your specified geography. You can use extensive and robust data encryption options to help safeguard your data in Azure and control who can access it.
Listed below are some of the options available to you to safeguard your data in Azure:
- You can choose to store your most sensitive content in services that store data at rest in Geography.
- You can obtain further protection by encrypting data with your own key using Azure Key Vault.
-- While you cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.
+- While you can't control the precise network path for data in transit, data encryption in transit helps protect data from interception.
- Azure is a 24x7 globally operated service; however, support and troubleshooting rarely require access to your data.
- If you want extra control for support and troubleshooting scenarios, you can use Customer Lockbox for Azure to approve or deny access to your data.
- Microsoft will notify you of any breach of your data (customer or personal) within 72 hours of incident declaration.
This article addresses common data residency, security, and isolation concerns p
## Data residency
-Established privacy regulations are silent on **data residency and data location**, and permit data transfers in accordance with approved mechanisms such as the EU Standard Contractual Clauses (also known as EU Model Clauses). Microsoft commits contractually in the Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA) that all potential transfers of customer data out of the EU, European Economic Area (EEA), and Switzerland shall be governed by the EU Model Clauses. Microsoft will abide by the requirements of the EEA and Swiss data protection laws regarding the collection, use, transfer, retention, and other processing of personal data from the EEA and Switzerland. All transfers of personal data are subject to appropriate safeguards and documentation requirements. However, many customers considering cloud adoption are seeking assurances about customer and personal data being kept within the geographic boundaries corresponding to customer operations or location of customer's end users.
+Established privacy regulations are silent on **data residency and data location**, and permit data transfers in accordance with approved mechanisms such as the EU Standard Contractual Clauses (also known as EU Model Clauses). Microsoft commits contractually in the Microsoft Products and Services [Data Protection Addendum](https://aka.ms/DPA) (DPA) that all potential transfers of customer data out of the EU, European Economic Area (EEA), and Switzerland shall be governed by the EU Model Clauses. Microsoft will abide by the requirements of the EEA and Swiss data protection laws regarding the collection, use, transfer, retention, and other processing of personal data from the EEA and Switzerland. All transfers of personal data are subject to appropriate safeguards and documentation requirements. However, many customers considering cloud adoption are seeking assurances about customer and personal data being kept within the geographic boundaries corresponding to customer operations or location of customer's end users.
**Data sovereignty** implies data residency; however, it also introduces rules and requirements that define who has control over and access to customer data stored in the cloud. In many cases, data sovereignty mandates that customer data be subject to the laws and legal jurisdiction of the country or region in which data resides. These laws can have direct implications on data access even for platform maintenance or customer-initiated support requests. You can use Azure public multi-tenant cloud in combination with Azure Stack products for on-premises and edge solutions to meet your data sovereignty requirements, as described later in this article. These other products can be deployed to put you solely in control of your data, including storage, processing, transmission, and remote access.
Among several [data categories and definitions](https://www.microsoft.com/trust-
- **Customer data** is all data that you provide to Microsoft to manage on your behalf through your use of Microsoft online services.
- **Customer content** is a subset of customer data and includes, for example, the content stored in your Azure Storage account.
-- **Personal data** means any information associated with a specific natural person, for example, names and contact information of your end users. However, personal data could also include data that is not customer data, such as user ID that Azure can generate and assign to your administrator – such personal data is considered pseudonymous because it cannot identify an individual on its own.
+- **Personal data** means any information associated with a specific natural person, for example, names and contact information of your end users. However, personal data could also include data that isn't customer data, such as user ID that Azure can generate and assign to your administrator – such personal data is considered pseudonymous because it can't identify an individual on its own.
- **Support and consulting data** mean all data provided by you to Microsoft to obtain support or Professional Services.
-The following sections address key cloud implications for data residency and the fundamental principles guiding Microsoft's safeguarding of your data at rest, in transit, and as part of support requests that you initiate.
+For more information about your options to control data residency and meet your data protection obligations, see [Enabling data residency and data protection in Microsoft Azure regions](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/). The following sections address key cloud implications for data residency and the fundamental principles guiding Microsoft's safeguarding of your data at rest, in transit, and as part of support requests that you initiate.
### Data at rest
Microsoft provides transparent insight into data location for all online service
Microsoft Azure provides [strong customer commitments](https://azure.microsoft.com/global-infrastructure/data-residency/) regarding data residency and transfer policies:

-- **Data storage for regional
-- **Data storage for non-regional
+- **Data storage for regional
+- **Data storage for non-regional
Your data in an Azure Storage account is [always replicated](../storage/common/storage-redundancy.md) to help ensure durability and high availability. Azure Storage copies your data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. You can typically choose to replicate your data within the same data center, across availability zones within the same region, or across geographically separated regions. Specifically, when creating a storage account, you can select one of the following redundancy options:
Your data in an Azure Storage account is [always replicated](../storage/common/s
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage provides LRS and ZRS redundancy options for replicating data in the primary region. For applications requiring high availability, you can choose geo-replication to a secondary region that is hundreds of kilometers away from the primary region. Azure Storage offers GRS and GZRS options for copying data to a secondary region. More options are available to you for configuring read access (RA) to the secondary region (RA-GRS and RA-GZRS), as explained in [Read access to data in the secondary region](../storage/common/storage-redundancy.md#read-access-to-data-in-the-secondary-region).
-Azure Storage redundancy options can have implications on data residency as Azure relies on [paired regions](../availability-zones/cross-region-replication-azure.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS). For example, if you are concerned about geo-replication across regions that span country boundaries, you may want to choose LRS or ZRS to keep Azure Storage data at rest within the geographic boundaries of the country in which the primary region is located. Similarly, [geo replication for Azure SQL Database](../azure-sql/database/active-geo-replication-overview.md) can be obtained by configuring asynchronous replication of transactions to any region in the world, although it is recommended that paired regions be used for this purpose as well. If you need to keep relational data inside the geographic boundaries of your country/region, you should not configure Azure SQL Database asynchronous replication to a region outside that country.
+Azure Storage redundancy options can have implications on data residency as Azure relies on [paired regions](../availability-zones/cross-region-replication-azure.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS). For example, if you're concerned about geo-replication across regions that span country boundaries, you may want to choose LRS or ZRS to keep Azure Storage data at rest within the geographic boundaries of the country in which the primary region is located. Similarly, [geo replication for Azure SQL Database](../azure-sql/database/active-geo-replication-overview.md) can be obtained by configuring asynchronous replication of transactions to any region in the world, although it's recommended that paired regions be used for this purpose as well. If you need to keep relational data inside the geographic boundaries of your country/region, you shouldn't configure Azure SQL Database asynchronous replication to a region outside that country.
As described on the [data location page](https://azure.microsoft.com/global-infrastructure/data-residency/), most Azure **regional** services honor the data at rest commitment to ensure that your data remains within the geographic boundary where the corresponding service is deployed. A handful of exceptions to this rule are noted on the data location page. You should review these exceptions to determine if the type of data stored outside your chosen deployment Geography meets your needs.
-**Non-regional** Azure services do not enable you to specify the region where the services will be deployed. Some non-regional services do not store your data at all but merely provide global routing functions such as Azure Traffic Manager or Azure DNS. Other non-regional services are intended for data caching at edge locations around the globe, such as the Content Delivery Network – such services are optional and you should not use them for sensitive customer content you wish to keep in your Geography. One non-regional service that warrants extra discussion is **Azure Active Directory**, which is discussed in the next section.
+**Non-regional** Azure services don't enable you to specify the region where the services will be deployed. Some non-regional services don't store your data at all but merely provide global routing functions such as Azure Traffic Manager or Azure DNS. Other non-regional services are intended for data caching at edge locations around the globe, such as the Content Delivery Network – such services are optional and you shouldn't use them for sensitive customer content you wish to keep in your Geography. One non-regional service that warrants extra discussion is **Azure Active Directory**, which is discussed in the next section.
#### *Customer data in Azure Active Directory*
Azure Active Directory (Azure AD) is a non-regional service that may store ident
- Europe, where Azure AD keeps most of the identity data within European datacenters except as noted in [Identity data storage for European customers in Azure Active Directory](../active-directory/fundamentals/active-directory-data-storage-eu.md). - Australia and New Zealand, where identity data is stored in Australia except as noted in [Customer data storage for Australian and New Zealand customers in Azure Active Directory](../active-directory/fundamentals/active-directory-data-storage-australia-newzealand.md).
-Azure AD provides a [dashboard](https://go.microsoft.com/fwlink/?linkid=2092972) with transparent insight into data location for every Azure AD component service. Among other features, Azure AD is an identity management service that stores directory data for your Azure administrators, including user **personal data** categorized as **End User Identifiable Information (EUII)**, for example, names, email addresses, and so on. In Azure AD, you can create User, Group, Device, Application, and other entities using various attribute types such as Integer, DateTime, Binary, String (limited to 256 characters), and so on. Azure AD is not intended to store your customer content and it is not possible to store blobs, files, database records, and similar structures in Azure AD. Moreover, Azure AD is not intended to be an identity management service for your external end users – [Azure AD B2C](../active-directory-b2c/overview.md) should be used for that purpose.
+Azure AD provides a [dashboard](https://go.microsoft.com/fwlink/?linkid=2092972) with transparent insight into data location for every Azure AD component service. Among other features, Azure AD is an identity management service that stores directory data for your Azure administrators, including user **personal data** categorized as **End User Identifiable Information (EUII)**, for example, names, email addresses, and so on. In Azure AD, you can create User, Group, Device, Application, and other entities using various attribute types such as Integer, DateTime, Binary, String (limited to 256 characters), and so on. Azure AD isn't intended to store your customer content and it isn't possible to store blobs, files, database records, and similar structures in Azure AD. Moreover, Azure AD isn't intended to be an identity management service for your external end users – [Azure AD B2C](../active-directory-b2c/overview.md) should be used for that purpose.
Azure AD implements extensive **data protection features**, including tenant isolation and access control, data encryption in transit, secrets encryption and management, disk level encryption, advanced cryptographic algorithms used by various Azure AD components, data operational considerations for insider access, and more. Detailed information is available from a whitepaper [Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
Azure AD implements extensive **data protection features**, including tenant iso
Personal data is defined broadly. It includes not just customer data but also unique personal identifiers such as Probably Unique Identifier (PUID) and Globally Unique Identifier (GUID), the latter being often labeled as Universally Unique Identifier (UUID). These unique personal identifiers are *pseudonymous identifiers*. This type of information is generated automatically to track users who interact directly with Azure services, such as your administrators. For example, PUID is a random string generated programmatically via a combination of characters and digits to provide a high probability of uniqueness. Pseudonymous identifiers are stored in centralized internal Azure systems.
-Whereas EUII represents data that could be used on its own to identify a user (for example, user name, display name, user principal name, or even user-specific IP address), pseudonymous identifiers are considered pseudonymous because they cannot identify an individual on their own. Pseudonymous identifiers do not contain any information that you uploaded or created.
+Whereas EUII represents data that could be used on its own to identify a user (for example, user name, display name, user principal name, or even user-specific IP address), pseudonymous identifiers are considered pseudonymous because they can't identify an individual on their own. Pseudonymous identifiers don't contain any information that you uploaded or created.
### Data in transit
-**While you cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.**
+**While you can't control the precise network path for data in transit, data encryption in transit helps protect data from interception.**
Data in transit applies to the following scenarios involving data traveling between:
Data in transit applies to the following scenarios involving data traveling betw
- Your on-premises datacenter and Azure region - Microsoft datacenters as part of expected Azure service operation
-While data in transit between two points within your chosen Geography will typically remain in that Geography, it is not possible to guarantee this outcome 100% of the time because of the way networks automatically reroute traffic to avoid congestion or bypass other interruptions. That said, data in transit can be protected through encryption as detailed below and in *[Data encryption in transit](#data-encryption-in-transit)* section.
+While data in transit between two points within your chosen Geography will typically remain in that Geography, it isn't possible to guarantee this outcome 100% of the time because of the way networks automatically reroute traffic to avoid congestion or bypass other interruptions. That said, data in transit can be protected through encryption as detailed below and in *[Data encryption in transit](#data-encryption-in-transit)* section.
#### *Your end users connection to Azure service*
-Most customers will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft does not control or limit the regions from which you or your end users may access or move customer data. You can increase security by enabling encryption in transit. For example, you can use [Azure Application Gateway](../application-gateway/application-gateway-end-to-end-ssl-powershell.md) to configure end-to-end encryption of traffic. As described in *[Data encryption in transit](#data-encryption-in-transit)* section, Azure uses the Transport Layer Security (TLS) protocol to help protect data when it is traveling between you and Azure services. However, Microsoft cannot control network traffic paths corresponding to your end-user interaction with Azure.
+Most customers will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Products and Services [Data Protection Addendum](https://aka.ms/dpa) (DPA), Microsoft doesn't control or limit the regions from which you or your end users may access or move customer data. You can increase security by enabling encryption in transit. For example, you can use [Azure Application Gateway](../application-gateway/application-gateway-end-to-end-ssl-powershell.md) to configure end-to-end encryption of traffic. As described in *[Data encryption in transit](#data-encryption-in-transit)* section, Azure uses the Transport Layer Security (TLS) protocol to help protect data when it's traveling between you and Azure services. However, Microsoft can't control network traffic paths corresponding to your end-user interaction with Azure.
#### *Your datacenter connection to Azure region* [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure virtual machines (VMs) to act as part of your internal (on-premises) network. You have options to securely connect to a VNet from your on-premises infrastructure ΓÇô choose an [IPSec protected VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) (for example, point-to-site VPN or site-to-site VPN) or a private connection by using Azure [ExpressRoute](../expressroute/expressroute-introduction.md) with several [data encryption options](../expressroute/expressroute-about-encryption.md). - **IPSec protected VPN** uses an encrypted tunnel established across the public Internet, which means that you need to rely on the local Internet service providers for any network-related assurances.-- **ExpressRoute** allows you to create private connections between Microsoft datacenters and your on-premises infrastructure or colocation facility. ExpressRoute connections do not go over the public Internet and offer lower latency and higher reliability than IPSec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to MicrosoftΓÇÖs global network backbone and they may or may not match the location of Azure regions. For example, you can connect to Microsoft in Amsterdam through ExpressRoute and have access to all Azure cloud services hosted in Northern and Western Europe. However, itΓÇÖs also possible to have access to the same Azure regions from ExpressRoute connections located elsewhere in the world. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet.
+- **ExpressRoute** allows you to create private connections between Microsoft datacenters and your on-premises infrastructure or colocation facility. ExpressRoute connections don't go over the public Internet and offer lower latency and higher reliability than IPSec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. For example, you can connect to Microsoft in Amsterdam through ExpressRoute and have access to all Azure cloud services hosted in Northern and Western Europe. However, it's also possible to have access to the same Azure regions from ExpressRoute connections located elsewhere in the world. Once the network traffic enters the Microsoft backbone, it's guaranteed to traverse that private networking infrastructure instead of the public Internet.
#### *Traffic across Microsoft global network backbone*

As described in the *[Data at rest](#data-at-rest)* section, Azure services such as Storage and SQL Database can be configured for geo-replication to help ensure durability and high availability, especially for disaster recovery scenarios. Azure relies on [paired regions](../availability-zones/cross-region-replication-azure.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS), and paired regions are also recommended when configuring active [geo-replication](../azure-sql/database/active-geo-replication-overview.md) for Azure SQL Database. Paired regions are located within the same Geography.
-Inter-region traffic is encrypted using [Media Access Control Security](https://1.ieee802.org/security/802-1ae/) (MACsec), which protects network traffic at the data link layer (Layer 2 of the networking stack) and relies on AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 200,000 km of lit fiber optic and undersea cable systems. However, network traffic is not guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around congestion or failures for optimal reliability. Therefore, Microsoft cannot guarantee that network traffic traversing between Azure regions will always be confined to the corresponding Geography. In networking infrastructure disruptions, Microsoft can reroute the encrypted network traffic across its private backbone to ensure service availability and best possible performance.
+Inter-region traffic is encrypted using [Media Access Control Security](https://1.ieee802.org/security/802-1ae/) (MACsec), which protects network traffic at the data link layer (Layer 2 of the networking stack) and relies on the AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 200,000 km of lit fiber optic and undersea cable systems. However, network traffic isn't guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around congestion or failures for optimal reliability. Therefore, Microsoft can't guarantee that network traffic traversing between Azure regions will always be confined to the corresponding Geography. In the event of networking infrastructure disruptions, Microsoft can reroute the encrypted network traffic across its private backbone to ensure service availability and best possible performance.
### Data for customer support and troubleshooting

**Azure is a 24x7 globally operated service; however, support and troubleshooting rarely requires access to your data. If you want extra control over support and troubleshooting scenarios, you can use Customer Lockbox for Azure to approve or deny access requests to your data.**
-Microsoft [Azure support](https://azure.microsoft.com/support/options/) is available in markets where Azure is offered. It is staffed globally to accommodate 24x7 access to support engineers via email and phone for technical support. You can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. As needed, frontline support engineers can escalate your requests to Azure DevOps personnel responsible for Azure service development and operations. These Azure DevOps engineers are also staffed globally. The same production access controls and processes are imposed on all Microsoft engineers, which include support staff comprised of both Microsoft full-time employees and subprocessors/vendors.
+Microsoft [Azure support](https://azure.microsoft.com/support/options/) is available in markets where Azure is offered. It's staffed globally to accommodate 24x7 access to support engineers via email and phone for technical support. You can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. As needed, frontline support engineers can escalate your requests to Azure DevOps personnel responsible for Azure service development and operations. These Azure DevOps engineers are also staffed globally. The same production access controls and processes are imposed on all Microsoft engineers, including support staff composed of both Microsoft full-time employees and subprocessors/vendors.
-As explained in *[Data encryption at rest](#data-encryption-at-rest)* section, **your data is encrypted at rest** by default when stored in Azure and you can control your own encryption keys in Azure Key Vault. Moreover, access to your data is not needed to resolve most customer support requests. Microsoft engineers rely heavily on logs to provide customer support. As described in *[Insider data access](#insider-data-access)* section, Azure has controls in place to restrict access to your data for support and troubleshooting scenarios should that access be necessary. For example, **Just-in-Time (JIT)** access provisions restrict access to production systems to Microsoft engineers who are authorized to be in that role and were granted temporary access credentials. As part of the support workflow, **Customer Lockbox** puts you in charge of approving or denying access to your data by Microsoft engineers. When combined, these Azure technologies and processes (data encryption, JIT, and Customer Lockbox) provide appropriate risk mitigation to safeguard confidentiality and integrity of your data.
+As explained in the *[Data encryption at rest](#data-encryption-at-rest)* section, **your data is encrypted at rest** by default when stored in Azure and you can control your own encryption keys in Azure Key Vault. Moreover, access to your data isn't needed to resolve most customer support requests. Microsoft engineers rely heavily on logs to provide customer support. As described in the *[Insider data access](#insider-data-access)* section, Azure has controls in place to restrict access to your data for support and troubleshooting scenarios should that access be necessary. For example, **Just-in-Time (JIT)** access provisions restrict access to production systems to Microsoft engineers who are authorized to be in that role and were granted temporary access credentials. As part of the support workflow, **Customer Lockbox** puts you in charge of approving or denying access to your data by Microsoft engineers. When combined, these Azure technologies and processes (data encryption, JIT, and Customer Lockbox) provide appropriate risk mitigation to safeguard confidentiality and integrity of your data.
Government customers worldwide expect to be fully in control of protecting their data in the cloud. As described in the next section, Azure provides extensive options for data encryption through its entire lifecycle (at rest, in transit, and in use), including customer control of encryption keys.

## Data encryption
-Azure has extensive support to safeguard your data using [data encryption](../security/fundamentals/encryption-overview.md). **If you require extra security for your most sensitive customer content stored in Azure services, you can encrypt it using your own encryption keys that you control in Azure Key Vault. While you cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.** Azure supports the following data encryption models:
+Azure has extensive support to safeguard your data using [data encryption](../security/fundamentals/encryption-overview.md). **If you require extra security for your most sensitive customer content stored in Azure services, you can encrypt it using your own encryption keys that you control in Azure Key Vault. While you can't control the precise network path for data in transit, data encryption in transit helps protect data from interception.** Azure supports the following data encryption models:
- Server-side encryption that uses service-managed keys, customer-managed keys (CMK) in Azure, or CMK in customer-controlled hardware.
- Client-side encryption that enables you to manage and store keys on-premises or in another secure location.
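Both models follow the envelope-encryption pattern: a data encryption key protects the content, and a key-encryption key under your control wraps that key. The Python sketch below illustrates only the key hierarchy; the HMAC-SHA256 counter keystream stands in for a real cipher (Azure services use AES) and must not be treated as production cryptography:

```python
import hashlib
import hmac
import secrets

# Toy envelope-encryption sketch: a per-object data encryption key (DEK)
# protects the data, and a customer-held key-encryption key (KEK) wraps the
# DEK. The HMAC-based keystream below is illustrative only, not real crypto.

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, stream))

def open_(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

kek = secrets.token_bytes(32)        # customer-controlled key (stays with you)
dek = secrets.token_bytes(32)        # per-object data encryption key
n1, wrapped_dek = seal(kek, dek)     # only the wrapped DEK travels with the data
n2, blob = seal(dek, b"sensitive customer content")

# Unwrap the DEK with the KEK, then decrypt the data.
recovered = open_(open_(kek, n1, wrapped_dek), n2, blob)
print(recovered)                     # b'sensitive customer content'
```

Rotating the KEK then only requires re-wrapping each DEK, not re-encrypting the data itself, which is why the hierarchy matters for customer-managed keys.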
Data encryption provides isolation assurances that are tied directly to encrypti
Proper protection and management of encryption keys is essential for data security. **[Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets.** The Key Vault service supports two resource types:

-- **Vault** supports software-protected and hardware security module (HSM)-protected secrets, keys, and certificates.
-- **Managed HSM** supports only HSM-protected cryptographic keys.
+- **[Vault](../key-vault/general/overview.md)** supports software-protected and hardware security module (HSM)-protected [secrets, keys, and certificates](../key-vault/general/about-keys-secrets-certificates.md). Vaults provide a multi-tenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. The corresponding HSMs have [FIPS 140 Level 2](/azure/compliance/offerings/offering-fips-140-2) validation.
+- **[Managed HSM](../key-vault/managed-hsm/overview.md)** supports only HSM-protected cryptographic keys. It provides a single-tenant, fully managed, highly available, zone-resilient (where available) HSM as a service to store and manage your cryptographic keys. It's most suitable for applications and usage scenarios that handle high value keys. It also helps you meet the most stringent security, compliance, and regulatory requirements. Managed HSM uses [FIPS 140 Level 3](/azure/compliance/offerings/offering-fips-140-2) validated HSMs to protect your cryptographic keys.
-Key Vault enables you to store your encryption keys in hardware security modules (HSMs) that are FIPS 140 validated. With Azure Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. **Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM.
+Key Vault enables you to store your encryption keys in hardware security modules (HSMs) that are FIPS 140 validated. With Azure Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. **Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM.
-**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your cryptographic keys.**
+> [!NOTE]
+> Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.
For more information, see [Azure Key Vault](./azure-secure-isolation-guidance.md#azure-key-vault).
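The non-exportability guarantee can be pictured as an interface that exposes key operations but no accessor for the key material. The Python sketch below is purely conceptual: name mangling is not a security boundary, whereas in Key Vault the boundary is the HSM hardware itself:

```python
import hashlib
import hmac
import secrets

# Conceptual sketch (not Key Vault's implementation) of a non-exportable key:
# the holder exposes cryptographic operations but no getter for the raw key,
# mirroring how HSM-protected keys can sign and decrypt without the key
# material ever leaving the HSM boundary. Python name mangling is only a
# stand-in for the hardware enforcement a real HSM provides.

class NonExportableKey:
    def __init__(self) -> None:
        self.__material = secrets.token_bytes(32)  # no accessor exposed

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self.__material, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), tag)

key = NonExportableKey()
tag = key.sign(b"deployment manifest")
print(key.verify(b"deployment manifest", tag))   # True
print(hasattr(key, "material"))                  # False: no way to read the key
```

Callers can request signatures and verifications for as long as the key object exists, yet never observe a clear-text copy of the key, which is the property the HSM enforces in hardware.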
Azure provides many options for [encrypting data in transit](../security/fundame
Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help you safeguard your data and meet your compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
-Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under your control in [Azure Key Vault](../key-vault/general/security-features.md). Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables you to store the TDE Protector in Key Vault and control key management tasks including key permissions, rotation, deletion, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). You can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing you to encrypt data inside your applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
+Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. The Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It's secured via a certificate stored in the master database of the server or an asymmetric key called the TDE Protector, stored under your control in [Azure Key Vault](../key-vault/general/security-features.md). Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables you to store the TDE Protector in Key Vault and control key management tasks including key permissions, rotation, deletion, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). You can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing you to encrypt data inside your applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
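To picture the trust split that Always Encrypted aims for, the toy sketch below (hypothetical, not the actual SQL Server protocol or its ciphersuite) has the client derive deterministic tokens from sensitive values, so a server can answer equality lookups without ever holding keys or plaintext:

```python
import hashlib
import hmac

# Toy illustration of the Always Encrypted trust split: the client holds the
# column key and sends only deterministic tokens to the server, so the
# database side can match on equality without seeing plaintext or keys.
# This HMAC-token scheme is illustrative only, not the real SQL protocol.

column_key = b"client-held column encryption key"  # hypothetical key value

def token(value: str) -> str:
    return hmac.new(column_key, value.encode(), hashlib.sha256).hexdigest()

# "Server side": stores tokens mapped to row identifiers, never plaintext.
stored = {token("555-12-9999"): "row-17"}

# The client issues an equality lookup by tokenizing the query value.
print(stored.get(token("555-12-9999")))   # row-17
print(stored.get(token("555-00-0000")))   # None
```

A database administrator who dumps `stored` sees only opaque tokens, which mirrors the separation between users who own the data and users who merely manage it.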
### Data encryption in use
-Microsoft enables you to protect your data throughout its entire lifecycle: at rest, in transit, and in use. Azure confidential computing and Homomorphic encryption are two techniques that safeguard your data while it is processed in the cloud.
+Microsoft enables you to protect your data throughout its entire lifecycle: at rest, in transit, and in use. Azure confidential computing and homomorphic encryption are two techniques that safeguard your data while it's processed in the cloud.
#### *Azure confidential computing*
-[Azure confidential computing](https://azure.microsoft.com/solutions/confidential-compute/) is a set of data security capabilities that offers encryption of data while in use. This approach means that data can be processed in the cloud with the assurance that it is always under your control. Azure confidential computing supports two different technologies for data encryption while in use.
+[Azure confidential computing](../confidential-computing/index.yml) is a set of data security capabilities that offers encryption of data while in use. This approach means that data can be processed in the cloud with the assurance that it's always under your control, even when data is in use while in memory during computations. Azure confidential computing supports different virtual machines for IaaS workloads:
-First, you can choose Azure VMs based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology that supports confidentiality in a granular manner down to the application level. With this approach, when data is in the clear, which is needed for efficient data processing in memory, the data is protected inside a hardware-based [trusted execution environment](../confidential-computing/overview.md) (TEE, also known as an enclave), as depicted in Figure 1. Intel SGX isolates a portion of physical memory to create an enclave where select code and data are protected from viewing or modification. TEE helps ensure that only the application designer has access to TEE data; access is denied to everyone else including Azure administrators. Moreover, TEE helps ensure that only authorized code is permitted to access data. If the code is altered or tampered with, the operations are denied, and the environment is disabled.
+- **Trusted launch VMs:** [Trusted launch](../virtual-machines/trusted-launch.md) is available across [generation 2 VMs](../virtual-machines/generation-2.md), bringing hardened security features – secure boot, virtual trusted platform module, and boot integrity monitoring – that protect against boot kits, rootkits, and kernel-level malware.
+- **Confidential VMs with AMD SEV-SNP technology:** You can choose Azure VMs based on AMD EPYC 7003 series CPUs to lift and shift applications without requiring any code changes. These AMD EPYC CPUs use AMD [Secure Encrypted Virtualization – Secure Nested Paging](https://www.amd.com/en/processors/amd-secure-encrypted-virtualization) (SEV-SNP) technology to encrypt your entire virtual machine at runtime. The encryption keys used for VM encryption are generated and safeguarded by a dedicated secure processor on the EPYC CPU and can't be extracted by any external means. These Azure VMs are currently in Preview and available to select customers. For more information, see [Azure and AMD announce landmark in confidential computing evolution](https://azure.microsoft.com/blog/azure-and-amd-enable-lift-and-shift-confidential-computing/).
-**Figure 1.** Trusted execution environment protection
+- **Confidential VMs with Intel SGX application enclaves:** You can choose Azure VMs based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (Intel SGX) technology that supports confidentiality in a granular manner down to the application level. With this approach, when data is in the clear, which is needed for efficient data processing in memory, the data is protected inside a hardware-based [trusted execution environment](../confidential-computing/overview.md) (TEE, also known as an enclave), as depicted in Figure 1. Intel SGX isolates a portion of physical memory to create an enclave where select code and data are protected from viewing or modification. TEE helps ensure that only the application designer has access to TEE data – access is denied to everyone else including Azure administrators. Moreover, TEE helps ensure that only authorized code is permitted to access data. If the code is altered or tampered with, the operations are denied, and the environment is disabled.
-Azure [DCsv2-series virtual machines](../virtual-machines/dcv2-series.md) have the latest generation of Intel Xeon processors with Intel SGX technology. The protection offered by Intel SGX, when used appropriately by application developers, can prevent compromise due to attacks from privileged software and many hardware-based attacks. An application using Intel SGX needs to be refactored into trusted and untrusted components. The untrusted part of the application sets up the enclave, which then allows the trusted part to run inside the enclave. No other code, irrespective of the privilege level, has access to the code executing within the enclave or the data associated with enclave code. Design best practices call for the trusted partition to contain just the minimum amount of content required to protect customer's secrets. For more information, see [Application development on Intel SGX](../confidential-computing/application-development.md).
+ :::image type="content" source="./media/wwps-hardware-backed-enclave.png" alt-text="Trusted execution environment protection" border="false":::
-Second, you can choose Azure VMs based on AMD EPYC 3rd Generation CPUs to lift and shift applications without requiring any code changes. These AMD EPYC CPUs make it possible to encrypt your entire virtual machine at runtime. The encryption keys used for VM encryption are generated and safeguarded by a dedicated secure processor on the EPYC CPU and cannot be extracted by any external means. These Azure VMs are currently in Preview.
+ **Figure 1.** Trusted execution environment protection
+
+ Azure DCsv2, DCsv3, and DCdsv3 series virtual machines have the latest generation of Intel Xeon processors with Intel SGX technology. For more information, see [Build with SGX enclaves](../confidential-computing/confidential-computing-enclaves.md). The protection offered by Intel SGX, when used appropriately by application developers, can prevent compromise due to attacks from privileged software and many hardware-based attacks. An application using Intel SGX needs to be refactored into trusted and untrusted components. The untrusted part of the application sets up the enclave, which then allows the trusted part to run inside the enclave. No other code, irrespective of the privilege level, has access to the code executing within the enclave or the data associated with enclave code. Design best practices call for the trusted partition to contain just the minimum amount of content required to protect the customer's secrets. For more information, see [Application development on Intel SGX](../confidential-computing/application-development.md).
+
+Technologies like [Intel Software Guard Extensions](https://software.intel.com/sgx) (Intel SGX) or [AMD Secure Encrypted Virtualization – Secure Nested Paging](https://www.amd.com/en/processors/amd-secure-encrypted-virtualization) (SEV-SNP) are recent CPU improvements supporting confidential computing implementations. These technologies are designed as virtualization extensions and provide feature sets including memory encryption and integrity, CPU-state confidentiality and integrity, and attestation. Azure provides extra [confidential computing offerings](../confidential-computing/overview-azure-products.md#azure-offerings) that are generally available or available in preview:
+
+- **[Microsoft Azure Attestation](../attestation/overview.md)** – A remote attestation service for validating the trustworthiness of multiple Trusted Execution Environments (TEEs) and verifying the integrity of the binaries running inside the TEEs.
+- **[Azure Confidential Ledger](../confidential-ledger/overview.md)** – A tamper-proof register for storing sensitive data for record keeping and auditing or for data transparency in multi-party scenarios. It offers Write-Once-Read-Many guarantees, which make data non-erasable and non-modifiable. The service is built on Microsoft Research's Confidential Consortium Framework.
+- **[Enclave aware containers](../confidential-computing/enclave-aware-containers.md)** running on Azure Kubernetes Service (AKS) – Confidential computing nodes on AKS use Intel SGX to create isolated enclave environments in the nodes between each container application.
+- **[Always Encrypted with secure enclaves in Azure SQL](/sql/relational-databases/security/encryption/always-encrypted-enclaves)** – The confidentiality of sensitive data is protected from malware and high-privileged unauthorized users by running SQL queries directly inside a TEE when the SQL statement contains operations on encrypted data that require the use of the secure enclave where the database engine runs.
+- **[Confidential computing at the edge](../iot-edge/deploy-confidential-applications.md)** – Azure IoT Edge supports confidential applications that run within secure enclaves on an Internet of Things (IoT) device. IoT devices are often exposed to tampering and forgery because they're physically accessible by bad actors. Confidential IoT Edge devices add trust and integrity at the edge by protecting the access to data captured by and stored inside the device itself before streaming it to the cloud.
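Several of these offerings rest on remote attestation. At its core sits a measurement check: the verifier compares a hash of the code the TEE reports against an expected value and releases secrets only on a match. The Python sketch below shows that comparison alone; real attestation (for example, Microsoft Azure Attestation) also verifies hardware-rooted signatures over the quote:

```python
import hashlib

# Simplified sketch of the measurement check behind remote attestation: the
# verifier compares the reported hash ("measurement") of the code running in
# the TEE with the value it expects for a trusted build. Signature checks on
# the hardware-signed quote are omitted from this illustration.

def measure(enclave_binary: bytes) -> str:
    return hashlib.sha256(enclave_binary).hexdigest()

def verify_quote(reported_measurement: str, expected_measurement: str) -> bool:
    return reported_measurement == expected_measurement

trusted_build = b"enclave code v1.0"          # hypothetical release artifact
expected = measure(trusted_build)

print(verify_quote(measure(b"enclave code v1.0"), expected))   # True
print(verify_quote(measure(b"tampered enclave"), expected))    # False
```

Because any change to the binary changes its measurement, a tampered enclave fails verification and never receives the secrets it requests.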
Based on customer feedback, Microsoft has started to invest in higher-level [scenarios for Azure confidential computing](../confidential-computing/use-cases-scenarios.md). You can review the scenario recommendations as a starting point for developing your own applications using confidential computing services and frameworks.
Data encryption in the cloud is an important risk mitigation requirement expecte
## Insider data access
-Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to your systems and data. Microsoft provides strong [customer commitments](https://www.microsoft.com/trust-center/privacy/data-access) regarding who can access your data and on what terms. Access to your data by Microsoft operations and support personnel is **denied by default**. Access to your data is not needed to operate Azure. Moreover, for most support scenarios involving customer-initiated troubleshooting tickets, access to your data is not needed.
+Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to your systems and data. Microsoft provides strong [customer commitments](https://www.microsoft.com/trust-center/privacy/data-access) regarding who can access your data and on what terms. Access to your data by Microsoft operations and support personnel is **denied by default**. Access to your data isn't needed to operate Azure. Moreover, for most support scenarios involving customer-initiated troubleshooting tickets, access to your data isn't needed.
No default access rights and Just-in-Time (JIT) access provisions reduce greatly the risks associated with traditional on-premises administrator elevated access rights that typically persist throughout the duration of employment. Microsoft makes it considerably more difficult for malicious insiders to tamper with your applications and data. The same access control restrictions and processes are imposed on all Microsoft engineers, including both full-time employees and subprocessors/vendors.
For more information on how Microsoft restricts insider access to your data, see
Government requests for your data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Microsoft takes strong measures to help protect your data from inappropriate access or use by unauthorized persons. These measures include restricting access by Microsoft personnel and subcontractors and carefully defining requirements for responding to government requests for your data. Microsoft ensures that there are no back-door channels and no direct or unfettered government access to your data. Microsoft imposes special requirements for government and law enforcement requests for your data.
-As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft will not disclose your data to law enforcement unless required by law. If law enforcement contacts Microsoft with a demand for your data, Microsoft will attempt to redirect the law enforcement agency to request that data directly from you. If compelled to disclose your data to law enforcement, Microsoft will promptly notify you and provide a copy of the demand unless legally prohibited from doing so.
+As stated in the Microsoft Products and Services [Data Protection Addendum](https://aka.ms/dpa) (DPA), Microsoft won't disclose your data to law enforcement unless required by law. If law enforcement contacts Microsoft with a demand for your data, Microsoft will attempt to redirect the law enforcement agency to request that data directly from you. If compelled to disclose your data to law enforcement, Microsoft will promptly notify you and provide a copy of the demand unless legally prohibited from doing so.
Government requests for your data must comply with applicable laws.

- A subpoena or its local equivalent is required to request non-content data.
- A warrant, court order, or its local equivalent is required for content data.
-Every year, Microsoft rejects many law enforcement requests for your data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court.
+Every year, Microsoft rejects many law enforcement requests for your data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it's unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court.
Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data.
Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-respons
The [CLOUD Act](https://www.congress.gov/bill/115th-congress/house-bill/4943) is a United States law that was enacted in March 2018. For more information, see Microsoft's [blog post](https://blogs.microsoft.com/on-the-issues/2018/04/03/the-cloud-act-is-an-important-step-forward-but-now-more-steps-need-to-follow/) and the [follow-up blog post](https://blogs.microsoft.com/on-the-issues/2018/09/11/a-call-for-principle-based-international-agreements-to-govern-law-enforcement-access-to-data/) that describes Microsoft's call for principle-based international agreements governing law enforcement access to data. Key points of interest to government customers procuring Azure services are captured below.

- The CLOUD Act enables governments to negotiate new government-to-government agreements that will result in greater transparency and certainty for how information is disclosed to law enforcement agencies across international borders.
-- The CLOUD Act is not a mechanism for greater government surveillance; it is a mechanism toward ensuring that your data is ultimately protected by the laws of your home country/region while continuing to facilitate lawful access to evidence for legitimate criminal investigations. Law enforcement in the US still needs to obtain a warrant demonstrating probable cause of a crime from an independent court before seeking the contents of communications. The CLOUD Act requires similar protections for other countries seeking bilateral agreements.
-- While the CLOUD Act creates new rights under new international agreements, it also preserves the common law right of cloud service providers to go to court to challenge search warrants when there is a conflict of laws – even without these new treaties in place.
+- The CLOUD Act isn't a mechanism for greater government surveillance; it's a mechanism toward ensuring that your data is ultimately protected by the laws of your home country/region while continuing to facilitate lawful access to evidence for legitimate criminal investigations. Law enforcement in the US still needs to obtain a warrant demonstrating probable cause of a crime from an independent court before seeking the contents of communications. The CLOUD Act requires similar protections for other countries seeking bilateral agreements.
+- While the CLOUD Act creates new rights under new international agreements, it also preserves the common law right of cloud service providers to go to court to challenge search warrants when there's a conflict of laws ΓÇô even without these new treaties in place.
- Microsoft retains the legal right to object to a law enforcement order in the United States where the order clearly conflicts with the laws of the country/region where your data is hosted. Microsoft will continue to carefully evaluate every law enforcement request and exercise its rights to protect customers where appropriate.
- For legitimate enterprise customers, US law enforcement will, in most instances, now go directly to customers rather than to Microsoft for information requests.
**Microsoft doesn't disclose extra data as a result of the CLOUD Act**. This law doesn't practically change any of the legal and privacy protections that previously applied to law enforcement requests for data, and those protections continue to apply. Microsoft adheres to the same principles and customer commitments related to government demands for user data.
Most government customers have requirements in place for handling security incidents, including data breach notifications. Microsoft has a mature security and privacy incident management process in place that is described in the next section.
**Microsoft will notify you of any breach of your data (customer or personal) within 72 hours of incident declaration. You can monitor potential threats and respond to incidents on your own using Microsoft Defender for Cloud.**
Microsoft is responsible for monitoring and remediating security and availability incidents affecting the Azure platform and notifying you of any security breaches involving your data. Azure has a mature security and privacy incident management process that is used for this purpose. You're responsible for monitoring your own resources provisioned in Azure, as described in the next section.
### Shared responsibility
The NIST [SP 800-145](https://csrc.nist.gov/publications/detail/sp/800-145/final) standard defines the following cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The [shared responsibility](../security/fundamentals/shared-responsibility.md) model for cloud computing is depicted in Figure 2. With on-premises deployment in your own datacenter, you assume the responsibility for all layers in the stack. As workloads get migrated to the cloud, Microsoft assumes progressively more responsibility depending on the cloud service model. For example, with the IaaS model, Microsoft's responsibility ends at the Hypervisor layer, and you're responsible for all layers above the virtualization layer, including maintaining the base operating system in guest Virtual Machines.
:::image type="content" source="./media/wwps-shared-responsibility.png" alt-text="Shared responsibility model in cloud computing" border="false":::

**Figure 2.** Shared responsibility model in cloud computing
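The division of responsibility shown in Figure 2 can be expressed as a simple lookup. The layer split below is a simplified, illustrative rendering of the model (the layer names and owner assignments are an approximation, not an authoritative list):

```python
# Simplified sketch of the shared responsibility model from Figure 2.
# The "customer"/"microsoft" owner per layer is illustrative only.
RESPONSIBILITY = {
    "on-premises": {"physical": "customer", "hypervisor": "customer",
                    "os": "customer", "runtime": "customer",
                    "application": "customer", "data": "customer"},
    "iaas": {"physical": "microsoft", "hypervisor": "microsoft",
             "os": "customer", "runtime": "customer",
             "application": "customer", "data": "customer"},
    "paas": {"physical": "microsoft", "hypervisor": "microsoft",
             "os": "microsoft", "runtime": "microsoft",
             "application": "customer", "data": "customer"},
    "saas": {"physical": "microsoft", "hypervisor": "microsoft",
             "os": "microsoft", "runtime": "microsoft",
             "application": "microsoft", "data": "customer"},
}

def customer_layers(model: str) -> list:
    """Layers the customer remains responsible for under a given model."""
    return [layer for layer, owner in RESPONSIBILITY[model].items()
            if owner == "customer"]

# With IaaS, Microsoft's responsibility ends at the hypervisor; the
# customer owns the guest OS and everything above it.
print(customer_layers("iaas"))  # ['os', 'runtime', 'application', 'data']
# Even with SaaS, the customer always retains responsibility for data.
print(customer_layers("saas"))  # ['data']
```

The key takeaway the sketch encodes: responsibility for your data never transfers to Microsoft, regardless of service model.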
In line with the shared responsibility model, Microsoft doesn't inspect, approve, or monitor your individual applications deployed on Azure. For example, Microsoft doesn't know what firewall ports need to be open for your application to function correctly, what the back-end database schema looks like, what constitutes normal network traffic for the application, and so on. Microsoft has extensive monitoring infrastructure in place for the cloud platform; however, you're responsible for provisioning and monitoring your own resources in Azure. You can deploy a range of Azure services to monitor and safeguard your applications and data, as described in the next section.
### Essential Azure services for extra protection
Security incident response, including breach notification, is a subset of Microsoft's overall incident management process for Azure.
The goal of this process is to restore normal service operations and security as quickly as possible after an issue is detected, and an investigation started. Moreover, Microsoft enables you to investigate, manage, and respond to security incidents in your Azure subscriptions. For more information, see [Incident management implementation guidance: Azure and Office 365](https://servicetrust.microsoft.com/ViewPage/TrustDocumentsV3?command=Download&downloadType=Document&downloadId=a8a7cb87-9710-4d09-8748-0835b6754e95&tab=7f51cb60-3d6c-11e9-b2af-7bb9f5d2d913&docTab=7f51cb60-3d6c-11e9-b2af-7bb9f5d2d913_FAQ_and_White_Papers).
If during the investigation of a security or privacy event, Microsoft becomes aware that customer or personal data has been exposed or accessed by an unauthorized party, the security incident manager is required to trigger the incident notification subprocess in consultation with the Microsoft legal affairs division. This subprocess is designed to fulfill incident notification requirements stipulated in Azure customer contracts (see *Security Incident Notification* in the Microsoft Products and Services [Data Protection Addendum](https://aka.ms/DPA)). Customer notification and external reporting obligations (if any) are triggered by a security incident being declared. The customer notification subprocess begins in parallel with security incident investigation and mitigation phases to help minimize any impact resulting from the security incident.
Microsoft will notify you, Data Protection Authorities, and data subjects (each as applicable) of any breach of customer or personal data within 72 hours of incident declaration. **The notification process upon a declared security or privacy incident will occur as expeditiously as possible while still considering the security risks of proceeding quickly**. In practice, this approach means that most notifications will take place well before the 72-hour deadline to which Microsoft commits contractually. Notification of a security or privacy incident will be delivered to one or more of your administrators by any means Microsoft selects, including via email. You should [provide security contact details](../security-center/security-center-provide-security-contact-details.md) for your Azure subscription; this information will be used by Microsoft to contact you if the MSRC discovers that your data has been exposed or accessed by an unlawful or unauthorized party. To ensure that notification can be delivered successfully, it's your responsibility to maintain correct administrative contact information for each applicable subscription.
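The 72-hour contractual window is a simple deadline calculation from the incident declaration time. A minimal sketch (the function names are illustrative, not part of any Azure API):

```python
from datetime import datetime, timedelta, timezone

# Contractual window from the Data Protection Addendum: notification
# within 72 hours of incident declaration.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(declared_at: datetime) -> datetime:
    """Latest time by which breach notification must be delivered."""
    return declared_at + NOTIFICATION_WINDOW

def is_within_window(declared_at: datetime, notified_at: datetime) -> bool:
    """True if a notification met the 72-hour commitment."""
    return notified_at <= notification_deadline(declared_at)

declared = datetime(2022, 3, 7, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(declared))  # 2022-03-10 09:00:00+00:00
# Most notifications land well inside the window:
print(is_within_window(declared, declared + timedelta(hours=24)))  # True
```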
Most Azure security and privacy investigations don't result in declared security incidents. Most external threats don't lead to breaches of your data because of extensive platform security measures that Microsoft has in place. Microsoft has deployed extensive monitoring and diagnostics infrastructure throughout Azure that relies on big-data analytics and machine learning to get insight into the platform health, including real-time threat intelligence. While Microsoft takes all platform attacks seriously, it would be impractical to notify you of *potential* attacks at the platform level.
Aside from controls implemented by Microsoft to safeguard customer data, government customers deployed on Azure derive considerable benefits from security research that Microsoft conducts to protect the cloud platform. Microsoft global threat intelligence is one of the largest in the industry, and it's derived from one of the most diverse sets of threat telemetry sources. It's both the volume and diversity of threat telemetry that makes Microsoft machine learning algorithms applied to that telemetry so powerful. All Azure customers benefit directly from these investments as described in the next section.
## Threat detection and prevention
Azure Stack Hub and Azure Stack Edge represent key enabling technologies that allow you to bring Azure services to on-premises and edge environments.
### Azure Stack Hub
[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that you can purchase from Microsoft hardware partners, deploy in your own data center, and then operate entirely on your own or with the help from a managed service provider. With Azure Stack Hub, you're always fully in control of access to your data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling you to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. You can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes you use in Azure. Azure Stack Hub isn't dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity.
In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
Azure Stack Hub brings the following [value proposition for key scenarios](/azur
Azure Stack Hub requires Azure Active Directory (Azure AD) or Active Directory Federation Services (ADFS), backed by Active Directory as an [identity provider](/azure-stack/operator/azure-stack-identity-overview). You can use [role-based access control](/azure-stack/user/azure-stack-manage-permissions) (RBAC) to grant system access to authorized users, groups, and services by assigning them roles at a subscription, resource group, or individual resource level. Each role defines the access level a user, group, or service has over Azure Stack Hub resources.
Azure Stack Hub protects your data at the storage subsystem level using [encryption at rest](/azure-stack/operator/azure-stack-security-bitlocker). By default, Azure Stack Hub's storage subsystem is encrypted using BitLocker with 128-bit AES encryption. BitLocker keys are persisted in an internal secret store. At deployment time, it's also possible to configure BitLocker to use 256-bit AES encryption. You can store and manage your secrets including cryptographic keys using [Key Vault in Azure Stack Hub](/azure-stack/user/azure-stack-key-vault-intro).
### Azure Stack Edge
Listed below are key enabling technologies and services that you may find helpful:
- Use Azure Key Vault [Managed HSM](../key-vault/managed-hsm/overview.md), which provides a fully managed, highly available, single-tenant HSM as a service that uses FIPS 140 Level 3 validated HSMs. Each Managed HSM instance is bound to a separate security domain controlled by you and isolated cryptographically from instances belonging to other customers.
- [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place VMs directly into provisioned hosts using whatever configuration best meets your needs. Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organization's workloads to meet corporate compliance requirements.
- Accelerated FPGA networking based on [Azure SmartNICs](https://www.microsoft.com/research/publication/azure-accelerated-networking-smartnics-public-cloud/) enables you to offload host networking to dedicated hardware, enabling tunneling for VNets, security, and load balancing. Offloading network traffic to a dedicated chip guards against side-channel attacks on the main CPU.
- [Azure confidential computing](../confidential-computing/index.yml) offers encryption of data while in use, ensuring that data is always under your control. Data is protected inside a hardware-based trusted execution environment (TEE, also known as enclave) and there's no way to view data or operations from outside the enclave.
- [Just-in-time (JIT) virtual machine (VM) access](../security-center/security-center-just-in-time.md) can be used to lock down inbound traffic to Azure VMs by creating network security group (NSG) rules. You select ports on the VM to which inbound traffic will be locked down and when a user requests access to a VM, Microsoft Defender for Cloud checks that the user has proper role-based access control (RBAC) permissions.

To accommodate secret data in the Azure public multi-tenant cloud, you can deploy extra technologies and services on top of those technologies used for confidential data and limit provisioned services to those services that provide sufficient isolation. These services offer various isolation options at run time. They also support data encryption at rest using customer-managed keys in single-tenant HSMs controlled by you and isolated cryptographically from HSM instances belonging to other customers.
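The JIT VM access model described above (ports locked by default, an RBAC check on request, then a time-limited opening) can be sketched as plain logic. This is a conceptual simulation, not the Defender for Cloud API; the class, role name, and method names are all hypothetical:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

class JitPolicy:
    """Conceptual sketch of just-in-time VM access: an inbound port stays
    locked until an authorized user requests a time-limited opening."""

    def __init__(self, port: int, max_duration: timedelta):
        self.port = port                    # e.g. 3389 (RDP) or 22 (SSH)
        self.max_duration = max_duration
        self.open_until: Optional[datetime] = None

    def request_access(self, user_roles: set, now: datetime) -> bool:
        # Stand-in for the RBAC permission check performed on each request.
        if "SecurityAdmin" not in user_roles:
            return False
        self.open_until = now + self.max_duration
        return True

    def is_open(self, now: datetime) -> bool:
        return self.open_until is not None and now <= self.open_until

now = datetime.now(timezone.utc)
policy = JitPolicy(port=3389, max_duration=timedelta(hours=3))
print(policy.is_open(now))                            # False: locked by default
print(policy.request_access({"SecurityAdmin"}, now))  # True: RBAC check passed
print(policy.is_open(now + timedelta(hours=2)))       # True: inside the window
print(policy.is_open(now + timedelta(hours=4)))       # False: window expired
```

The design point is that exposure is bounded in time: after the requested window elapses, the port returns to its locked default without further action.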
Listed below are key enabling products that you may find helpful when deploying top secret data:
- In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
- User-provided hardware security modules (HSMs) allow you to store your encryption keys in HSMs deployed on-premises and controlled solely by you.
Accommodating top secret data will likely require a disconnected environment, which is what Azure Stack Hub provides. Azure Stack Hub can be [operated disconnected](/azure-stack/operator/azure-stack-disconnected-deployment) from Azure or the Internet. Even though "air-gapped" networks don't necessarily increase security, many governments may be reluctant to store data with this classification in an Internet connected environment.
Azure offers an unmatched variety of public, private, and hybrid cloud deployment models to address your concerns regarding the safeguarding of your data. The following section covers select use cases that might be of interest to worldwide government customers.
This section provides an overview of select use cases that showcase Azure capabilities relevant to worldwide government customers.
### Processing highly sensitive or regulated data on Azure Stack Hub
Microsoft provides Azure Stack Hub as an on-premises, cloud-consistent experience for customers who don't have the ability to connect directly to the Internet, or where certain workload types are required to be hosted in-country due to law, compliance, or sentiment. Azure Stack Hub offers IaaS and PaaS services and shares the same APIs as the global Azure cloud. Azure Stack Hub is available in scale units of 4, 8, and 16 servers in a single-server rack, and 4 servers in a military-specification, ruggedized set of transit cases, or multiple racks in a modular data center configuration.
Azure Stack Hub is a solution if you operate in scenarios where:

- For compliance reasons, you can't connect your network to the public Internet.
- For geo-political or security reasons, Microsoft can't offer connectivity to other Microsoft clouds.
- For geo-political or security reasons, the host organization may require cloud management by non-Microsoft entities, or in-country by security-cleared personnel.
- Microsoft doesn't have an in-country cloud presence and therefore can't meet data sovereignty requirements.
- Cloud management would pose significant risk to the physical well-being of Microsoft resources operating the environment.

For most of these scenarios, Microsoft and its partners offer a customer-managed, Azure Stack Hub-based private cloud appliance on field-deployable hardware from [major vendors](https://azure.microsoft.com/products/azure-stack/hub/#partners) such as Avanade, Cisco, Dell EMC, Hewlett Packard Enterprise, and Lenovo. Azure Stack Hub is manufactured, configured, and deployed by the hardware vendor, and can be ruggedized and security-hardened to meet a broad range of environmental and compliance standards, including the ability to withstand transport by aircraft, ship, or truck, and deployment into colocation, mobile, or modular data centers. Azure Stack Hub can be used in exploration, construction, agriculture, oil and gas, manufacturing, disaster response, government, and military efforts in hospitable or the most extreme conditions and remote locations. Azure Stack Hub allows you the full autonomy to monitor, manage, and provision your own private cloud resources while meeting your connectivity, compliance, and ruggedization requirements.
With innovative solutions such as [IoT Hub](https://azure.microsoft.com/services
### Precision Agriculture with Farm Beats
Agriculture plays a vital role in most economies worldwide. In the US, over 70% of the rural households depend on agriculture as it contributes about 17% to the total GDP and provides employment to over 60% of the population. In project [Farm Beats](https://www.microsoft.com/research/project/farmbeats-iot-agriculture/), we gather large volumes of data from farms that we couldn't get before, and then by applying AI and ML algorithms we're able to turn this data into actionable insights for farmers. We call this technique data-driven farming: the ability to map every farm and overlay it with data, for example, the soil moisture level 15 cm below the surface, the soil temperature 15 cm below the surface, and so on. These maps can then enable techniques such as Precision Agriculture, which has been shown to improve yield, reduce costs, and benefit the environment. Although Precision Agriculture as a technique was proposed more than 30 years ago, it hasn't taken off. The biggest reason is the inability to capture enough data from farms to accurately represent field conditions. Our goal as part of the Farm Beats project is to be able to accurately construct precision maps at a fraction of the cost.
### Unleashing the power of analytics with synthetic data
Synthetic data is data that is artificially created rather than being generated by actual events. It's often created with the help of computer algorithms and it's used for a wide range of activities, including as test data for new products and tools and for ML model validation and improvement. Synthetic data can meet specific needs or conditions that aren't available in existing real data. For governments, the nature of synthetic data removes many barriers and helps data scientists with privacy concerns, accelerated learning, and data volume reduction needed for the same outcome. The main benefits of synthetic data are:
- **Overcoming restrictions:** Real data may have usage constraints due to privacy rules or other regulations. Synthetic data can replicate all important statistical properties of real data without exposing real data.
- **Scarcity:** Providing data where real data doesn't exist for a given event.
- **Precision:** Synthetic data is perfectly labeled.
- **Quality:** The quality of synthetic data can be precisely measured to fit the mission conditions.
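As a toy illustration of "replicating statistical properties without exposing real data", the sketch below fits a mean and standard deviation from a hypothetical sensitive dataset and samples synthetic values from the same distribution. Real synthetic-data pipelines are far more sophisticated; this only shows the core idea:

```python
import random
import statistics

def synthesize(real_data: list, n: int, seed: int = 42) -> list:
    """Sample n synthetic values matching the mean/stdev of real_data."""
    mu = statistics.mean(real_data)
    sigma = statistics.stdev(real_data)
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Hypothetical sensitive sensor readings that can't be shared directly.
real = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
synthetic = synthesize(real, n=1000)

print(round(statistics.mean(real), 1))       # 12.1
print(round(statistics.mean(synthetic), 1))  # close to the real mean
```

The synthetic sample preserves the aggregate statistics an analyst needs while none of its individual values is a real record.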
For instance, captured data from the field often includes documents, pamphlets,
Security is a key driver accelerating the adoption of cloud computing, but it's also a major concern when customers are moving sensitive IP and data to the cloud.
Microsoft Azure provides broad capabilities to secure data at rest and in transit, but sometimes the requirement is also to protect data from threats as it's being processed. [Azure confidential computing](../confidential-computing/index.yml) supports two types of confidential VMs for data encryption while in use:
-- VMs that provide a hardware-based trusted execution environment (TEE, also known as enclave) based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology. The hardware provides a protected container by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so code and data are protected against viewing and modification from outside of TEE.
-- VMs based on AMD EPYC 3rd Generation CPUs for lift and shift scenarios without requiring any application code changes. These AMD EPYC CPUs make it possible to encrypt your entire virtual machine at runtime. The encryption keys used for VM encryption are generated and safeguarded by a dedicated secure processor on the EPYC CPU and cannot be extracted by any external means.
+- VMs based on AMD EPYC 7003 series CPUs for lift and shift scenarios without requiring any application code changes. These AMD EPYC CPUs use AMD [Secure Encrypted Virtualization – Secure Nested Paging](https://www.amd.com/en/processors/amd-secure-encrypted-virtualization) (SEV-SNP) technology to encrypt your entire virtual machine at runtime. The encryption keys used for VM encryption are generated and safeguarded by a dedicated secure processor on the EPYC CPU and can't be extracted by any external means.
+- VMs that provide a hardware-based trusted execution environment (TEE, also known as enclave) based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (Intel SGX) technology. The hardware provides a protected container by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so code and data are protected against viewing and modification from outside of TEE.
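For orientation, a confidential VM of the AMD SEV-SNP variety described above can be deployed with Azure CLI along the following lines. This is a sketch only: the resource group, VM name, image URN, and size are illustrative assumptions and should be checked against current availability in your region.

```shell
# Sketch only: creates an AMD SEV-SNP confidential VM (DCasv5-series).
# Resource names, image URN, and size below are hypothetical placeholders.
az vm create \
  --resource-group myResourceGroup \
  --name myConfidentialVM \
  --size Standard_DC4as_v5 \
  --image "Canonical:0001-com-ubuntu-confidential-vm-focal:20_04-lts-cvm:latest" \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type VMGuestStateOnly \
  --enable-secure-boot true \
  --enable-vtpm true \
  --admin-username azureuser \
  --generate-ssh-keys
```

For the Intel SGX option, the equivalent sketch would instead target a DCsv-series size without the ConfidentialVM security type, since SGX enclaves are created by application code rather than at the VM level.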
Azure confidential computing can directly address scenarios involving data protection while in use. For example, consider the scenario where data coming from a public or unclassified source needs to be matched with data from a highly sensitive source. Azure confidential computing can enable that matching to occur in the public cloud while protecting the highly sensitive data from disclosure. This circumstance is common in highly sensitive national security and law enforcement scenarios. A second scenario involves data coming from multiple sources that needs to be analyzed together, even though none of the sources has the authority to see the data. Each individual provider encrypts the data they provide, and the data is decrypted only within the TEE. As a result, neither an external party nor any individual provider can see the combined data set. This capability is valuable for secondary use of healthcare data.
-Customers deploying the types of workloads discussed in this section typically seek assurances from Microsoft that the underlying cloud platform security controls for which Microsoft is responsible are operating effectively. To address the needs of customers across regulated markets worldwide, Azure maintains a comprehensive compliance portfolio based on formal third-party certifications and other types of assurances to help you meet your own compliance obligations.
+If you're deploying the types of workloads discussed in this section, you may need assurances from Microsoft that the underlying cloud platform security controls for which Microsoft is responsible are operating effectively. To address the needs of customers across regulated markets worldwide, Azure maintains a comprehensive compliance portfolio based on formal third-party certifications and other types of assurances to help you meet your own compliance obligations.
## Compliance and certifications
This section addresses common customer questions related to Azure public, private, and hybrid cloud deployments.
### Data residency and data sovereignty

- **Data location:** How does Microsoft keep data within a specific country's boundaries? In what cases does data leave? What data attributes leave? **Answer:** Microsoft provides [strong customer commitments](https://azure.microsoft.com/global-infrastructure/data-residency/) regarding cloud services data residency and transfer policies:
- - **Data storage for regional
- - **Data storage for non-regional
-- **Air-gapped (sovereign) cloud deployment:** Why doesn't Microsoft deploy an air-gapped, sovereign, physically isolated cloud instance in every country? **Answer:** Microsoft is actively pursuing air-gapped cloud deployments where a business case can be made with governments across the world. However, physical isolation or "air gapping", as a strategy, is diametrically opposed to the strategy of hyperscale cloud. The value proposition of the cloud, rapid feature growth, resiliency, and cost-effective operation, are diminished when the cloud is fragmented and physically isolated. These strategic challenges compound with each extra air-gapped cloud or fragmentation within an air-gapped cloud. Whereas an air-gapped cloud might prove to be the right solution for certain customers, it is not the only available option.
+ - **Data storage for regional
+ - **Data storage for non-regional
+- **Air-gapped (sovereign) cloud deployment:** Why doesn't Microsoft deploy an air-gapped, sovereign, physically isolated cloud instance in every country? **Answer:** Microsoft is actively pursuing air-gapped cloud deployments where a business case can be made with governments across the world. However, physical isolation or "air gapping", as a strategy, is diametrically opposed to the strategy of hyperscale cloud. The value proposition of the cloud, rapid feature growth, resiliency, and cost-effective operation, are diminished when the cloud is fragmented and physically isolated. These strategic challenges compound with each extra air-gapped cloud or fragmentation within an air-gapped cloud. Whereas an air-gapped cloud might prove to be the right solution for certain customers, it isn't the only available option.
- **Air-gapped (sovereign) cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** You can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by your own security-cleared, in-country personnel. You can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes that you use in Azure. With Azure Stack Hub, you have sole control of your data, including storage, processing, transmission, and remote access.
-- **Local jurisdiction:** Is Microsoft subject to local country jurisdiction based on the availability of Azure public cloud service? **Answer:** Yes, Microsoft must comply with all applicable local laws; however, government requests for customer data must also comply with applicable laws. A subpoena or its local equivalent is required to request non-content data. A warrant, court order, or its local equivalent is required for content data. Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court.
Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data. For example, in the second half of 2019, Microsoft received 39 requests from law enforcement for accounts associated with enterprise cloud customers. Of those requests, only one warrant resulted in disclosure of customer content related to a non-US enterprise customer whose data was stored outside the United States.
+- **Local jurisdiction:** Is Microsoft subject to local country jurisdiction based on the availability of Azure public cloud service? **Answer:** Yes, Microsoft must comply with all applicable local laws; however, government requests for customer data must also comply with applicable laws. A subpoena or its local equivalent is required to request non-content data. A warrant, court order, or its local equivalent is required for content data. Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it's unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court. Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data. For example, in the second half of 2019, Microsoft received 39 requests from law enforcement for accounts associated with enterprise cloud customers. Of those requests, only one warrant resulted in disclosure of customer content related to a non-US enterprise customer whose data was stored outside the United States.
- **Autarky:** Can Microsoft cloud operations be separated from the rest of Microsoft cloud and connected solely to local government network? Are operations possible without external connections to a third party? **Answer:** Yes, depending on the cloud deployment model.
- - **Public Cloud:** Azure regional datacenters can be connected to your local government network through dedicated private connections such as ExpressRoute. Independent operation without any connectivity to a third party such as Microsoft is not possible in the public cloud.
+ - **Public Cloud:** Azure regional datacenters can be connected to your local government network through dedicated private connections such as ExpressRoute. Independent operation without any connectivity to a third party such as Microsoft isn't possible in the public cloud.
  - **Private Cloud:** With Azure Stack Hub, you have full control over network connectivity and can operate Azure Stack Hub in [disconnected mode](/azure-stack/operator/azure-stack-disconnected-deployment).
- **Data flow restrictions:** What provisions exist for approval and documentation of all data exchange between customer and Microsoft for local, in-country deployed cloud services? **Answer:** Options vary based on the cloud deployment model.
  - **Private cloud:** For private cloud deployment using Azure Stack Hub, you can control which data is exchanged with third parties. Azure Stack Hub telemetry can be turned off based on your preference and Azure Stack Hub can be operated disconnected. Moreover, Azure Stack Hub offers the [capacity-based billing model](https://azure.microsoft.com/pricing/details/azure-stack/hub/) in which no billing or consumption data leaves your on-premises infrastructure.
- - **Public cloud:** In Azure public cloud, you can use [Network Watcher](https://azure.microsoft.com/services/network-watcher/) to monitor network traffic associated with your workloads. For public cloud workloads, all billing data is generated through telemetry used exclusively for billing purposes and sent to Microsoft billing systems. You can [download and view](../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md) your billing and usage data; however, you cannot prevent this information from being sent to Microsoft.
-- **Patching and maintenance for private cloud:** How can Microsoft support patching and other maintenance for Azure Stack Hub private cloud deployment? **Answer:** Microsoft has a regular cadence in place for releasing [update packages for Azure Stack Hub](/azure-stack/operator/azure-stack-updates). You are the sole operator of Azure Stack Hub and you can download and install these update packages. An update alert for Microsoft software updates and hotfixes will appear in the Update blade for Azure Stack Hub instances that are connected to the Internet. If your instance isn't connected and you would like to be notified about each update release, subscribe to the RSS or ATOM feed, as explained in our online documentation.
+ - **Public cloud:** In Azure public cloud, you can use [Network Watcher](https://azure.microsoft.com/services/network-watcher/) to monitor network traffic associated with your workloads. For public cloud workloads, all billing data is generated through telemetry used exclusively for billing purposes and sent to Microsoft billing systems. You can [download and view](../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md) your billing and usage data; however, you can't prevent this information from being sent to Microsoft.
+- **Patching and maintenance for private cloud:** How can Microsoft support patching and other maintenance for Azure Stack Hub private cloud deployment? **Answer:** Microsoft has a regular cadence in place for releasing [update packages for Azure Stack Hub](/azure-stack/operator/azure-stack-updates). You're the sole operator of Azure Stack Hub and you can download and install these update packages. An update alert for Microsoft software updates and hotfixes will appear in the Update blade for Azure Stack Hub instances that are connected to the Internet. If your instance isn't connected and you would like to be notified about each update release, subscribe to the RSS or ATOM feed, as explained in our online documentation.
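As an illustration of the public-cloud monitoring option above, NSG flow logs can be enabled through Network Watcher with Azure CLI roughly as follows. This is a sketch only; the resource group, NSG, storage account names, and region are hypothetical placeholders.

```shell
# Sketch only: enables NSG flow logging via Network Watcher so that
# network traffic associated with your workloads can be audited.
# All resource names and the region below are placeholders.
az network watcher flow-log create \
  --resource-group myResourceGroup \
  --location eastus \
  --name myFlowLog \
  --nsg myNetworkSecurityGroup \
  --storage-account myStorageAccount \
  --enabled true
```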
### Safeguarding of customer data

-- **Microsoft network security:** What network controls and security does Microsoft use? Can my requirements be considered? **Answer:** For insight into Azure infrastructure protection, you should review Azure [network architecture](../security/fundamentals/infrastructure-network.md), Azure [production network](../security/fundamentals/production-network.md), and Azure [infrastructure monitoring](../security/fundamentals/infrastructure-monitoring.md). If you are deploying Azure applications, you should review Azure [network security overview](../security/fundamentals/network-overview.md) and [network security best practices](../security/fundamentals/network-best-practices.md). To provide feedback or requirements, contact your Microsoft account representative.
-- **Customer separation:** How does Microsoft logically or physically separate customers within its cloud environment? Is there an option for my organization to ensure complete physical separation? **Answer:** Azure uses [logical isolation](./azure-secure-isolation-guidance.md) to separate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously enforcing controls designed to keep your data and applications off limits to other customers. There is also an option to enforce physical compute isolation via [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/), which provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place VMs directly into provisioned hosts using whatever configuration best meets your needs.
Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organization's workloads to meet corporate compliance requirements.
-- **Data encryption at rest and in transit:** Does Microsoft enforce data encryption by default? Does Microsoft support customer-managed encryption keys? **Answer:** Yes, many Azure services, including Azure Storage and Azure SQL Database, encrypt data by default and support customer-managed keys. Azure [Storage encryption for data at rest](../storage/common/storage-service-encryption.md) ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. You can use [your own encryption keys](../storage/common/customer-managed-keys-configure-key-vault.md) for Azure Storage encryption at rest and manage your keys in Azure Key Vault. Storage encryption is enabled by default for all new and existing storage accounts and it cannot be disabled. When provisioning storage accounts, you can enforce "[secure transfer required](../storage/common/storage-require-secure-transfer.md)" option, which allows access only from secure connections. This option is enabled by default when creating a storage account in the Azure portal. Azure SQL Database enforces [data encryption in transit](../azure-sql/database/security-overview.md#information-protection-and-encryption) by default and provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest [by default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/) allowing you to use Azure Key Vault and *[bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md)* (BYOK) functionality to control key management tasks including key permissions, rotation, deletion, and so on.
-- **Data encryption during processing:** Can Microsoft protect my data while it is being processed in memory? **Answer:** Yes, [Azure confidential computing](https://azure.microsoft.com/solutions/confidential-compute/) supports two different technologies for data encryption while in use. First, you can use VMs based on Intel Xeon processors with [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology. With this approach, data is protected inside a hardware-based trusted execution environment (TEE, also known as enclave), which is created by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so application code and data are protected against viewing and modification from outside of TEE. Second, you can use VMs based on AMD EPYC 3rd Generation CPUs for lift and shift scenarios without requiring any application code changes. These AMD EPYC CPUs make it possible to encrypt your entire virtual machine at runtime. The encryption keys used for VM encryption are generated and safeguarded by a dedicated secure processor on the EPYC CPU and cannot be extracted by any external means.
+- **Microsoft network security:** What network controls and security does Microsoft use? Can my requirements be considered? **Answer:** For insight into Azure infrastructure protection, you should review Azure [network architecture](../security/fundamentals/infrastructure-network.md), Azure [production network](../security/fundamentals/production-network.md), and Azure [infrastructure monitoring](../security/fundamentals/infrastructure-monitoring.md). If you're deploying Azure applications, you should review Azure [network security overview](../security/fundamentals/network-overview.md) and [network security best practices](../security/fundamentals/network-best-practices.md). To provide feedback or requirements, contact your Microsoft account representative.
+- **Customer separation:** How does Microsoft logically or physically separate customers within its cloud environment? Is there an option for my organization to ensure complete physical separation? **Answer:** Azure uses [logical isolation](./azure-secure-isolation-guidance.md) to separate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously enforcing controls designed to keep your data and applications off limits to other customers. There's also an option to enforce physical compute isolation via [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/), which provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place VMs directly into provisioned hosts using whatever configuration best meets your needs. Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organization's workloads to meet corporate compliance requirements.
+- **Data encryption at rest and in transit:** Does Microsoft enforce data encryption by default? Does Microsoft support customer-managed encryption keys? **Answer:** Yes, many Azure services, including Azure Storage and Azure SQL Database, encrypt data by default and support customer-managed keys. Azure [Storage encryption for data at rest](../storage/common/storage-service-encryption.md) ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. You can use [your own encryption keys](../storage/common/customer-managed-keys-configure-key-vault.md) for Azure Storage encryption at rest and manage your keys in Azure Key Vault. Storage encryption is enabled by default for all new and existing storage accounts and it can't be disabled. When provisioning storage accounts, you can enforce "[secure transfer required](../storage/common/storage-require-secure-transfer.md)" option, which allows access only from secure connections. This option is enabled by default when creating a storage account in the Azure portal. Azure SQL Database enforces [data encryption in transit](../azure-sql/database/security-overview.md#information-protection-and-encryption) by default and provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest [by default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/) allowing you to use Azure Key Vault and *[bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md)* (BYOK) functionality to control key management tasks including key permissions, rotation, deletion, and so on.
+- **Data encryption during processing:** Can Microsoft protect my data while it's being processed in memory? **Answer:** Yes, [Azure confidential computing](../confidential-computing/index.yml) supports two different technologies for data encryption while in use. First, you can use VMs based on Intel Xeon processors with [Intel Software Guard Extensions](https://software.intel.com/sgx) (Intel SGX) technology. With this approach, data is protected inside a hardware-based trusted execution environment (TEE, also known as enclave), which is created by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so application code and data are protected against viewing and modification from outside of TEE. Second, you can use VMs based on AMD EPYC 7003 series CPUs for lift and shift scenarios without requiring any application code changes. These AMD EPYC CPUs make it possible to encrypt your entire virtual machine at runtime. The encryption keys used for VM encryption are generated and safeguarded by a dedicated secure processor on the EPYC CPU and can't be extracted by any external means.
- **FIPS 140 validation:** Does Microsoft offer FIPS 140 Level 3 validated hardware security modules (HSMs) in Azure? If so, can I store AES-256 symmetric encryption keys in these HSMs? **Answer:** Azure Key Vault [Managed HSM](../key-vault/managed-hsm/overview.md) provides a fully managed, highly available, single-tenant HSM as a service that uses [FIPS 140 Level 3 validated HSMs](/azure/compliance/offerings/offering-fips-140-2). Each Managed HSM instance is bound to a separate security domain controlled by you and isolated cryptographically from instances belonging to other customers. With Managed HSMs, support is available for AES 128-bit and 256-bit symmetric keys.
- **Customer provided cryptography:** Can I use my own cryptography or encryption hardware? **Answer:** Yes, you can use your own HSMs deployed on-premises with your own crypto algorithms. However, if you expect to use customer-managed keys for services integrated with [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) (for example, Azure Storage, SQL Database, Disk encryption, and others), then you must use hardware security modules (HSMs) and [cryptography supported by Azure Key Vault](../key-vault/keys/about-keys.md).
-- **Access to customer data by Microsoft personnel:** How does Microsoft restrict access to my data by Microsoft engineers? **Answer:** Microsoft engineers [do not have default access](https://www.microsoft.com/trust-center/privacy/data-access) to your data in the cloud. Instead, they can be granted access, under management oversight, only when necessary using a [restricted access workflow](https://www.youtube.com/watch?v=lwjPGtGGe84&feature=youtu.be&t=25m). Most customer support requests can be resolved without accessing your data as Microsoft engineers rely heavily on logs for troubleshooting and support.
If a Microsoft engineer requires elevated access to your data as part of the support workflow, you can use [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure to control how a Microsoft engineer accesses your data. Customer Lockbox for Azure puts you in charge of that decision by enabling you to approve/deny such elevated access requests. For more information on how Microsoft restricts insider access to your data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
+- **Access to customer data by Microsoft personnel:** How does Microsoft restrict access to my data by Microsoft engineers? **Answer:** Microsoft engineers [don't have default access](https://www.microsoft.com/trust-center/privacy/data-access) to your data in the cloud. Instead, they can be granted access, under management oversight, only when necessary using a [restricted access workflow](https://www.youtube.com/watch?v=lwjPGtGGe84&feature=youtu.be&t=25m). Most customer support requests can be resolved without accessing your data as Microsoft engineers rely heavily on logs for troubleshooting and support. If a Microsoft engineer requires elevated access to your data as part of the support workflow, you can use [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure to control how a Microsoft engineer accesses your data. Customer Lockbox for Azure puts you in charge of that decision by enabling you to approve/deny such elevated access requests. For more information on how Microsoft restricts insider access to your data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
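The storage encryption controls described above can be verified or tightened with Azure CLI. The following sketch enforces secure transfer and switches encryption at rest to a customer-managed key; the account, vault URI, and key name are hypothetical placeholders, and the storage account's managed identity must first be granted access to the key in Key Vault.

```shell
# Sketch only: all resource names and the vault URI are placeholders.

# Enforce "secure transfer required" (HTTPS-only access).
az storage account update \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --https-only true

# Point encryption at rest to a customer-managed key in Azure Key Vault.
az storage account update \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://myvault.vault.azure.net \
  --encryption-key-name mykey
```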
### Operations

-- **Code review:** What can Microsoft do to prevent malicious code from being inserted into services that my organization uses? Can I review Microsoft code deployments? **Answer:** Microsoft has invested heavily in security assurance processes and practices to correctly develop logically isolated services and systems. For more information, see [Security assurance processes and practices](./azure-secure-isolation-guidance.md#security-assurance-processes-and-practices). For more information about Azure Hypervisor isolation, see [Defense-in-depth exploit mitigations](./azure-secure-isolation-guidance.md#defense-in-depth-exploits-mitigations). Microsoft has full control over all source code that comprises Azure services. For example, the procedure for patching guest VMs differs greatly from traditional on-premises patching where patch verification is necessary following installation. In Azure, patches are not applied to guest VMs; instead, the VM is simply restarted and when the VM boots, it is guaranteed to boot from a known good image that Microsoft controls. There is no way to insert malicious code into the image or interfere with the boot process. PaaS VMs offer more advanced protection against persistent malware infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that have not even been detected. This approach makes it more difficult for a compromise to persist. You cannot review Azure source code; however, online access to view source code is available for key products through the Microsoft [Government Security Program](https://www.microsoft.com/securityengineering/gsp) (GSP).
+- **Code review:** What can Microsoft do to prevent malicious code from being inserted into services that my organization uses? Can I review Microsoft code deployments? **Answer:** Microsoft has invested heavily in security assurance processes and practices to correctly develop logically isolated services and systems. For more information, see [Security assurance processes and practices](./azure-secure-isolation-guidance.md#security-assurance-processes-and-practices). For more information about Azure Hypervisor isolation, see [Defense-in-depth exploit mitigations](./azure-secure-isolation-guidance.md#defense-in-depth-exploits-mitigations). Microsoft has full control over all source code that comprises Azure services. For example, the procedure for patching guest VMs differs greatly from traditional on-premises patching where patch verification is necessary following installation. In Azure, patches aren't applied to guest VMs; instead, the VM is simply restarted and when the VM boots, it's guaranteed to boot from a known good image that Microsoft controls. There's no way to insert malicious code into the image or interfere with the boot process. PaaS VMs offer more advanced protection against persistent malware infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that haven't even been detected. This approach makes it more difficult for a compromise to persist. You can't review Azure source code; however, online access to view source code is available for key products through the Microsoft [Government Security Program](https://www.microsoft.com/securityengineering/gsp) (GSP).
- **DevOps personnel (cleared nationals):** What controls or clearance levels does Microsoft have for the personnel that have DevOps access to cloud environments or physical access to data centers? **Answer:** Microsoft conducts [background screening](./documentation-government-plan-security.md#screening) on operations personnel with access to production systems and physical data center infrastructure. Microsoft cloud background check includes verification of education and employment history upon hire, and extra checks conducted every two years thereafter (where permissible by law), including criminal history check, OFAC list, BIS denied persons list, and DDTC debarred parties list.
- **Data center site options:** Is Microsoft willing to deploy a data center to a specific physical location to meet more advanced security requirements? **Answer:** You should inquire with your Microsoft account team regarding options for data center locations.
-- **Service availability guarantee:** How can my organization ensure that Microsoft (or particular government or other entity) can't turn off our cloud services? **Answer:** You should review the Microsoft [Online Services Terms](https://www.microsoft.com/licensing/terms/productoffering) (OST) and the OST [Data Protection Addendum](https://aka.ms/DPA) (DPA) for contractual commitments Microsoft makes regarding service availability and use of online services.
+- **Service availability guarantee:** How can my organization ensure that Microsoft (or particular government or other entity) can't turn off our cloud services? **Answer:** You should review the Microsoft [Product Terms](https://www.microsoft.com/licensing/docs/view/Product-Terms) (formerly Online Services Terms) and the Microsoft Products and Services [Data Protection Addendum](https://aka.ms/dpa) (DPA) for contractual commitments Microsoft makes regarding service availability and use of online services.
- **Non-traditional cloud service needs:** What options does Microsoft provide for periodically internet free/disconnected environments? **Answer:** In addition to [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/), which is intended for on-premises deployment and disconnected scenarios, a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.

### Transparency and audit

-- **Audit documentation:** Does Microsoft make all audit documentation readily available to customers to download and examine? **Answer:** Yes, Microsoft makes independent third-party audit reports and other related documentation available for download under a non-disclosure agreement from the Azure portal. You will need an existing Azure subscription or [free trial subscription](https://azure.microsoft.com/free/) to access the Microsoft Defender for Cloud [audit reports blade](https://portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade). Additional compliance documentation is available from the Service Trust Portal (STP) [Audit Reports](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3) section. You must log in to access audit reports on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](/microsoft-365/compliance/get-started-with-service-trust-portal).
+- **Audit documentation:** Does Microsoft make all audit documentation readily available to customers to download and examine? **Answer:** Yes, Microsoft makes independent third-party audit reports and other related documentation available for download under a non-disclosure agreement from the Azure portal. You'll need an existing Azure subscription or [free trial subscription](https://azure.microsoft.com/free/) to access the Microsoft Defender for Cloud [audit reports blade](https://portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade). Additional compliance documentation is available from the Service Trust Portal (STP) [Audit Reports](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3) section. You must log in to access audit reports on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](/microsoft-365/compliance/get-started-with-service-trust-portal).
- **Process auditability:** Does Microsoft make its processes, data flow, and documentation available to customers or regulators for audit? **Answer:** Microsoft offers a Regulator Right to Examine, which is a program Microsoft implemented to provide regulators with direct right to examine Azure, including the ability to conduct an on-site examination, to meet with Microsoft personnel and Microsoft external auditors, and to access any related information, records, reports, and documents.
- **Service documentation:** Can Microsoft provide in-depth documentation covering service architecture, software and hardware components, and data protocols? **Answer:** Yes, Microsoft provides extensive and in-depth Azure online documentation covering all these topics. For example, you can review documentation on Azure [products](../index.yml), [global infrastructure](https://azure.microsoft.com/global-infrastructure/), and [API reference](/rest/api/azure/).
Learn more about:
- [Azure Security](../security/index.yml)
- [Azure Compliance](../compliance/index.yml)
+- [Azure compliance offerings](/azure/compliance/offerings/)
- [Azure guidance for secure isolation](./azure-secure-isolation-guidance.md)
- [Azure for government - worldwide government](https://azure.microsoft.com/industries/government/)
+- [Enabling data residency and data protection in Microsoft Azure regions](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/)
+- [Azure Policy regulatory compliance samples](../governance/policy/samples/index.md)
azure-maps Power Bi Visual Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-heat-map-layer.md
The following table shows the primary settings that are available in the **Heat
|-|-|
| Radius | The radius of each data point in the heat map.<br /><br />Valid values when Unit = 'pixels': 1 - 200. Default: **20**<br />Valid values when Unit = 'meters': 1 - 4,000,000|
| Units | The distance units of the radius. Possible values are:<br /><br />**pixels**. When set to pixels the size of each data point will always be the same, regardless of zoom level.<br />**meters**. When set to meters, the size of the data points will scale based on zoom level, ensuring the radius is spatially accurate.<br /><br /> Default: **pixels** |
-| Opacity | Sets the opacity of the heat map layer. Default: **1**<br/>Value should be a decimal between 0 and 1. |
+| Transparency | Sets the transparency of the heat map layer. Default: **1**<br/>Value should be from 0% to 100%. |
| Intensity | The intensity of each heat point. Intensity is a decimal value between 0 and 1, used to specify how "hot" a single data point should be. Default: **0.5** |
| Use size as weight | A boolean value that determines if the size field value should be used as the weight of each data point. If on, this causes the layer to render as a weighted heat map. Default: **Off** |
| Gradient | Color picker for users to pick three colors for low (0%), center (50%), and high (100%) gradient colors. |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Since the Dependency agent works at the kernel level, support is also dependent on the kernel version.
| | 7.7 | 3.10.0-1062 |
| CentOS Linux 6 | 6.10 | 2.6.32-754.3.5<br>2.6.32-696.30.1 |
| | 6.9 | 2.6.32-696.30.1<br>2.6.32-696.18.7 |
-| Ubuntu Server | 20.04 | 5.4\* |
+| Ubuntu Server | 20.04 | 5.8<br>5.4\* |
| | 18.04 | 5.3.0-1020<br>5.0 (includes Azure-tuned kernel)<br>4.18*<br>4.15* |
| | 16.04.3 | 4.15.\* |
| | 16.04 | 4.13.\*<br>4.11.\*<br>4.10.\*<br>4.8.\*<br>4.4.\* |
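A support matrix like the one above can be checked mechanically before installing the agent. The sketch below is ours, not part of any Microsoft tooling; the table data is transcribed (partially) from the rows above, and prefix matching is an assumption about how the `\*` wildcards are intended to be read:

```python
# Supported kernel-version prefixes per distro release, partially transcribed
# from the Dependency agent support table above. Extend as needed.
SUPPORTED_KERNELS = {
    ("Ubuntu Server", "20.04"): ["5.8", "5.4"],
    ("Ubuntu Server", "18.04"): ["5.3.0-1020", "5.0", "4.18", "4.15"],
    ("Ubuntu Server", "16.04.3"): ["4.15."],
}

def kernel_supported(distro: str, release: str, kernel: str) -> bool:
    """Return True if the running kernel matches a supported prefix."""
    prefixes = SUPPORTED_KERNELS.get((distro, release), [])
    return any(kernel.startswith(p) for p in prefixes)
```

On a live machine you would feed this the output of `uname -r`; an unsupported distro/release pair simply returns `False`.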
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
telemetry.flush();
The function is asynchronous for the [server telemetry channel](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel/).
-We recommend using the flush() or flushAsync() methods in the shutdown activity of the Application when using the .NET or JS SDK.
-
-For Example:
-
-*JS*
-
-```javascript
-// Immediately send all queued telemetry. By default, it is sent async.
-flush(async?: boolean = true)
-```
+We recommend using the flush() or flushAsync() methods in the shutdown activity of the application when using the .NET SDK.
## Authenticated users
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
Like other telemetry, **performanceCounters** also has a column `cloud_RoleInstance`.
* *Exceptions* is a count of the TrackException reports received by the portal in the sampling interval of the chart. It includes only the handled exceptions where you have written TrackException calls in your code, and doesn't include all [unhandled exceptions](./asp-net-exceptions.md).
-## Performance counters for applications running in Azure Web Apps
+## Performance counters for applications running in Azure Web Apps and Windows Containers on Azure App Service
-Both ASP.NET and ASP.NET Core applications deployed to Azure Web Apps run in a special sandbox environment. This environment does not allow direct access to system performance counters. However, a limited subset of counters are exposed as environment variables as described [here](https://github.com/projectkudu/kudu/wiki/Perf-Counters-exposed-as-environment-variables). Application Insights SDK for ASP.NET and ASP.NET Core collects performance counters from Azure Web Apps from these special environment variables. Only a subset of counters are available in this environment, and the full list can be found [here.](https://github.com/microsoft/ApplicationInsights-dotnet-server/blob/develop/WEB/Src/PerformanceCollector/Perf.Shared/Implementation/WebAppPerformanceCollector/CounterFactory.cs)
+Both ASP.NET and ASP.NET Core applications deployed to Azure Web Apps run in a special sandbox environment. Applications deployed to Azure App Service can use a [Windows container](/azure/app-service/quickstart-custom-container?tabs=dotnet&pivots=container-windows) or be hosted in the sandbox environment. If the application is deployed in a Windows container, all standard performance counters are available in the container image.
+
+The sandbox environment does not allow direct access to system performance counters. However, a limited subset of counters is exposed as environment variables as described [here](https://github.com/projectkudu/kudu/wiki/Perf-Counters-exposed-as-environment-variables). Only this subset of counters is available in the sandbox environment, and the full list can be found [here](https://github.com/microsoft/ApplicationInsights-dotnet/blob/main/WEB/Src/PerformanceCollector/PerformanceCollector/Implementation/WebAppPerformanceCollector/CounterFactory.cs).
+
+The Application Insights SDKs for [ASP.NET](https://nuget.org/packages/Microsoft.ApplicationInsights.Web) and [ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) use environment variables to detect whether code is deployed to a Web App in a non-Windows container. This determines whether they collect performance counters from environment variables when in a sandbox environment, or use the standard collection mechanism when hosted in a Windows container or on a virtual machine. Sandbox environments include Azure Web Apps and Azure App Service apps not running in a Windows container.
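As a sketch of what "counters exposed as environment variables" means in practice: the Kudu wiki linked above describes counter values surfaced to sandboxed apps as JSON blobs in environment variables such as `WEBSITE_COUNTERS_APP`. The variable name and shape here follow that wiki but should be treated as assumptions for your environment; the parser itself is ours, not part of the SDK:

```python
import json
import os

def read_sandbox_counters(env=None):
    """Parse the JSON counter blob Kudu exposes to sandboxed App Service apps.

    Returns an empty dict when the variable is absent (for example, when the
    app runs in a Windows container or on a VM, where standard performance
    counters are collected instead).
    """
    env = os.environ if env is None else env
    raw = env.get("WEBSITE_COUNTERS_APP")
    return json.loads(raw) if raw else {}
```

Calling this on a VM or in a Windows container simply yields `{}`, mirroring the SDK behavior described above of falling back to the standard collection mechanism.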
## Performance counters in ASP.NET Core applications
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
If you would rather integrate with an existing workspace, perform the following
In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **id**.
-4. Run the following command to enable the monitoring add-on, replacing the value for the `--workspace-resource-id` parameter. The string value must be within the double quotes:
+4. Switch to the subscription hosting the cluster using the following command:
+
+ ```azurecli
+ az account set -s <subscriptionId of the cluster>
+ ```
+
+5. Run the following command to enable the monitoring add-on, replacing the value for the `--workspace-resource-id` parameter. The string value must be within the double quotes:
   ```azurecli
   az aks enable-addons -a monitoring -n ExistingManagedCluster -g ExistingManagedClusterRG --workspace-resource-id "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"
   ```
After a few minutes, the command completes and returns JSON-formatted information.
* If you experience issues while attempting to onboard the solution, review the [troubleshooting guide](container-insights-troubleshoot.md)
-* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
+* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
The AMPLS object has the following limits:
In the below diagram:

* Each VNet connects to only **one** AMPLS object.
-* AMPLS A connects to two workspaces and one Application Insight component, using 2 of the possible 300 Log Analytics workspaces and 1 of the possible 1 Application Insights components it can connect to.
+* AMPLS A connects to two workspaces and one Application Insight component, using 2 of the possible 300 Log Analytics workspaces and 1 of the possible 1000 Application Insights components it can connect to.
* Workspace2 connects to AMPLS A and AMPLS B, using two of the five possible AMPLS connections.
* AMPLS B is connected to Private Endpoints of two VNets (VNet2 and VNet3), using two of the 10 possible Private Endpoint connections.
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md
the same name with a prefix of `all-volumes` and a maximum of five snapshots wit
The processing is handled in the order outlined as follows:
-1. `data` Volume Snapshot (same as the normal `--volume data` option)
+1. **data** Volume Snapshot (same as the normal `--volume data` option)
1. put the database into *backup-mode*.
- 1. take snapshots of the Volume listed in the configuration file's `"dataVolume"` stanza.
+ 1. take snapshots of the Volume(s) listed in the configuration file's `"dataVolume"` stanza.
   1. take the database out of *backup-mode*.
   1. perform snapshot management.
-1. `other` Volume Snapshot (same as the normal `--volume other` option)
- 1. take snapshots of the Volumes listed in the configuration file's `"otherVolume"` stanza.
+1. **other** Volume Snapshot (same as the normal `--volume other` option)
+ 1. take snapshots of the Volume(s) listed in the configuration file's `"otherVolume"` stanza.
1. perform snapshot management.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose subnet.
Azure NetApp Files standard network features are supported for the following regions:
+* Australia Central
* France Central
* North Central US
* South Central US
azure-percept How To Troubleshoot Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-troubleshoot-setup.md
Refer to the table below for workarounds to common issues found during the [Azur
|Issue|Reason|Workaround| |:--|:|:-|
-|The Azure Percept DK Wi-Fi access point passphrase/password doesn't work| We have heard reports that some welcome cards may have incorrect passphrase/password printed.|In order to retrieve the Wi-Fi SoftAP password of your Percept Devkit, you must connect and use an Ethernet cable. Once the cable is attached and the device powered on, you'll need to find the IP address that was assigned to your devkit. In "Home" situations you may be able to log in to your home router to get this info. Look for an ASUS device named "apdk-xxxxxxx". The article [Connect to Azure Percept DK over Ethernet](./how-to-connect-over-ethernet.md) can guide you if you're not able to get the IP from the router. Once you have the Ethernet's IP, start a web browser and manually copy and paste this address: IE: http://192.168.0.222 to go to the Onboarding experience. <ul><li>Don't go through the full setup just yet.</li><li>Setup Wi-Fi and create your SSH User and pause there (you can leave that window open and complete setup after we get the SoftAP password).</li><li>Open Putty or an SSH client and connect to the devkit using the user/pw you just created.</li><li>**Run: sudo tpm2_handle2psk 0x81000009.** The output from this command will be your password for the SoftAP. – Please write it down on the card –</li></ul>
+|SSH username or password is lost or unknown. | You have lost or can't remember your SSH username or password. | You can create a new SSH user by relaunching the Setup Experience as outlined in this [launch the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md#launch-the-azure-percept-dk-setup-experience) step. Skip through the steps you've previously completed until you get to the SSH User creation page and create a new SSH login.|
+|The Azure Percept DK Wi-Fi access point passphrase/password doesn't work.| We have heard reports that some welcome cards may have incorrect passphrase/password printed.|In order to retrieve the Wi-Fi SoftAP password of your Percept Devkit, you must connect and use an Ethernet cable. Once the cable is attached and the device powered on, you'll need to find the IP address that was assigned to your devkit. In "Home" situations you may be able to log in to your home router to get this info. Look for an ASUS device named "apdk-xxxxxxx". The article [Connect to Azure Percept DK over Ethernet](./how-to-connect-over-ethernet.md) can guide you if you're not able to get the IP from the router. Once you have the Ethernet's IP, start a web browser and manually copy and paste this address: IE: http://192.168.0.222 to go to the Onboarding experience. <ul><li>Don't go through the full setup just yet.</li><li>Setup Wi-Fi and create your SSH User and pause there (you can leave that window open and complete setup after we get the SoftAP password).</li><li>Open Putty or an SSH client and connect to the devkit using the user/pw you just created.</li><li>**Run: sudo tpm2_handle2psk 0x81000009.** The output from this command will be your password for the SoftAP. – Please write it down on the card –</li></ul>
|When connecting to the Azure account sign-up pages or to the Azure portal, you may automatically sign in with a cached account. If you don't sign in with the correct account, it may result in an experience that is inconsistent with the documentation.|The result of a browser setting to "remember" an account you have previously used.|From the Azure page, select your account name in the upper right corner and select **sign out**. You can then sign in with the correct account.|
|The Azure Percept DK Wi-Fi access point (apd-xxxx) doesn't appear in the list of available Wi-Fi networks.|It's usually a temporary issue that resolves within 15 minutes.|Wait for the network to appear. If it doesn't appear after more than 15 minutes, reboot the device.|
|The connection to the Azure Percept DK Wi-Fi access point frequently disconnects.|It's usually because of a poor connection between the device and the host computer. It can also be caused by interference from other Wi-Fi connections on the host computer.|Make sure that the antennas are properly attached to the dev kit. If the dev kit is far away from the host computer, try moving it closer. Turn off any other internet connections such as LTE/5G if they're running on the host computer.|
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 02/28/2022 Last updated : 03/08/2022

# Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | | | | |
> | blockchainMembers | global | 2-20 | Lowercase letters and numbers.<br><br>Start with lowercase letter. |
+## Microsoft.Blueprint
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | blueprint| Management groups, Subscriptions, Resource groups | 90 | Alphanumerics, underscores, and hyphens. |
+> | blueprintAssignments | Management groups, Subscriptions, Resource groups | 90 | Alphanumerics, underscores, and hyphens. |
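The Blueprint rows above (alphanumerics, underscores, and hyphens, up to 90 characters) translate directly into a validation pattern. This is a sketch based on our reading of the table, not an official Azure validator, and the function name is ours:

```python
import re

# Alphanumerics, underscores, and hyphens; 1-90 characters,
# per the Microsoft.Blueprint rows above.
_BLUEPRINT_NAME = re.compile(r"^[A-Za-z0-9_-]{1,90}$")

def is_valid_blueprint_name(name: str) -> bool:
    """Check a candidate blueprint name against the documented rule."""
    return _BLUEPRINT_NAME.fullmatch(name) is not None
```

A pre-deployment script could run this check before attempting a blueprint creation call, failing fast instead of waiting for a service-side error.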
+
## Microsoft.BotService

> [!div class="mx-tableFixed"]
In the following tables, the term alphanumeric refers to:
> | autoProvisioningSettings | subscription | 1