Updates from: 11/08/2022 02:12:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 02/17/2022 Last updated : 11/07/2022
In a self-asserted technical profile, you can use the **InputClaims** and **Inpu
## Display claims
-The display claims feature is currently in **preview**.
- The **DisplayClaims** element contains a list of claims to be presented on the screen for collecting data from the user. To prepopulate the values of display claims, use the input claims that were previously described. The element may also contain a default value. The order of the claims in **DisplayClaims** specifies the order in which Azure AD B2C renders the claims on the screen. To force the user to provide a value for a specific claim, set the **Required** attribute of the **DisplayClaim** element to `true`.
Use output claims when:
- **Claims are output by output claims transformation**.
- **Setting a default value in an output claim** without collecting data from the user or returning the data from the validation technical profile. The `LocalAccountSignUpWithLogonEmail` self-asserted technical profile sets the **executed-SelfAsserted-Input** claim to `true`.
- **A validation technical profile returns the output claims** - Your technical profile may call a validation technical profile that returns some claims. You may want to bubble up the claims and return them to the next orchestration steps in the user journey. For example, when signing in with a local account, the self-asserted technical profile named `SelfAsserted-LocalAccountSignin-Email` calls the validation technical profile named `login-NonInteractive`. This technical profile validates the user credentials and also returns the user profile, such as `userPrincipalName`, `displayName`, `givenName`, and `surName`.
-- **A display control returns the output claims** - Your technical profile may have a reference to a [display control](display-controls.md). The display control returns some claims, such as the verified email address. You may want to bubble up the claims and return them to the next orchestration steps in the user journey. The display control feature is currently in **preview**.
+- **A display control returns the output claims** - Your technical profile may have a reference to a [display control](display-controls.md). The display control returns some claims, such as the verified email address. You may want to bubble up the claims and return them to the next orchestration steps in the user journey.
The following example demonstrates the use of a self-asserted technical profile that uses both display claims and output claims.
active-directory Active Directory Saml Protocol Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-protocol-reference.md
Previously updated : 10/27/2021 Last updated : 11/4/2022 -+
The SAML protocol requires the identity provider (Microsoft identity platform) a
When an application is registered with Azure AD, the app developer registers federation-related information with Azure AD. This information includes the **Redirect URI** and **Metadata URI** of the application.
-The Microsoft identity platform uses the cloud service's **Metadata URI** to retrieve the signing key and the logout URI. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, you can open the app in **Azure Active Directory -> App registrations**, and then in **Manage -> Authentication**, you can update the Logout URL. This way the Microsoft identity platform can send the response to the correct URL.
+The Microsoft identity platform uses the cloud service's **Metadata URI** to retrieve the signing key and the logout URI. This way the Microsoft identity platform can send the response to the correct URL. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>:
-Azure AD exposes tenant-specific and common (tenant-independent) SSO and single sign-out endpoints. These URLs represent addressable locations--they're not just identifiers--so you can go to the endpoint to read the metadata.
+- Open the app in **Azure Active Directory** and select **App registrations**
+- Under **Manage**, select **Authentication**. From there you can update the Logout URL.
-- The tenant-specific endpoint is located at `https://login.microsoftonline.com/<TenantDomainName>/FederationMetadata/2007-06/FederationMetadata.xml`. The _\<TenantDomainName>_ placeholder represents a registered domain name or TenantID GUID of an Azure AD tenant. For example, the federation metadata of the contoso.com tenant is at: https://login.microsoftonline.com/contoso.com/FederationMetadata/2007-06/FederationMetadata.xml
+Azure AD exposes tenant-specific and common (tenant-independent) SSO and single sign-out endpoints. These URLs represent addressable locations, and aren't only identifiers. You can then go to the endpoint to read the metadata.
+
+- The tenant-specific endpoint is located at `https://login.microsoftonline.com/<TenantDomainName>/FederationMetadata/2007-06/FederationMetadata.xml`. The *\<TenantDomainName>* placeholder represents a registered domain name or TenantID GUID of an Azure AD tenant. For example, the federation metadata of the `contoso.com` tenant is at: https://login.microsoftonline.com/contoso.com/FederationMetadata/2007-06/FederationMetadata.xml
- The tenant-independent endpoint is located at
- `https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml`. In this endpoint address, **common** appears instead of a tenant domain name or ID.
+ `https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml`. In this endpoint address, *common* appears instead of a tenant domain name or ID.
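
As an illustration, either endpoint can be queried with a plain, unauthenticated GET request (`contoso.com` below stands in for your own tenant domain name or ID):

```http
GET https://login.microsoftonline.com/contoso.com/FederationMetadata/2007-06/FederationMetadata.xml

GET https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml
```

The response is a federation metadata XML document containing, among other things, the token-signing certificates and the SAML endpoints for the tenant.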
## Next steps
active-directory Howto Configure App Instance Property Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-app-instance-property-locks.md
+
+ Title: "How to configure app instance property lock in your applications"
+description: How to increase app security by configuring property modification locks for sensitive properties of the application.
++++++ Last updated : 11/03/2022+++
+# Customer intent: As an application developer, I want to learn how to protect properties of my application instance from being modified.
+
+# How to configure app instance property lock for your applications (Preview)
+
+Application instance lock is a feature in Azure Active Directory (Azure AD) that allows sensitive properties of a multi-tenant application object to be locked, so they can't be modified after the application is provisioned in another tenant.
+This feature provides application developers with the ability to lock certain properties if the application doesn't support scenarios that require configuring those properties.
++
+## What are sensitive properties?
+
+The following property usage scenarios are considered sensitive:
+
+- Credentials (`keyCredentials`, `passwordCredentials`) where usage type is `Sign`. This is a scenario where your application supports a SAML flow.
+- Credentials (`keyCredentials`, `passwordCredentials`) where usage type is `Verify`. In this scenario, your application supports an OIDC client credentials flow.
+- `TokenEncryptionKeyId` which specifies the keyId of a public key from the keyCredentials collection. When configured, Azure AD encrypts all the tokens it emits by using the key to which this property points. The application code that receives the encrypted token must use the matching private key to decrypt the token before it can be used for the signed-in user.
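
To make the last scenario concrete, token encryption is typically turned on by pointing `tokenEncryptionKeyId` at the `keyId` of a key already present in the application's `keyCredentials` collection. A minimal Microsoft Graph sketch (the GUID and the object ID placeholder are illustrative only):

```http
PATCH https://graph.microsoft.com/v1.0/applications/{application-object-id}

{
    "tokenEncryptionKeyId": "aaaaaaaa-0b0b-1c1c-2d2d-333333333333"
}
```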
+
+## Configure an app instance lock
+
+To configure an app instance lock using the Azure portal:
+
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant that contains the app registration you want to configure.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations**, and then select the application you want to configure.
+1. Select **Authentication**, and then select **Configure** under the *App instance property lock* section.
+
+ :::image type="content" source="media/howto-configure-app-instance-property-locks/app-instance-lock-configure-overview.png" alt-text="Screenshot of an app registration's app instance lock in the Azure portal.":::
+
+1. In the **App instance property lock** pane, enter the settings for the lock. The table following the image describes each setting and its parameters.
+
+ :::image type="content" source="media/howto-configure-app-instance-property-locks/app-instance-lock-configure-properties.png" alt-text="Screenshot of an app registration's app instance property lock context pane in the Azure portal.":::
+
+ | Field | Description |
+ | - | -- |
+ | **Enable property lock** | Specifies if the property locks are enabled. |
+ | **All properties** | Locks all sensitive properties without needing to select each property scenario. |
+ | **Credentials used for verification** | Locks the ability to add or update credential properties (`keyCredentials`, `passwordCredentials`) where usage type is `verify`. |
+ | **Credentials used for signing tokens** | Locks the ability to add or update credential properties (`keyCredentials`, `passwordCredentials`) where usage type is `sign`. |
+ | **Token Encryption KeyId** | Locks the ability to change the `tokenEncryptionKeyId` property. |
+
+1. Select **Save** to save your changes.
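
The same lock can also be set programmatically. As a hedged sketch (the feature is in preview at the time of writing, so the exact Microsoft Graph shape may change), the portal settings map to a lock configuration object on the application, along these lines:

```http
PATCH https://graph.microsoft.com/beta/applications/{application-object-id}

{
    "servicePrincipalLockConfiguration": {
        "isEnabled": true,
        "allProperties": true
    }
}
```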
active-directory Tutorial V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md
Create a folder to host your application, for example *ElectronDesktopApp*.
```console
npm init -y
- npm install --save @azure/msal-node @microsoft/microsoft-graph-sdk isomorphic-fetch bootstrap jquery popper.js
+ npm install --save @azure/msal-node @microsoft/microsoft-graph-client isomorphic-fetch bootstrap jquery popper.js
npm install --save-dev electron@20.0.0
```
active-directory User Flow Add Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-add-custom-attributes.md
Previously updated : 03/02/2021 Last updated : 11/07/2022 -+ +
+# Customer intent: As a tenant administrator, I want to create custom attributes for the self-service sign-up user flows.
# Define custom attributes for user flows
Once you've created a new user using a user flow that uses the newly created cus
## Next steps
-[Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md)
+- [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md)
+- [Customize the user flow language](user-flow-customize-language.md)
active-directory Active Directory Users Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-restore.md
Previously updated : 08/17/2022 Last updated : 11/07/2022
# Restore or remove a recently deleted user using Azure Active Directory
-After you delete a user, the account remains in a suspended state for 30 days. During that 30-day window, the user account can be restored, along with all its properties. After that 30-day window passes, the permanent deletion process is automatically started.
+After you delete a user, the account remains in a suspended state for 30 days. During that 30-day window, the user account can be restored, along with all its properties. After that 30-day window passes, the permanent deletion process starts automatically and can't be stopped. During this time, management of soft-deleted users is blocked. This limitation also applies to restoring a soft-deleted user through a match during the tenant sync cycle in on-premises hybrid scenarios.
You can view your restorable users, restore a deleted user, or permanently delete a user using Azure Active Directory (Azure AD) in the Azure portal.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Title: What's new? Release notes - Azure Active Directory | Microsoft Docs description: Learn what is new with Azure Active Directory; such as the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes.--++ featureFlags: - clicktale ms.assetid: 06a149f7-4aa1-4fb9-a8ec-ac2633b031fb
Previously updated : 1/31/2022- Last updated : 11/7/2022+
This page is updated monthly, so revisit it regularly. If you're looking for ite
**Service category:** Provisioning **Product capability:** AAD Connect Cloud Sync
-Microsoft will stop support for Azure AD provisioning agent with versions 1.1.818.0 and below starting Feb 1,2023. If you are using Azure AD cloud sync, please make sure you have the latest version of the agent. You can info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller)
+Microsoft will stop support for the Azure AD provisioning agent with versions 1.1.818.0 and below starting Feb 1, 2023. If you're using Azure AD cloud sync, make sure you have the latest version of the agent. You can find information about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller)
-You can find out which version of the agent you are using as follows:
+You can find out which version of the agent you're using as follows:
-1. Going to the domain server which you have the agent installed
+1. Go to the domain server where the agent is installed
1. Right-click on the Microsoft Azure AD Connect Provisioning Agent app
-1. Click on "Details" tab and you can find the version number there
+1. Select the **Details** tab to find the version number
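
If you prefer to check the version from a script, a minimal PowerShell sketch (assuming the agent shows up under the standard uninstall registry key on that server) is:

```powershell
# List installed programs whose display name mentions the provisioning agent
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -like '*Provisioning Agent*' } |
    Select-Object DisplayName, DisplayVersion
```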
> [!NOTE] > Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
For more information, see: [What are Lifecycle Workflows? (Public Preview)](../g
**Service category:** Access Reviews **Product capability:** Identity Governance
-This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation detects user affiliation with other users within the group, and leverages the scoring mechanism we built by computing the user's average distance with other users in the group. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md).
+This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation detects user affiliation with other users within the group, and uses the scoring mechanism we built by computing the user's average distance from other users in the group. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md).
When configuring writeback of attributes from Azure AD to SAP SuccessFactors Emp
To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an MFA notification in the Microsoft Authenticator app. We've also refreshed the Azure portal admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update we have also added the highly requested ability for admins to exclude user groups from each feature.
-The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature leveraging the rollout controls we have built. Number Matching will begin to be enabled for all users of the Microsoft Authenticator app starting 27th of February 2023.
+The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature using the rollout controls we've built. Number matching will begin to be enabled for all users of the Microsoft Authenticator app starting February 27, 2023.
For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md).
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
If you are using the Azure portal to create a workflow, you can customize existi
1. On the **configure scope** page select the **Trigger type** and execution conditions to be used for this workflow. For more information on what can be configured, see: [Configure scope](understanding-lifecycle-workflows.md#configure-scope).
-1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json)
+1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
:::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options.":::
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
Use the following steps to create a pre-hire workflow that will generate a TAP a
:::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
:::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png":::
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
Use the following steps to create a scheduled leaver workflow that will configur
7. Next, you will configure the basic information about the workflow. This information includes when the workflow will trigger, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**. :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-basics.png" alt-text="Screenshot of leaver template basics information for a workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-basics.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**.
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
:::image type="content" source="media/tutorial-lifecycle-workflows/leaver-scope.png" alt-text="Screenshot of reviewing scope details for a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-scope.png":::

9. On the following page, you may inspect the tasks if desired, but no additional configuration is needed. Select **Next: Select users** when you're finished.
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
You can add extra expressions using **And/Or** to create complex conditionals, a
[![Extra expressions.](media/understanding-lifecycle-workflows/workflow-8.png)](media/understanding-lifecycle-workflows/workflow-8.png#lightbox)

> [!NOTE]
-> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json)
+> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
For more information, see [Create a lifecycle workflow.](create-lifecycle-workflow.md)
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Title: Grant tenant-wide admin consent to an application
-description: Learn how to grant tenant-wide consent to an application so that end-users are not prompted for consent when signing in to an application.
+description: Learn how to grant tenant-wide consent to an application so that end-users aren't prompted for consent when signing in to an application.
Previously updated : 09/02/2022 Last updated : 11/07/2022
+zone_pivot_groups: enterprise-apps-minus-aad-powershell
#customer intent: As an admin, I want to grant tenant-wide admin consent to an application in Azure AD.
In this article, you'll learn how to grant tenant-wide admin consent to an application in Azure Active Directory (Azure AD). To understand how individual users consent, see [Configure how end-users consent to applications](configure-user-consent.md).
-When you grant tenant-wide admin consent to an application, you give the application access on behalf of the whole organization to the permissions requested. Granting admin consent on behalf of an organization is a sensitive operation, potentially allowing the application's publisher access to significant portions of your organization's data, or the permission to do highly privileged operations. Examples of such operations might be role management, full access to all mailboxes or all sites, and full user impersonation.
+When you grant tenant-wide admin consent to an application, you give the application access on behalf of the whole organization to the permissions requested. Granting admin consent on behalf of an organization is a sensitive operation, potentially allowing the application's publisher access to significant portions of your organization's data, or the permission to do highly privileged operations. Examples of such operations might be role management, full access to all mailboxes or all sites, and full user impersonation. Carefully review the permissions that the application is requesting before you grant consent.
By default, granting tenant-wide admin consent to an application will allow all users to access the application unless otherwise restricted. To restrict which users can sign-in to an application, configure the app to [require user assignment](application-properties.md#assignment-required) and then [assign users or groups to the application](assign-user-or-group-access-portal.md).
-Tenant-wide admin consent to an app grants the app and the app's publisher access to your organization's data. Carefully review the permissions that the application is requesting before you grant consent. For more information on consenting to applications, see [Azure Active Directory consent framework](../develop/consent-framework.md).
-
-Granting tenant-wide admin consent may revoke any permissions which had previously been granted tenant-wide for that application. Permissions which have previously been granted by users on their own behalf will not be affected.
+Granting tenant-wide admin consent may revoke any permissions that had previously been granted tenant-wide for that application. Permissions that have previously been granted by users on their own behalf won't be affected.
## Prerequisites
To grant tenant-wide admin consent, you need:
You can grant tenant-wide admin consent through *Enterprise applications* if the application has already been provisioned in your tenant. For example, an app could be provisioned in your tenant if at least one user has already consented to the application. For more information, see [How and why applications are added to Azure Active Directory](../develop/active-directory-how-applications-are-added.md).
+
To grant tenant-wide admin consent to an app listed in **Enterprise applications**:

1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites section.
where:
As always, carefully review the permissions an application requests before granting consent.
++++
+In the following example, you'll grant delegated permissions defined by a resource enterprise application to a client enterprise application on behalf of all users.
+
+In the example, the resource enterprise application is Microsoft Graph, with object ID `7ea9e944-71ce-443d-811c-71e8047b557a`. Microsoft Graph defines the delegated permissions `User.Read.All` and `Group.Read.All`. The consentType is `AllPrincipals`, indicating that you're consenting on behalf of all users in the tenant. The object ID of the client enterprise application is `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`.
+
+> [!CAUTION]
+> Be careful! Permissions granted programmatically are not subject to review or confirmation. They take effect immediately.
+
+## Grant admin consent for delegated permissions
+
+1. Connect to Microsoft Graph PowerShell:
+
+ ```powershell
+ Connect-MgGraph -Scopes "Application.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All"
+ ```
+
+1. Retrieve all the delegated permissions defined by Microsoft Graph (the resource application) in your tenant. Identify the delegated permissions that you'll grant the client application. In this example, the delegated permissions are `User.Read.All` and `Group.Read.All`.
+
+ ```powershell
+ Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'" -Property Oauth2PermissionScopes | Select -ExpandProperty Oauth2PermissionScopes | fl
+ ```
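
To narrow that listing down to just the two permissions used in this example, you can filter the collection client-side (a sketch using the same cmdlet as above):

```powershell
# Keep only the two delegated permissions this example grants
Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'" -Property Oauth2PermissionScopes |
    Select-Object -ExpandProperty Oauth2PermissionScopes |
    Where-Object { $_.Value -in 'User.Read.All', 'Group.Read.All' } |
    Select-Object Id, Value, AdminConsentDisplayName
```

The `Id` values shown are the permission IDs; the grant itself is expressed with the space-separated `Scope` string, as shown in the next step.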
+
+1. Grant the delegated permissions to the client enterprise application by running the following request.
+
+```powershell
+$params = @{
+
+ "ClientId" = "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94"
+ "ConsentType" = "AllPrincipals"
+ "ResourceId" = "7ea9e944-71ce-443d-811c-71e8047b557a"
+ "Scope" = "User.Read.All Group.Read.All"
+}
+
+New-MgOauth2PermissionGrant -BodyParameter $params |
+ Format-List Id, ClientId, ConsentType, ResourceId, Scope
+```
+
+1. Confirm that you've granted tenant-wide admin consent by running the following request.
+
+ ```powershell
    Get-MgOauth2PermissionGrant -Filter "clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' and consentType eq 'AllPrincipals'"
+ ```
+## Grant admin consent for application permissions
+
+In the following example, you grant the client enterprise application (the principal, of object ID `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`) an app role (application permission) of ID `df021288-bdef-4463-88db-98f22de89214` that's exposed by the resource enterprise application, Microsoft Graph, of object ID `7ea9e944-71ce-443d-811c-71e8047b557a`.
+
+1. Connect to Microsoft Graph PowerShell:
+
+ ```powershell
+ Connect-MgGraph -Scopes "Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"
+ ```
+
+1. Retrieve the app roles defined by Microsoft Graph in your tenant. Identify the app role that you'll grant the client enterprise application. In this example, the app role ID is `df021288-bdef-4463-88db-98f22de89214`.
+
+ ```powershell
    Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'" -Property AppRoles | Select -ExpandProperty AppRoles | fl
+ ```
+
+1. Grant the application permission (app role) to the client enterprise application by running the following request.
+
+```powershell
+$params = @{
+    "PrincipalId" = "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94"
+    "ResourceId" = "7ea9e944-71ce-443d-811c-71e8047b557a"
+    "AppRoleId" = "df021288-bdef-4463-88db-98f22de89214"
+}
+
+New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' -BodyParameter $params |
+    Format-List Id, AppRoleId, CreatedDateTime, PrincipalDisplayName, PrincipalId, PrincipalType, ResourceDisplayName
+```
+++
+Use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to grant both delegated and application permissions.
+
+## Grant admin consent for delegated permissions
+
+In the following example, you'll grant delegated permissions defined by a resource enterprise application to a client enterprise application on behalf of all users.
+
+In the example, the resource enterprise application is Microsoft Graph, with object ID `7ea9e944-71ce-443d-811c-71e8047b557a`. Microsoft Graph defines the delegated permissions `User.Read.All` and `Group.Read.All`. The consentType is `AllPrincipals`, indicating that you're consenting on behalf of all users in the tenant. The object ID of the client enterprise application is `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`.
+
+> [!CAUTION]
+> Be careful! Permissions granted programmatically are not subject to review or confirmation. They take effect immediately.
+
+1. Retrieve all the delegated permissions defined by Microsoft Graph (the resource application) in your tenant. Identify the delegated permissions that you'll grant the client application. In this example, the delegated permissions are `User.Read.All` and `Group.Read.All`.
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq 'Microsoft Graph'&$select=id,displayName,appId,oauth2PermissionScopes
+ ```
+
+1. Grant the delegated permissions to the client enterprise application by running the following request.
+
+ ```http
+ POST https://graph.microsoft.com/v1.0/oauth2PermissionGrants
+
+ Request body
+ {
+ "clientId": "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94",
+ "consentType": "AllPrincipals",
+ "resourceId": "7ea9e944-71ce-443d-811c-71e8047b557a",
+ "scope": "User.Read.All Group.Read.All"
+ }
+ ```
+1. Confirm that you've granted tenant-wide admin consent by running the following request.
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/oauth2PermissionGrants?$filter=clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' and consentType eq 'AllPrincipals'
+ ```
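As an illustrative aside (not part of the official steps), the tenant-wide grant above can also be composed in a short script. The Python sketch below only builds the request body and target URL from the values in the text; the function name is mine, and actually sending the request would additionally require an `Authorization: Bearer <token>` header obtained with the appropriate permission:

```python
import json

GRAPH_URL = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"

def build_grant(client_id, resource_id, scopes):
    """Build an oauth2PermissionGrant body for tenant-wide (AllPrincipals) consent."""
    return {
        "clientId": client_id,
        "consentType": "AllPrincipals",   # consent on behalf of all users in the tenant
        "resourceId": resource_id,
        "scope": " ".join(scopes),        # space-separated, as Microsoft Graph expects
    }

body = build_grant(
    "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94",   # client enterprise application (object ID)
    "7ea9e944-71ce-443d-811c-71e8047b557a",   # resource enterprise application (object ID)
    ["User.Read.All", "Group.Read.All"],
)
payload = json.dumps(body)
# To send: POST `payload` to GRAPH_URL with an Authorization: Bearer <token> header.
```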
+
+## Grant admin consent for application permissions
+
+In the following example, you grant the client enterprise application (the principal, of object ID `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`) an app role (application permission) of ID `df021288-bdef-4463-88db-98f22de89214` that's exposed by the Microsoft Graph resource enterprise application of object ID `7ea9e944-71ce-443d-811c-71e8047b557a`.
+
+1. Retrieve the app roles defined by Microsoft Graph in your tenant. Identify the app role that you'll grant the client enterprise application. In this example, the app role ID is `df021288-bdef-4463-88db-98f22de89214`.
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq 'Microsoft Graph'&$select=id,displayName,appId,appRoles
+ ```
+
+1. Grant the application permission (app role) to the client enterprise application by running the following request.
+
+ ```http
+ POST https://graph.microsoft.com/v1.0/servicePrincipals/7ea9e944-71ce-443d-811c-71e8047b557a/appRoleAssignedTo
+
+ Request body
+
+ {
+ "principalId": "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94",
+ "resourceId": "7ea9e944-71ce-443d-811c-71e8047b557a",
+ "appRoleId": "df021288-bdef-4463-88db-98f22de89214"
+ }
+ ```
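For illustration only, the app role grant above can be composed the same way. The Python sketch below (helper name and structure are mine, not from the article) builds the `appRoleAssignedTo` body and URL shown in the request; sending it still requires a bearer token:

```python
import json

def build_app_role_assignment(principal_id, resource_id, app_role_id):
    """Body for POST /servicePrincipals/{resource-id}/appRoleAssignedTo."""
    return {
        "principalId": principal_id,  # client enterprise application (object ID)
        "resourceId": resource_id,    # resource enterprise application (object ID)
        "appRoleId": app_role_id,     # app role (application permission) being granted
    }

body = build_app_role_assignment(
    "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94",
    "7ea9e944-71ce-443d-811c-71e8047b557a",
    "df021288-bdef-4463-88db-98f22de89214",
)
url = f"https://graph.microsoft.com/v1.0/servicePrincipals/{body['resourceId']}/appRoleAssignedTo"
payload = json.dumps(body)
# To send: POST `payload` to `url` with an Authorization: Bearer <token> header.
```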
## Next steps

[Configure how end-users consent to applications](configure-user-consent.md)
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Previously updated : 10/23/2021 Last updated : 11/07/2022 zone_pivot_groups: enterprise-apps-minus-graph
-# Review permissions granted to applications
+# Review permissions granted to enterprise applications
In this article, you'll learn how to review permissions granted to applications in your Azure Active Directory (Azure AD) tenant. You may need to review permissions when you've detected a malicious application, or when an application has been granted more permissions than it needs.
-The steps in this article apply to all applications that were added to your Azure Active Directory (Azure AD) tenant via user or admin consent. For more information on consenting to applications, see [Azure Active Directory consent framework](../develop/consent-framework.md).
+The steps in this article apply to all applications that were added to your Azure Active Directory (Azure AD) tenant via user or admin consent. For more information on consenting to applications, see [User and admin consent](user-admin-consent-overview.md).
## Prerequisites
To review permissions granted to applications, you need:
- One of the following roles: Global Administrator, Cloud Application Administrator, or Application Administrator.
- Alternatively, a service principal owner who isn't an administrator is able to invalidate refresh tokens.
-## Review application permissions
+## Review permissions
:::zone pivot="portal"
Each option generates PowerShell scripts that enable you to control user access
:::zone pivot="aad-powershell"
+## Revoke permissions
+
+The following Azure AD PowerShell script revokes all permissions granted to an application.
+
```powershell
$spOAuth2PermissionsGrants | ForEach-Object {
# Get all application permissions for the service principal
$spApplicationPermissions = Get-AzureADServiceAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
-# Remove all delegated permissions
+# Remove all application permissions
$spApplicationPermissions | ForEach-Object { Remove-AzureADServiceAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId }
$sp = Get-MgServicePrincipal -ServicePrincipalId "$ServicePrincipalId"
# Example: Get-MgServicePrincipal -ServicePrincipalId '22c1770d-30df-49e7-a763-f39d2ef9b369'
-# Get all application permissions for the service principal
+# Get all delegated permissions for the service principal
$spOAuth2PermissionsGrants = Get-MgOauth2PermissionGrant -All | Where-Object { $_.ClientId -eq $sp.Id }

# Remove all delegated permissions
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
Previously updated : 10/03/2022 Last updated : 11/04/2022
To use this feature, you need:
* A user who's a **Global Administrator** or **Security Administrator** for the Azure AD tenant.
* An Azure AD Premium 1 or Premium 2 [license](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), to access the Azure AD sign-in logs in the Azure portal.
-Depending on where you want to route the audit log data, you need either of the following:
+Depending on where you want to route the audit log data, you need one of the following endpoints:
* An Azure storage account that you have *ListKeys* permissions for. We recommend that you use a general storage account and not a Blob storage account. For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage).
* An Azure Event Hubs namespace to integrate with third-party solutions.
## Cost considerations
-If you already have an Azure AD license, you need an Azure subscription to set up the storage account and Event Hub. The Azure subscription comes at no cost, but you have to pay to utilize Azure resources, including the storage account that you use for archival and the Event Hub that you use for streaming. The amount of data and, thus, the cost incurred, can vary significantly depending on the tenant size.
+If you already have an Azure AD license, you need an Azure subscription to set up the storage account and Event Hubs. The Azure subscription comes at no cost, but you have to pay to utilize Azure resources, including the storage account that you use for archival and the Event Hubs that you use for streaming. The amount of data and, thus, the cost incurred, can vary significantly depending on the tenant size.
### Storage size for activity logs
The following table contains a cost estimate that depends on the size of the tenant:
| Sign-ins | 100,000 | 15&nbsp;million | 1.7 TB | $35.41 | $424.92 |
-### Event Hub messages for activity logs
+### Event Hubs messages for activity logs
-Events are batched into approximately five-minute intervals and sent as a single message that contains all the events within that timeframe. A message in the Event Hub has a maximum size of 256 KB, and if the total size of all the messages within the timeframe exceeds that volume, multiple messages are sent.
+Events are batched into approximately five-minute intervals and sent as a single message that contains all the events within that timeframe. A message in Event Hubs has a maximum size of 256 KB. If the total size of all the events within the timeframe exceeds that volume, multiple messages are sent.
For example, about 18 events per second ordinarily occur for a large tenant of more than 100,000 users, a rate that equates to 5,400 events every five minutes. Because audit logs are about 2 KB per event, this equates to 10.8 MB of data. Therefore, 43 messages are sent to the Event Hub in that five-minute interval.
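The batching arithmetic above can be checked with a small sketch (the helper function is illustrative; the 2 KB per event and 256 KB per message figures come from the text):

```python
import math

MAX_MESSAGE_KB = 256          # maximum Event Hubs message size cited in the text
EVENT_SIZE_KB = 2             # approximate size of one audit log event
BATCH_INTERVAL_SECONDS = 300  # events are batched into ~5-minute intervals

def messages_per_interval(events_per_second):
    """Estimate how many Event Hubs messages one batch interval produces."""
    events = events_per_second * BATCH_INTERVAL_SECONDS
    total_kb = events * EVENT_SIZE_KB
    return math.ceil(total_kb / MAX_MESSAGE_KB)

# A large tenant at ~18 events/second: 5,400 events -> 10.8 MB -> 43 messages
print(messages_per_interval(18))
```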
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
Title: Sign-in logs in Azure Active Directory - preview | Microsoft Docs
-description: Overview of the sign-in logs in Azure Active Directory including new features in preview.
+ Title: Sign-in logs (preview) in Azure Active Directory | Microsoft Docs
+description: Conceptual information about Azure AD sign-in logs, including new features in preview.
Previously updated : 10/03/2022 Last updated : 11/04/2022
-# Sign-in logs in Azure Active Directory - preview
+# Sign-in logs in Azure Active Directory (preview)
-As an IT administrator, you want to know how your IT environment is doing. The information about your system's health enables you to assess whether and how you need to respond to potential issues.
+Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure Active Directory (Azure AD) are a powerful type of [activity log](overview-reports.md) that IT administrators can analyze. This article explains how to access and utilize the sign-in logs.
-To support you with this goal, the Azure Active Directory (Azure AD) portal gives you access to three activity logs:
-
-- **[Sign-in](concept-sign-ins.md)** – Information about sign-ins and how your resources are used by your users.
+Two other activity logs are also available to help monitor the health of your tenant:
- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant, such as users and group management or updates applied to your tenant's resources.
- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by a provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
-The classic sign-in log in Azure AD provides you with an overview of interactive user sign-ins. Three additional sign-in logs are now in preview:
+The classic sign-in logs in Azure AD provide you with an overview of interactive user sign-ins. Three more sign-in logs are now in preview:
- Non-interactive user sign-ins
- Service principal sign-ins
- Managed identities for Azure resources sign-ins
-This article gives you an overview of the sign-in activity report with the preview of non-interactive, application, and managed identities for Azure resources sign-ins. For information about the sign-in report without the preview features, see [Sign-in logs in Azure Active Directory](concept-sign-ins.md).
-
-## What can you do with it?
-
-The sign-in log provides answers to questions like:
--- What is the sign-in pattern of a user, application or service?
+This article gives you an overview of the sign-in activity report with the preview of non-interactive, application, and managed identities for Azure resources sign-ins. For information about the sign-in report without the preview features, see [Sign-in logs in Azure Active Directory](concept-sign-ins.md).
-- How many users, apps or services have signed in over a week?
+## How do you access the sign-in logs?
-- What's the status of these sign-ins?
+You can always access your own sign-in history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com).
+To access the sign-ins log for a tenant, you must have one of the following roles:
-## Who can access the data?
+- Global Administrator
+- Security Administrator
+- Security Reader
+- Global Reader
+- Reports Reader
-- Users in the Security Administrator, Security Reader, and Report Reader roles
+The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can also access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. If there were no data activities before you upgraded to a premium license, it will take a couple of days for the data to show up in Graph after the upgrade.
-- Global Administrators
+**To access the Azure AD sign-ins log preview:**
-- Any user (non-admins) can access their own sign-ins
+1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role.
+1. Go to **Azure Active Directory** > **Sign-ins log**.
+1. Select the **Try out our new sign-ins preview** link.
-## What Azure AD license do you need?
+ ![Screenshot of the preview link on the sign-in logs page.](./media/concept-all-sign-ins/sign-in-logs-preview-link.png)
-The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you also can access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. It will take a couple of days for the data to show up in Graph after you upgrade to a premium license with no data activities before the upgrade.
+ To toggle back to the legacy view, select the **Click here to leave the preview** link.
-## Where can you find it in the Azure portal?
+ ![Screenshot of the leave preview link on the sign-in logs page.](./media/concept-all-sign-ins/sign-in-logs-leave-preview-link.png)
-The Azure portal provides you with several options to access the log. For example, on the Azure Active Directory menu, you can open the log in the **Monitoring** section.
+You can also access the sign-in logs from the following areas of Azure AD:
-![Screenshot of the sign-in logs menu option.](./media/concept-sign-ins/sign-ins-logs-menu.png)
+- Users
+- Groups
+- Enterprise applications
-Additionally, you can access the sign-in log using this link: [https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns)
+On the sign-in logs page, you can switch between:
-On the sign-ins page, you can switch between:
+- **Interactive user sign-ins:** Sign-ins where a user provides an authentication factor, such as a password, a response through an MFA app, a biometric factor, or a QR code.
-- **Interactive user sign-ins** - Sign-ins where a user provides an authentication factor, such as a password, a response through an MFA app, a biometric factor, or a QR code.
+- **Non-interactive user sign-ins:** Sign-ins performed by a client on behalf of a user. These sign-ins don't require any interaction or authentication factor from the user. For example, authentication and authorization using refresh and access tokens that don't require a user to enter credentials.
-- **Non-interactive user sign-ins** - Sign-ins performed by a client on behalf of a user. These sign-ins don't require any interaction or authentication factor from the user. For example, authentication and authorization using refresh and access tokens that don't require a user to enter credentials.
-
-- **Service principal sign-ins** - Sign-ins by apps and service principals that do not involve any user. In these sign-ins, the app or service provides a credential on its own behalf to authenticate or access resources.
-
-- **Managed identities for Azure resources sign-ins** - Sign-ins by Azure resources that have secrets managed by Azure. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)
+- **Service principal sign-ins:** Sign-ins by apps and service principals that don't involve any user. In these sign-ins, the app or service provides a credential on its own behalf to authenticate or access resources.
+- **Managed identities for Azure resources sign-ins:** Sign-ins by Azure resources that have secrets managed by Azure. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)
![Screenshot of the sign-in log types.](./media/concept-all-sign-ins/sign-ins-report-types.png)
-Each tab on the sign-ins page shows the default columns below. Some tabs have additional columns:
-- Sign-in date
-- Request ID
-- User name or user ID
+## View the sign-ins log
-- Application name or application ID
-- Status of the sign-in
-- IP address of the device used for the sign-in
+To more effectively view the sign-ins log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down.
### Interactive user sign-ins
-Interactive user sign-ins are sign-ins where a user provides an authentication factor to Azure AD or interacts directly with Azure AD or a helper app, such as the Microsoft Authenticator app. The factors users provide include passwords, responses to MFA challenges, biometric factors, or QR codes that a user provides to Azure AD or to a helper app.
-
-> [!NOTE]
-> This log also includes federated sign-ins from identity providers that are federated to Azure AD.
+In interactive user sign-ins, a user provides an authentication factor to Azure AD or interacts directly with Azure AD or a helper app, such as the Microsoft Authenticator app. Users can provide passwords, responses to MFA challenges, biometric factors, or QR codes to Azure AD or to a helper app. This log also includes federated sign-ins from identity providers that are federated to Azure AD.
> [!NOTE]
-> The interactive user sign-in log used to contain some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non-interactive, they were included in the interactive user sign-in log for additional visibility. Once the non-interactive user sign-in log entered public preview in November 2020, those non-interactive sign-in logs were moved to the non-interactive user sign in log for increased accuracy.
-
+> The interactive user sign-in log previously contained some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non-interactive, they were included in the interactive user sign-in log for additional visibility. Once the non-interactive user sign-in log entered public preview in November 2020, those non-interactive sign-in logs were moved to the non-interactive user sign in log for increased accuracy.
-**Report size:** small <br>
+**Report size:** Small <br>
**Examples:**

- A user provides username and password in the Azure AD sign-in screen.
- A user passes an SMS MFA challenge.
- A user provides a biometric gesture to unlock their Windows PC with Windows Hello for Business.
- A user is federated to Azure AD with an AD FS SAML assertion.

In addition to the default fields, the interactive sign-in log also shows:

- The sign-in location
-- Whether conditional access has been applied
+- Whether Conditional Access has been applied
You can customize the list view by clicking **Columns** in the toolbar.
-![Screenshot of the interactive user sign-in columns that can be customized.](./media/concept-all-sign-ins/columns-interactive.png "Interactive user sign-in columns")
-
-Customizing the view enables you to display additional fields or remove fields that are already displayed.
-
-![Screenshot of all interactive columns.](./media/concept-all-sign-ins/all-interactive-columns.png)
+![Screenshot of the customize columns button.](./media/concept-all-sign-ins/sign-in-logs-columns-preview.png)
### Non-interactive user sign-ins
-Non-interactive user sign-ins are sign-ins that were performed by a client app or OS components on behalf of a user. Like interactive user sign-ins, these sign-ins are done on behalf of a user. Unlike interactive user sign-ins, these sign-ins do not require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user. In general, the user will perceive these sign-ins as happening in the background of the userΓÇÖs activity.
-
+Like interactive user sign-ins, non-interactive sign-ins are done on behalf of a user. These sign-ins are performed by a client app or OS components and don't require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of the user. In general, the user will perceive these sign-ins as happening in the background.
-**Report size:** Large <br>
+**Report size:** Large <br>
**Examples:**

- A client app uses an OAuth 2.0 refresh token to get an access token.
- A client uses an OAuth 2.0 authorization code to get an access token and refresh token.
- A user performs single sign-on (SSO) to a web or Windows app on an Azure AD joined PC (without providing an authentication factor or interacting with an Azure AD prompt).
- A user signs in to a second Microsoft Office app while they have a session on a mobile device using FOCI (Family of Client IDs).

In addition to the default fields, the non-interactive sign-in log also shows:

- Resource ID
- Number of grouped sign-ins

You can't customize the fields shown in this report.
-![Screenshot of the disabled columns option.](./media/concept-all-sign-ins/disabled-columns.png "Disabled columns")
+![Screenshot of the disabled columns option.](./media/concept-all-sign-ins/disabled-columns.png)
-To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period, which share all the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the user or client do not change state, the IP address, resource, and all other information is the same for each access token request. When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins will be from the same entity are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) will have a value greater than 1 in the # sign-ins column. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the non-interactive users when the following data matches:
+To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period. The non-interactive sign-ins share the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the state of the user or client doesn't change, the IP address, resource, and all other information is the same for each access token request. The only state that does change is the date and time of the sign-in.
-- Application
+When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins will be from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) will have a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can be set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
-- User
+Sign-ins are aggregated in the non-interactive user log when the following data matches:
+- Application
+- User
- IP address
- Status
- Resource ID

The IP address of non-interactive sign-ins doesn't match the actual source IP of where the refresh token request is coming from. Instead, it shows the original IP used for the original token issuance.
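Conceptually, the aggregation described above works like grouping rows on those matching fields. A minimal sketch (the record fields are illustrative, not the actual log schema):

```python
from collections import Counter

def aggregate_sign_ins(events):
    """Count sign-ins that share application, user, IP address, status, and resource ID."""
    return Counter(
        (e["application"], e["user"], e["ip"], e["status"], e["resource_id"])
        for e in events
    )

events = [
    {"application": "App", "user": "alice", "ip": "1.2.3.4", "status": "Success", "resource_id": "r1"},
    {"application": "App", "user": "alice", "ip": "1.2.3.4", "status": "Success", "resource_id": "r1"},
    {"application": "App", "user": "bob", "ip": "5.6.7.8", "status": "Success", "resource_id": "r1"},
]
# Two identical sign-ins collapse into one aggregated row with # sign-ins = 2
counts = aggregate_sign_ins(events)
```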
-## Service principal sign-ins
+### Service principal sign-ins
-Unlike interactive and non-interactive user sign-ins, service principal sign-ins do not involve a user. Instead, they are sign-ins by any non-user account, such as apps or service principals (except managed identity sign-in, which are in included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret to authenticate or access resources.
+Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any non-user account, such as apps or service principals (except managed identity sign-ins, which are included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret, to authenticate or access resources.
-**Report size:** Large <br>
+**Report size:** Large </br>
**Examples:**

- A service principal uses a certificate to authenticate and access the Microsoft Graph.
- An application uses a client secret to authenticate in the OAuth Client Credentials flow.
-This report has a default list view that shows:
-- Sign-in date
-- Request ID
-- Service principal name or ID
-- Status
-- IP address
-- Resource name
-- Resource ID
-- Number of sign-ins

You can't customize the fields shown in this report.
-![Disabled columns](./media/concept-all-sign-ins/disabled-columns.png "Disabled columns")
To make it easier to digest the data in the service principal sign-in logs, service principal sign-in events are grouped. Sign-ins from the same entity under the same conditions are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the service principal report when the following data matches:

- Service principal name or ID
- Status
- IP address
- Resource name or ID
-## Managed identity for Azure resources sign-ins
+### Managed identity for Azure resources sign-ins
-Managed identity for Azure resources sign-ins are sign-ins that were performed by resources that have their secrets managed by Azure to simplify credential management.
+Managed identities for Azure resources sign-ins are sign-ins that were performed by resources that have their secrets managed by Azure to simplify credential management. A VM with managed credentials uses Azure AD to get an Access Token.
-**Report size:** Small <br>
+**Report size:** Small <br>
**Examples:**
-A VM with managed credentials uses Azure AD to get an Access Token.
--
-This report has a default list view that shows:
-- Managed identity ID
-- Managed identity Name
-- Resource
-- Resource ID
-- Number of grouped sign-ins
-You can't customize the fields shown in this report.
+You can't customize the fields shown in this report.
To make it easier to digest the data in the managed identities for Azure resources sign-in logs, sign-in events are grouped. Sign-ins from the same entity are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the managed identities report when all of the following data matches:

- Managed identity name or ID
- Status
- Resource name or ID
-Select an item in the list view to display all sign-ins that are grouped under a node.
-
-Select a grouped item to see all details of the sign-in.
--
-## Sign-in error code
-
-If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item.
+Select an item in the list view to display all sign-ins that are grouped under a node. Select a grouped item to see all details of the sign-in.
-![Screenshot shows a detailed information view.](./media/concept-all-sign-ins/error-code.png)
-
-While the log item provides you with a failure reason, there are cases where you might get more information using the [sign-in error lookup tool](https://login.microsoftonline.com/error). For example, if available, this tool provides you with remediation steps.
-
-![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
-## Filter sign-in activities
+### Filter the results
-By setting a filter, you can narrow down the scope of the returned sign-in data. Azure AD provides you with a broad range of additional filters you can set. When setting your filter, you should always pay special attention to your configured **Date** range filter. A proper date range filter ensures that Azure AD only returns the data you really care about.
+Filtering the sign-ins log is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential.
-The **Date** range filter enables to you to define a timeframe for the returned data.
-Possible values are:
+Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters. Take note of the **Date** range in your filter to ensure that Azure AD only returns the data you need. The filter you configure for interactive sign-ins is persisted for non-interactive sign-ins and vice versa.
-- One month
+Select the **Add filters** option from the top of the table to get started.
-- Seven days
-- Twenty-four hours
-- Custom
-![Date range filter](./media/concept-all-sign-ins/date-range-filter.png)
+![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-all-sign-ins/sign-in-logs-filter-preview.png)
+There are several filter options to choose from. Below are some notable options and details.
+- **User:** The *user principal name* (UPN) of the user in question.
+- **Status:** Options are *Success*, *Failure*, and *Interrupted*.
+- **Resource:** The name of the service used for the sign-in.
+- **Conditional access:** The status of the Conditional Access (CA) policy. Options are:
+ - *Not applied:* No policy applied to the user and application during sign-in.
+ - *Success:* One or more CA policies applied to the user and application (but not necessarily the other conditions) during sign-in.
+ - *Failure:* The sign-in satisfied the user and application condition of at least one CA policy and grant controls are either not satisfied or set to block access.
+- **IP addresses:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
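Conceptually, each filter you add further narrows the result set, like chaining predicates. A minimal sketch (field names are illustrative, not the portal's schema):

```python
def apply_filters(records, **criteria):
    """Keep only records whose fields match every supplied filter value."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

records = [
    {"user": "alice@contoso.com", "status": "Success", "conditional_access": "Not applied"},
    {"user": "alice@contoso.com", "status": "Failure", "conditional_access": "Failure"},
    {"user": "bob@contoso.com", "status": "Failure", "conditional_access": "Success"},
]
# Stack two filters: only alice's failed sign-ins remain
failures = apply_filters(records, user="alice@contoso.com", status="Failure")
```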
+The following table provides the options and descriptions for the **Client app** filter option.
+> [!NOTE]
+> Due to privacy commitments, Azure AD does not populate this field to the home tenant in the case of a cross-tenant scenario.
+
+|Name|Modern authentication|Description|
+||:-:||
|Authenticated SMTP| |Used by POP and IMAP clients to send email messages.|
+|Autodiscover| |Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.|
+|Exchange ActiveSync| |This filter shows all sign-in attempts where the EAS protocol has been attempted.|
+|Browser|![Blue checkmark.](./media/concept-all-sign-ins/check.png)|Shows all sign-in attempts from users using web browsers.|
+|Exchange ActiveSync| | Shows all sign-in attempts from users with client apps using Exchange ActiveSync to connect to Exchange Online.|
+|Exchange Online PowerShell| |Used to connect to Exchange Online with remote PowerShell. If you block basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell module to connect. For instructions, see [Connect to Exchange Online PowerShell using multi-factor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).|
+|Exchange Web Services| |A programming interface that's used by Outlook, Outlook for Mac, and third-party apps.|
+|IMAP4| |A legacy mail client using IMAP to retrieve email.|
+|MAPI over HTTP| |Used by Outlook 2010 and later.|
+|Mobile apps and desktop clients|![Blue checkmark.](./media/concept-all-sign-ins/check.png)|Shows all sign-in attempts from users using mobile apps and desktop clients.|
+|Offline Address Book| |A copy of address list collections that are downloaded and used by Outlook.|
+|Outlook Anywhere (RPC over HTTP)| |Used by Outlook 2016 and earlier.|
+|Outlook Service| |Used by the Mail and Calendar app for Windows 10.|
+|POP3| |A legacy mail client using POP3 to retrieve email.|
+|Reporting Web Services| |Used to retrieve report data in Exchange Online.|
+|Other clients| |Shows all sign-in attempts from users where the client app isn't included or unknown.|
+
+## Analyze the sign-in logs
+
+Now that your sign-in logs table is formatted appropriately, you can more effectively analyze the data. Some common scenarios are described here, but they aren't the only ways to analyze sign-in data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools.
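As an illustration of analyzing exported data, the following Python sketch counts failed sign-ins by error code from a JSON export. It assumes each record follows the Microsoft Graph `signIn` resource shape, where `status.errorCode` is `0` for a successful sign-in; verify the field names against your actual export.

```python
from collections import Counter

def summarize_failures(sign_ins):
    """Count failed sign-ins by error code.

    Assumes records follow the Microsoft Graph signIn resource shape,
    where status.errorCode is 0 for a successful sign-in.
    """
    failures = Counter()
    for record in sign_ins:
        code = record.get("status", {}).get("errorCode", 0)
        if code != 0:
            failures[code] += 1
    return failures

# Hypothetical sample records for illustration.
sample = [
    {"userPrincipalName": "alice@contoso.com", "status": {"errorCode": 0}},
    {"userPrincipalName": "bob@contoso.com", "status": {"errorCode": 50126}},
    {"userPrincipalName": "bob@contoso.com", "status": {"errorCode": 50126}},
]
print(summarize_failures(sample))  # Counter({50126: 2})
```

A tally like this is a quick way to spot whether failures cluster around one error code (for example, repeated invalid-credential attempts) before digging into individual log items.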
+
+### Sign-in error code
+
+If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we can't document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue.
+
+![Screenshot of a sign-in error code.](./media/concept-all-sign-ins/error-code.png)
+
+For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-aadsts-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
+![Screenshot of the error code lookup tool.](./media/concept-all-sign-ins/error-code-lookup-tool.png)
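When working with raw failure messages (for example, from exported logs or support traces), the numeric code you paste into the lookup tool can be pulled out programmatically. This is a minimal sketch; the error string format (`AADSTS` followed by digits) is the documented convention, but the sample message is illustrative.

```python
import re

def extract_error_code(message):
    """Pull the numeric AADSTS error code out of a failure message, if present."""
    match = re.search(r"AADSTS(\d+)", message)
    return int(match.group(1)) if match else None

print(extract_error_code(
    "AADSTS50126: Error validating credentials due to invalid username or password."
))  # 50126
```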
-### Filter user sign-ins
+### Authentication details
-The filter for interactive and non-interactive sign-ins is the same. Because of this, the filter you have configured for interactive sign-ins is persisted for non-interactive sign-ins and vice versa.
+The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt:
+- A list of authentication policies applied, such as Conditional Access or Security Defaults.
+- A list of session lifetime policies applied, such as Sign-in frequency or Remember MFA.
+- The sequence of authentication methods used to sign in.
+- Whether the authentication attempt was successful and the reason why.
+This information allows you to troubleshoot each step in a user's sign-in. Use these details to track:
+- The volume of sign-ins protected by MFA.
+- The reason for the authentication prompt, based on the session lifetime policies.
+- Usage and success rates for each authentication method.
+- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business.
+- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP.
+While viewing the sign-ins log, select a sign-in event, and then select the **Authentication Details** tab.
+![Screenshot of the Authentication Details tab.](media/concept-all-sign-ins/authentication-details-tab.png)
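To track method usage across many events rather than one at a time, the per-attempt details can be tallied from an export. The sketch below assumes each event carries an `authenticationDetails` list with an `authenticationMethod` per step, as in the Microsoft Graph `signIn` resource; adjust the field names to match your data.

```python
from collections import Counter

def count_auth_methods(sign_ins):
    """Tally authentication method usage across sign-in events.

    Assumes each event has an authenticationDetails list whose entries
    carry an authenticationMethod string, per the Graph signIn resource.
    """
    methods = Counter()
    for event in sign_ins:
        for step in event.get("authenticationDetails", []):
            method = step.get("authenticationMethod")
            if method:
                methods[method] += 1
    return methods

# Hypothetical sample events for illustration.
sample = [
    {"authenticationDetails": [
        {"authenticationMethod": "Password"},
        {"authenticationMethod": "OATH verification code"},
    ]},
    {"authenticationDetails": [{"authenticationMethod": "Password"}]},
]
print(count_auth_methods(sample))
```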
-## Access the new sign-in activity logs
+When analyzing authentication details, take note of the following details:
-The sign-ins activity report in the Azure portal provides you with a simple method to switch the preview report on and off. If you have the preview logs enabled, you get a new menu that gives you access to all sign-in activity report types.
+- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).
+- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include:
+ - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
+ - The **Primary authentication** row isn't initially logged.
+## Sign-in data used by other services
-To access the new sign-in logs with non-interactive and application sign-ins:
+Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage.
-1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**.
+### Risky sign-in data in Azure AD Identity Protection
- ![Select Azure AD](./media/concept-all-sign-ins/azure-services.png)
+Sign-in log data visualization that relates to risky sign-ins is available in the **Azure AD Identity Protection** overview, which uses the following data:
-2. In the **Monitoring** section, click **Sign-ins**.
+- Risky users
+- Risky user sign-ins
+- Risky service principals
+- Risky service principal sign-ins
- ![Select sign-ins](./media/concept-all-sign-ins/sign-ins.png)
+For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md).
-3. Click the **Preview** bar.
+![Screenshot of risky users in Identity Protection.](media/concept-all-sign-ins/id-protection-overview.png)
- ![Enable new view](./media/concept-all-sign-ins/enable-new-preview.png)
+### Azure AD application and authentication sign-in activity
-4. To switch back to the default view, click the **Preview** bar again.
+With an application-centric view of your sign-in data, you can answer questions such as:
- ![Restore classic view](./media/concept-all-sign-ins/switch-back.png)
+- Who is using my applications?
+- What are the top three applications in my organization?
+- How is my newest application doing?
+To view application-specific sign-in data, go to **Azure AD** and select **Usage & insights** from the Monitoring section. These reports provide a closer look at sign-ins for Azure AD application activity and AD FS application activity. For more information, see [Azure AD Usage & insights](concept-usage-insights-report.md).
+![Screenshot of the Azure AD application activity report.](media/concept-all-sign-ins/azure-ad-app-activity.png)
+Azure AD Usage & insights also provides the **Authentication methods activity** report, which breaks down authentication by the method used. Use this report to see how many of your users are set up with MFA or passwordless authentication.
+![Screenshot of the Authentication methods report.](media/concept-all-sign-ins/azure-ad-authentication-methods.png)
+### Microsoft 365 activity logs
+You can view Microsoft 365 activity logs from the [Microsoft 365 admin center](/office365/admin/admin-overview/about-the-admin-center). Microsoft 365 activity and Azure AD activity logs share a significant number of directory resources. Only the Microsoft 365 admin center provides a full view of the Microsoft 365 activity logs.
+You can also access the Microsoft 365 activity logs programmatically by using the [Office 365 Management APIs](/office/office-365-management-api/office-365-management-apis-overview).
## Next steps
active-directory Concept Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-audit-logs.md
Previously updated : 10/03/2022 Last updated : 11/04/2022

# Audit logs in Azure Active Directory
-As an IT administrator, you want to know how your IT environment is doing. The information about your system's health enables you to assess whether and how you need to respond to potential issues.
+Azure Active Directory (Azure AD) activity logs include audit logs, which are a comprehensive record of every logged event in Azure AD. Changes to applications, groups, users, and licenses are all captured in the Azure AD audit logs.
-To support you with this goal, the Azure Active Directory (Azure AD) portal gives you access to three activity logs:
+Two other activity logs are also available to help monitor the health of your tenant:
- **[Sign-ins](concept-sign-ins.md)** – Information about sign-ins and how your resources are used by your users.
-- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant such as users and group management or updates applied to your tenant's resources.
- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.

This article gives you an overview of the audit logs.
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Previously updated : 10/05/2022 Last updated : 11/04/2022
# Provisioning logs in Azure Active Directory
-As an IT administrator, you want to know how your IT environment is doing. The information about your system's health enables you to assess whether and how you need to respond to potential issues.
+Azure Active Directory (Azure AD) integrates with several third-party services to provision users into your tenant. If you need to troubleshoot an issue with a provisioned user, you can use the information captured in the Azure AD provisioning logs to help find a solution.
-To support you with this goal, the Azure Active Directory portal gives you access to three activity logs:
+Two other activity logs are also available to help monitor the health of your tenant:
- **[Sign-ins](concept-sign-ins.md)** – Information about sign-ins and how your resources are used by your users.
- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant such as users and group management or updates applied to your tenant's resources.
-- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.

This article gives you an overview of the provisioning logs.

## What can I do with it?

You can use the provisioning logs to find answers to questions like:
You can use the provisioning logs to find answers to questions like:
- What users from Workday were successfully created in Active Directory?
-## How can I access it?
+## How do you access the provisioning logs?
-To view the provisioning activity report, your tenant must have an Azure AD Premium license associated with it. To upgrade your Azure AD edition, see [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md).
+To view the provisioning logs, your tenant must have an Azure AD Premium license associated with it. To upgrade your Azure AD edition, see [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md).
Application owners can view logs for their own applications. The following roles are required to view provisioning logs:
To access the provisioning log data, you have the following options:
- Download the provisioning logs as a CSV or JSON file.
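Once downloaded as JSON, the log can be sliced in code. This sketch filters for failed provisioning events; the field path used (`provisioningStatusInfo.status`) is an assumption based on the Microsoft Graph `provisioningObjectSummary` resource, so adjust it to match the shape of your actual export.

```python
def failed_provisioning_events(events):
    """Return provisioning events whose status indicates failure.

    The provisioningStatusInfo.status path is an assumption based on the
    Graph provisioningObjectSummary resource; verify against your export.
    """
    return [
        e for e in events
        if e.get("provisioningStatusInfo", {}).get("status", "").lower() == "failure"
    ]

# Hypothetical sample export for illustration.
sample = [
    {"id": "1", "provisioningStatusInfo": {"status": "success"}},
    {"id": "2", "provisioningStatusInfo": {"status": "failure"}},
]
print([e["id"] for e in failed_provisioning_events(sample)])  # ['2']
```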
-## What is the default view?
+## View the provisioning logs
-A provisioning log has a default list view that shows:
+To more effectively view the provisioning log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down.
-- The identity
-- The action
-- The source system
-- The target system
-- The status
-- The date
+### Customize the layout
-You can customize the list view by selecting **Columns** on the toolbar.
+The provisioning log has a default view, but you can customize columns.
+
+1. Select **Columns** from the menu at the top of the log.
+1. Select the columns you want to view and select the **Save** button at the bottom of the window.
![Screenshot that shows the button for customizing columns.](./media/concept-provisioning-logs/column-chooser.png "Column chooser")

This area enables you to display more fields or remove fields that are already displayed.
-![Screenshot that shows available columns with some selected.](./media/concept-provisioning-logs/available-columns.png "Available columns")
-
-Select an item from the list to get more detailed information, such as the steps taken to provision the user and tips for troubleshooting issues.
-
-![Screenshot that shows detailed information.](./media/concept-provisioning-logs/steps.png "Filter")
--
-## Filter provisioning activities
+## Filter the results
When you filter your provisioning data, some filter values are dynamically populated based on your tenant. For example, if you don't have any "create" events in your tenant, there won't be a **Create** filter option.
-In the default view, you can select the following filters:
-- Identity
-- Date
-- Status
-- Action
-![Screenshot that shows filter values.](./media/concept-provisioning-logs/default-filter.png "Filter")
- The **Identity** filter enables you to specify the name or the identity that you care about. This identity might be a user, group, role, or other object.
-You can search by the name or ID of the object. The ID varies by scenario. For example, when you're provisioning an object from Azure AD to Salesforce, the source ID is the object ID of the user in Azure AD.
-The target ID is the ID of the user at Salesforce. When you're provisioning from Workday to Active Directory, the source ID is the Workday worker employee ID.
+You can search by the name or ID of the object. The ID varies by scenario.
+- If you're provisioning an object *from Azure AD to Salesforce*, the **source ID** is the object ID of the user in Azure AD. The **target ID** is the ID of the user at Salesforce.
+- If you're provisioning *from Workday to Azure AD*, the **source ID** is the Workday worker employee ID. The **target ID** is the ID of the user in Azure AD.
> [!NOTE]
> The name of the user might not always be present in the **Identity** column. There will always be one ID.

The **Date** filter enables you to define a timeframe for the returned data. Possible values are:

- One month
- Seven days
- 30 days
- 24 hours
-- Custom time interval
-When you select a custom time frame, you can configure a start date and an end date.
+- Custom time interval (configure a start date and an end date)
The **Status** filter enables you to select:
The **Action** filter enables you to filter these actions:
In addition to the filters of the default view, you can set the following filters.
-![Screenshot that shows fields that you can add as filters.](./media/concept-provisioning-logs/add-filter.png "Pick a field")
- **Job ID**: A unique job ID is associated with each application that you've enabled provisioning for.
- **Cycle ID**: The cycle ID uniquely identifies the provisioning cycle. You can share this ID with product support to look up the cycle in which this event occurred.
In addition to the filters of the default view, you can set the following filter
- **Application**: You can show only records of applications with a display name that contains a specific string.
-## Provisioning details
-
-When you select an item in the provisioning list view, you get more details about this item. The details are grouped into the following tabs.
+## Analyze the provisioning logs
-![Screenshot that shows four tabs that contain provisioning details.](./media/concept-provisioning-logs/provisioning-tabs.png "Tabs")
+When you select an item in the provisioning list view, you get more details about this item, such as the steps taken to provision the user and tips for troubleshooting issues. The details are grouped into four tabs.
- **Steps**: Outlines the steps taken to provision an object. Provisioning an object can consist of four steps:
Use the following table to better understand how to resolve errors that you find
|Error code|Description|
|---|---|
-|Conflict, EntryConflict|Correct the conflicting attribute values in either Azure AD or the application. Or, review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.|
+|Conflict,<br>EntryConflict|Correct the conflicting attribute values in either Azure AD or the application. Or, review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.|
|TooManyRequests|The target app rejected this attempt to update the user because it's overloaded and receiving too many requests. There's nothing to do. This attempt will automatically be retried. Microsoft has also been notified of this issue.|
|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing it from working. This attempt will automatically be retried in 40 minutes.|
-|InsufficientRights, MethodNotAllowed, NotPermitted, Unauthorized| Azure AD authenticated with the target application but wasn't authorized to perform the update. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md).|
+|InsufficientRights,<br>MethodNotAllowed,<br>NotPermitted,<br>Unauthorized| Azure AD authenticated with the target application but wasn't authorized to perform the update. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md).|
|UnprocessableEntity|The target application returned an unexpected response. The configuration of the target application might not be correct, or a service issue with the target application might be preventing it from working.|
|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There's nothing to do. This attempt will automatically be retried in 40 minutes.|
|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Ensure that the user exists. To force a new matching of all users, use the Microsoft Graph API to [restart the job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). <br><br>Restarting provisioning will trigger an initial cycle, which can take time to complete. Restarting provisioning also deletes the cache that the provisioning service uses to operate. That means all users and groups in the tenant will have to be evaluated again, and certain provisioning events might be dropped.|
|NotImplemented | The target app returned an unexpected response. The configuration of the app might not be correct, or a service issue with the target app might be preventing it from working. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md). |
-|MandatoryFieldsMissing, MissingValues |The user couldn't be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure that the required fields aren't omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.|
+|MandatoryFieldsMissing,<br>MissingValues |The user couldn't be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure that the required fields aren't omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.|
|SchemaAttributeNotFound |The operation couldn't be performed because an attribute was specified that doesn't exist in the target application. See the [documentation](../app-provisioning/customize-application-attributes.md) on attribute customization and ensure that your configuration is correct.|
|InternalError |An internal service error occurred within the Azure AD provisioning service. There's nothing to do. This attempt will automatically be retried in 40 minutes.|
|InvalidDomain |The operation couldn't be performed because an attribute value contains an invalid domain name. Update the domain name on the user or add it to the permitted list in the target application. |
Use the following table to better understand how to resolve errors that you find
|DuplicateSourceEntries | The operation couldn't be completed because more than one user was found with the configured matching attributes. Remove the duplicate user, or [reconfigure your attribute mappings](../app-provisioning/customize-application-attributes.md).|
|ImportSkipped | When each user is evaluated, the system tries to import the user from the source system. This error commonly occurs when the user who's being imported is missing the matching property defined in your attribute mappings. Without a value present on the user object for the matching attribute, the system can't evaluate scoping, matching, or export changes. The presence of this error doesn't indicate that the user is in scope, because you haven't yet evaluated scoping for the user.|
|EntrySynchronizationSkipped | The provisioning service has successfully queried the source system and identified the user. No further action was taken on the user and they were skipped. The user might have been out of scope, or the user might have already existed in the target system with no further changes required.|
-|SystemForCrossDomainIdentityManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. For example, if you do a [GET Group request](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group) to retrieve a group, provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, you'll get this error.|
-|SystemForCrossDomainIdentityManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).|
+|SystemForCrossDomainIdentity<br>ManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. For example, if you do a [GET Group request](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group) to retrieve a group, provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, you'll get this error.|
+|SystemForCrossDomainIdentity<br>ManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).|
|SchemaPropertyCanOnlyAcceptValue|The property in the target system can only accept one value, but the property in the source system has multiple. Ensure that you either map a single-valued attribute to the property that is throwing an error, update the value in the source to be single-valued, or remove the attribute from the mappings.|

## Next steps
active-directory Concept Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-reporting-api.md
Title: Get started with the Azure AD reporting API | Microsoft Docs
description: How to get started with the Azure Active Directory reporting API
-Previously updated : 08/26/2022
+Last updated : 11/04/2022

# Get started with the Azure Active Directory reporting API
-Azure Active Directory provides you with a variety of [reports](overview-reports.md), containing useful information for applications such as SIEM systems, audit, and business intelligence tools.
-
-By using the Microsoft Graph API for Azure AD reports, you can gain programmatic access to the data through a set of REST-based APIs. You can call these APIs from a variety of programming languages and tools.
-
-This article provides you with an overview of the reporting API, including ways to access it.
+Azure Active Directory provides you with several [reports](overview-reports.md), containing information that is useful to applications such as security information and event management (SIEM) systems, audit, and business intelligence tools. By using the Microsoft Graph API for Azure AD reports, you can gain programmatic access to the data through a set of REST-based APIs. You can call these APIs from various programming languages and tools.
-If you run into issues, see [how to get support for Azure Active Directory](../fundamentals/active-directory-troubleshooting-support-howto.md).
+This article provides you with an overview of the reporting API, including ways to access it. If you run into issues, see [how to get support for Azure Active Directory](../fundamentals/active-directory-troubleshooting-support-howto.md).
## Prerequisites

To access the reporting API, with or without user intervention, you need to:
-1. Assign roles (Security Reader, Security Admin, Global Admin)
-2. Register an application
-3. Grant permissions
-4. Gather configuration settings
+1. Confirm your roles and licenses
+1. Register an application
+1. Grant permissions
+1. Gather configuration settings
For detailed instructions, see the [prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md).

## API Endpoints
-The Microsoft Graph API endpoint for audit logs is `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits` and the Microsoft Graph API endpoint for sign-ins is `https://graph.microsoft.com/v1.0/auditLogs/signIns`. For more information, see the [audit API reference](/graph/api/resources/directoryaudit) and [sign-in API reference](/graph/api/resources/signIn).
+Microsoft Graph API endpoints:
+- **Audit logs:** `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits`
+- **Sign-in logs:** `https://graph.microsoft.com/v1.0/auditLogs/signIns`
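As a sketch of how these endpoints are queried, the following Python helper builds a sign-in log request URL with optional OData `$filter` and `$top` parameters. The helper name and the filter expression are illustrative assumptions, not part of the API surface; see the Graph `signIn` reference for the full set of filterable properties.

```python
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def sign_ins_url(user_principal_name=None, top=None):
    """Build a sign-in log query URL with optional OData parameters.

    The $filter expression is a sketch; consult the Graph signIn API
    reference for supported filter properties and operators.
    """
    url = f"{GRAPH_BASE}/auditLogs/signIns"
    params = []
    if user_principal_name:
        params.append(
            "$filter=" + quote(f"userPrincipalName eq '{user_principal_name}'")
        )
    if top:
        params.append(f"$top={top}")
    return url + ("?" + "&".join(params) if params else "")

print(sign_ins_url("alice@contoso.com", top=10))
```

The resulting URL would then be sent with an OAuth bearer token obtained through the registered application described in the prerequisites.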
-You can use the [Identity Protection risk detections API](/graph/api/resources/identityprotection-root) to gain programmatic access to security detections using Microsoft Graph. For more information, see [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md).
-
-You can also use the [provisioning logs API](/graph/api/resources/provisioningobjectsummary) to get programmatic access to provisioning events in your tenant.
+Programmatic access APIs:
+- **Security detections:** [Identity Protection risk detections API](/graph/api/resources/identityprotection-root)
+- **Tenant provisioning events:** [Provisioning logs API](/graph/api/resources/provisioningobjectsummary)
+
+Check out the following helpful resources for Microsoft Graph API:
+- [Audit log API reference](/graph/api/resources/directoryaudit)
+- [Sign-in log API reference](/graph/api/resources/signIn)
+- [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md)
+
## APIs with Microsoft Graph Explorer
-You can use the [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) to verify your sign-in and audit API data. Make sure to sign in to your account using both of the sign-in buttons in the Graph Explorer UI, and set **AuditLog.Read.All** and **Directory.Read.All** permissions for your tenant as shown.
+You can use the [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) to verify your sign-in and audit API data. Sign in to your account using both of the sign-in buttons in the Graph Explorer UI, and set **AuditLog.Read.All** and **Directory.Read.All** permissions for your tenant as shown.
![Graph Explorer](./media/concept-reporting-api/graph-explorer.png)
active-directory Concept Sign In Diagnostics Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-in-diagnostics-scenarios.md
Title: Sign in diagnostics for Azure AD scenarios
description: Lists the scenarios that are supported by the sign-in diagnostics for Azure AD.
-Previously updated : 08/26/2022
-# Customer intent: As an Azure AD administrator, I want to know the scenarios that are supported by the sign in diagnostics for Azure AD so that I can determine whether the tool can help me with a sign-in issue.
+Last updated : 11/04/2022
+# Customer intent: As an Azure AD administrator, I want to know the scenarios that are supported by the sign in diagnostics for Azure AD so that I can determine whether the tool can help me with a sign-in issue.

# Sign in diagnostics for Azure AD scenarios
The sign-in diagnostic for Azure AD provides you with support for the following
- Pass Through Authentication
- - Seamless single sign on
+ - Seamless single sign-on
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
Title: Sign-in logs in Azure Active Directory | Microsoft Docs
-description: Overview of the sign-in logs in Azure Active Directory.
+description: Conceptual information about Azure AD sign-in logs.
Previously updated : 10/06/2022 Last updated : 11/04/2022

# Sign-in logs in Azure Active Directory
-As an IT administrator, you want to know how your IT environment is doing. The information about your system's health enables you to assess whether and how you need to respond to potential issues.
+Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure Active Directory (Azure AD) are a powerful type of [activity log](overview-reports.md) that IT administrators can analyze. This article explains how to access and utilize the sign-in logs.
-To support you with this goal, the Azure Active Directory portal gives you access to three activity logs:
+Two other activity logs are also available to help monitor the health of your tenant:
+- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant, such as users and group management or updates applied to your tenant's resources.
+- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by a provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
-- **[Sign-ins](concept-sign-ins.md)** – Information about sign-ins and how your resources are used by your users.-- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant such as users and group management or updates applied to your tenant's resources.-- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.-
-This article gives you an overview of the sign-ins report.
--
-## What can you do with it?
+## What can you do with sign-in logs?
You can use the sign-ins log to find answers to questions like:
You can use the sign-ins log to find answers to questions like:
- What's the status of these sign-ins?
+## How do you access the sign-in logs?
-## Who can access it?
-
-You can always access your own sign-ins history using this link: [https://mysignins.microsoft.com](https://mysignins.microsoft.com)
-
-To access the sign-ins log, you need to be:
--- A global administrator--- A user in one of the following roles:
- - Security administrator
-
- - Security reader
-
- - Global reader
-
- - Reports reader
-## What Azure AD license do you need?
-
-The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you also can access the sign-in activity report through the Microsoft Graph API.
--
-## Where can you find it in the Azure portal?
-
-The Azure portal provides you with several options to access the log. For example, on the Azure Active Directory menu, you can open the log in the **Monitoring** section.
-
-![Open sign-in logs](./media/concept-sign-ins/sign-ins-logs-menu.png)
-
-Additionally, you can get directly get to the sign-in logs using this link: [https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns)
--
-## What is the default view?
-
-A sign-ins log has a default list view that shows:
--- The sign-in date-- The related user-- The application the user has signed in to-- The sign-in status-- The status of the risk detection-- The status of the multi-factor authentication (MFA) requirement-
-![Screenshot shows the Office 365 SharePoint Online Sign-ins.](./media/concept-sign-ins/sign-in-activity.png "Sign-in activity")
-
-You can customize the list view by clicking **Columns** in the toolbar.
-
-![Screenshot shows the Columns option in the Sign-ins page.](./media/concept-sign-ins/19.png "Sign-in activity")
-
-The **Columns** dialog gives you access to the selectable attributes. In a sign-in report, you can't have fields
-that have more than one value for a given sign-in request as column. This is, for example, true for authentication details, conditional access data and network location.
-
-![Screenshot shows the Columns dialog box where you can select attributes.](./media/concept-sign-ins/columns.png "Sign-in activity")
-## Sign-in error code
-
-If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item.
-
-![sign-in error code](./media/concept-all-sign-ins/error-code.png)
-
-While the log item provides you with a failure reason, there are cases where you might get more information using the [sign-in error lookup tool](https://login.microsoftonline.com/error). For example, if available, this tool provides you with remediation steps.
-
-![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
--
+You can always access your own sign-ins history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com).
-## Filter sign-in activities
+To access the sign-ins log for a tenant, you must have one of the following roles:
+- Global Administrator
+- Security Administrator
+- Security Reader
+- Global Reader
+- Reports Reader
-You can filter the data in a log to narrow it down to a level that works for you:
+The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. If the tenant had no data activities before the upgrade to a premium license, it can take a couple of days for the data to show up in Graph.
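As a sketch of how such a Graph query might look: the endpoint below is the Graph `auditLogs/signIns` resource, while the filter value, function names, and token acquisition are illustrative assumptions, not part of this article.

```python
# Sketch: query Azure AD sign-in logs through Microsoft Graph.
# Assumes an Azure AD P1/P2 tenant and a bearer token with AuditLog.Read.All;
# how you acquire the token is out of scope here.
import json
import urllib.request
from urllib.parse import quote

GRAPH_SIGNINS = "https://graph.microsoft.com/v1.0/auditLogs/signIns"

def build_signins_url(user_principal_name=None, top=10):
    """Build a Graph URL, optionally filtered to one user's sign-ins."""
    url = f"{GRAPH_SIGNINS}?$top={top}"
    if user_principal_name:
        # OData filter on the userPrincipalName property
        flt = quote(f"userPrincipalName eq '{user_principal_name}'")
        url += f"&$filter={flt}"
    return url

def fetch_signins(token, user_principal_name=None):
    """Fetch sign-in events; 'token' is a bearer token obtained elsewhere."""
    req = urllib.request.Request(
        build_signins_url(user_principal_name),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]
```

Each returned record is a JSON object describing one sign-in event, which you can then filter or export for longer-term analysis.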
-![Screenshot shows the Add filters option.](./media/concept-sign-ins/04.png "Sign-in activity")
+**To access the Azure AD sign-ins log:**
-**Request ID** - The ID of the request you care about.
+1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role.
+1. Go to **Azure Active Directory** > **Sign-ins log**.
-**User** - The name or the user principal name (UPN) of the user you care about.
+ ![Screenshot of the Monitoring side menu with sign-in logs highlighted.](./media/concept-sign-ins/side-menu-sign-in-logs.png)
-**Application** - The name of the target application.
-
-**Status** - The sign-in status you care about:
+You can also access the sign-in logs from the following areas of Azure AD:
-- Success--- Failure--- Interrupted
+- Users
+- Groups
+- Enterprise applications
+## View the sign-ins log
-**IP address** - The IP address of the device used to connect to your tenant.
+To more effectively view the sign-ins log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down.
-The **Location** - The location the connection was initiated from:
+### Customize the layout
-- City
+The sign-ins log has a default view, but you can customize the view using over 30 column options.
-- State / Province
+1. Select **Columns** from the menu at the top of the log.
+1. Select the columns you want to view and select the **Save** button at the bottom of the window.
-- Country/Region
+![Screenshot of the sign-in logs page with the Columns option highlighted.](./media/concept-sign-ins/sign-in-logs-columns.png)
+### Filter the results <h3 id="filter-sign-in-activities"></h3>
-**Resource** - The name of the service used for the sign-in.
+Filtering the sign-ins log is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential.
+Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters.
-**Resource ID** - The ID of the service used for the sign-in.
+Select the **Add filters** option from the top of the table to get started.
+![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-sign-ins/sign-in-logs-filter.png)
-**Client app** - The type of the client app used to connect to your tenant:
+There are several filter options to choose from. Below are some notable options and details.
-![Client app filter](./media/concept-sign-ins/client-app-filter.png)
+- **User:** The *user principal name* (UPN) of the user in question.
+- **Status:** Options are *Success*, *Failure*, and *Interrupted*.
+- **Resource:** The name of the service used for the sign-in.
+- **Conditional access:** The status of the Conditional Access (CA) policy. Options are:
+ - *Not applied:* No policy applied to the user and application during sign-in.
+ - *Success:* One or more CA policies applied to the user and application (but not necessarily the other conditions) during sign-in.
+ - *Failure:* The sign-in satisfied the user and application condition of at least one CA policy and grant controls are either not satisfied or set to block access.
+- **IP addresses:** There is no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
+The following table provides the options and descriptions for the **Client app** filter option.
> [!NOTE]
> Due to privacy commitments, Azure AD does not populate this field to the home tenant in the case of a cross-tenant scenario.

|Name|Modern authentication|Description|
|---|:-:|---|
|Authenticated SMTP| |Used by POP and IMAP clients to send email messages.|
|Outlook Service| |Used by the Mail and Calendar app for Windows 10.|
|POP3| |A legacy mail client using POP3 to retrieve email.|
|Reporting Web Services| |Used to retrieve report data in Exchange Online.|
-|Other clients| |Shows all sign-in attempts from users where the client app is not included or unknown.|
-**Operating system** - The operating system running on the device used sign-on to your tenant.
--
-**Device browser** - If the connection was initiated from a browser, this field enables you to filter by browser name.
--
-**Correlation ID** - The correlation ID of the activity.
--
+|Other clients| |Shows all sign-in attempts from users where the client app isn't included or unknown.|
+## Analyze the sign-in logs
-**Conditional access** - The status of the applied conditional access rules
+Now that your sign-in logs table is formatted appropriately, you can more effectively analyze the data. Some common scenarios are described here, but they aren't the only ways to analyze sign-in data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools.
-- **Not applied**: No policy applied to the user and application during sign-in.
+### Sign-in error codes
-- **Success**: One or more conditional access policies applied to the user and application (but not necessarily the other conditions) during sign-in.
+If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we cannot document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue.
-- **Failure**: The sign-in satisfied the user and application condition of at least one Conditional Access policy and grant controls are either not satisfied or set to block access.---
-## Sign-ins data shortcuts
-
-Azure AD and the Azure portal both provide you with additional entry points to sign-ins data:
--- The Identity security protection overview-- Users-- Groups-- Enterprise applications-
-### Users sign-ins data in Identity security protection
-
-The user sign-in graph in the **Identity security protection** overview page shows weekly aggregations of sign-ins. The default for the time period is 30 days.
-
-![Screenshot shows a graph of Sign-ins over a month.](./media/concept-sign-ins/06.png "Sign-in activity")
-
-When you click on a day in the sign-in graph, you get an overview of the sign-in activities for this day.
-
-Each row in the sign-in activities list shows:
-
-* Who has signed in?
-* What application was the target of the sign-in?
-* What is the status of the sign-in?
-* What is the MFA status of the sign-in?
-
-By clicking an item, you get more details about the sign-in operation:
--- User ID-- User-- Username-- Application ID-- Application-- Client-- Location-- IP address-- Date-- MFA Required-- Sign-in status-
-> [!NOTE]
-> IP addresses are issued in such a way that there is no definitive connection between an IP address and where the computer with that address is physically located. Mapping IP addresses is complicated by the fact that mobile providers and VPNs issue IP addresses from central pools that are often very far from where the client device is actually used.
-> Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
+![Screenshot of a sign-in error code.](./media/concept-sign-ins/error-code.png)
-On the **Users** page, you get a complete overview of all user sign-ins by clicking **Sign-ins** in the **Activity** section.
+For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-aadsts-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
-![Screenshot shows the Activity section where you can select Sign-ins.](./media/concept-sign-ins/08.png "Sign-in activity")
+![Screenshot of the error code lookup tool.](./media/concept-sign-ins/error-code-lookup-tool.png)
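When triaging many exported failure events, the AADSTS code can be pulled out of a failure message with a small helper. This is a sketch; it assumes the message embeds the code in the `AADSTS<number>` form shown in the sign-in log details.

```python
import re

def extract_aadsts_code(message: str):
    """Return the AADSTS error code embedded in an Azure AD failure
    message (for example 'AADSTS50126'), or None if no code is present."""
    m = re.search(r"AADSTS(\d+)", message)
    return m.group(0) if m else None
```

The extracted code can then be looked up against the error-code reference or entered into the sign-in error lookup tool.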
-## Authentication details
+### Authentication details
-The **Authentication Details** tab located within the sign-ins report provides the following information, for each authentication attempt:
+The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt:
-- A list of authentication policies applied (such as Conditional Access, per-user MFA, Security Defaults)-- A list of session lifetime policies applied (such as Sign-in frequency, Remember MFA, Configurable Token lifetime)-- The sequence of authentication methods used to sign-in-- Whether or not the authentication attempt was successful-- Detail about why the authentication attempt succeeded or failed
+- A list of authentication policies applied, such as Conditional Access or Security Defaults.
+- A list of session lifetime policies applied, such as Sign-in frequency or Remember MFA.
+- The sequence of authentication methods used to sign-in.
+- If the authentication attempt was successful and the reason why.
-This information allows admins to troubleshoot each step in a userΓÇÖs sign-in, and track:
+This information allows you to troubleshoot each step in a user's sign-in. Use these details to track:
-- Volume of sign-ins protected by multi-factor authentication -- Reason for authentication prompt based on the session lifetime policies-- Usage and success rates for each authentication method -- Usage of passwordless authentication methods (such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business) -- How frequently authentication requirements are satisfied by token claims (where users are not interactively prompted to enter a password, enter an SMS OTP, and so on)
+- The volume of sign-ins protected by MFA.
+- The reason for the authentication prompt, based on the session lifetime policies.
+- Usage and success rates for each authentication method.
+- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business.
+- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP.
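For example, if you export sign-in events through the Graph API, the share of MFA-protected sign-ins could be estimated from the `authenticationRequirement` property of each record. This is a sketch under the assumption that the property is present on every exported record.

```python
def mfa_protected_share(sign_ins):
    """Fraction of sign-ins whose authentication requirement was MFA.

    Each record is assumed to be a dict shaped like a Graph signIn
    resource, with an 'authenticationRequirement' property."""
    if not sign_ins:
        return 0.0
    mfa = sum(
        1 for s in sign_ins
        if s.get("authenticationRequirement") == "multiFactorAuthentication"
    )
    return mfa / len(sign_ins)
```

A low share may indicate gaps in Conditional Access coverage worth investigating.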
-While viewing the Sign-ins report, select the **Authentication Details** tab:
+While viewing the sign-ins log, select a sign-in event, and then select the **Authentication Details** tab.
-![Screenshot of the Authentication Details tab](media/concept-sign-ins/auth-details-tab.png)
+![Screenshot of the Authentication Details tab](media/concept-sign-ins/authentication-details-tab.png)
->[!NOTE]
->**OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).
+When analyzing authentication details, take note of the following details:
->[!IMPORTANT]
->The **Authentication details** tab can initially show incomplete or inaccurate data, until log information is fully aggregated. Known examples include:
->- A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
->- The **Primary authentication** row is not initially logged.
+- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).
+- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include:
+ - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
+ - The **Primary authentication** row isn't initially logged.
+## Sign-in data used by other services
-## Usage of managed applications
+Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage.
-With an application-centric view of your sign-in data, you can answer questions such as:
+### Risky sign-in data in Azure AD Identity Protection
-* Who is using my applications?
-* What are the top three applications in your organization?
-* How is my newest application doing?
+Sign-in log data visualization that relates to risky sign-ins is available in the **Azure AD Identity Protection** overview, which uses the following data:
-The entry point to this data is the top three applications in your organization. The data is contained within the last 30 days report in the **Overview** section under **Enterprise applications**.
+- Risky users
+- Risky user sign-ins
+- Risky service principals
+- Risky service principal sign-ins
-![Screenshot shows where you can select Overview.](./media/concept-sign-ins/10.png "Sign-in activity")
+ For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md).
-The app-usage graphs weekly aggregations of sign-ins for your top three applications in a given time period. The default for the time period is 30 days.
+![Screenshot of risky users in Identity Protection.](media/concept-sign-ins/id-protection-overview.png)
-![Screenshot shows the App usage for a one month period.](./media/concept-sign-ins/graph-chart.png "Sign-in activity")
+### Azure AD application and authentication sign-in activity
-If you want to, you can set the focus on a specific application.
+To view application-specific sign-in data, go to **Azure AD** and select **Usage & insights** from the Monitoring section. These reports provide a closer look at sign-ins for Azure AD application activity and AD FS application activity. For more information, see [Azure AD Usage & insights](concept-usage-insights-report.md).
-![Reporting](./media/concept-sign-ins/single-app-usage-graph.png "Reporting")
+![Screenshot of the Azure AD application activity report.](media/concept-sign-ins/azure-ad-app-activity.png)
-When you click on a day in the app usage graph, you get a detailed list of the sign-in activities.
+Azure AD Usage & insights also provides the **Authentication methods activity** report, which breaks down authentication by the method used. Use this report to see how many of your users are set up with MFA or passwordless authentication.
-The **Sign-ins** option gives you a complete overview of all sign-in events to your applications.
+![Screenshot of the Authentication methods report.](media/concept-sign-ins/azure-ad-authentication-methods.png)
-## Microsoft 365 activity logs
+### Microsoft 365 activity logs
-You can view Microsoft 365 activity logs from the [Microsoft 365 admin center](/office365/admin/admin-overview/about-the-admin-center). Consider the point that, Microsoft 365 activity and Azure AD activity logs share a significant number of the directory resources. Only the Microsoft 365 admin center provides a full view of the Microsoft 365 activity logs.
+You can view Microsoft 365 activity logs from the [Microsoft 365 admin center](/office365/admin/admin-overview/about-the-admin-center). Microsoft 365 activity and Azure AD activity logs share a significant number of directory resources. Only the Microsoft 365 admin center provides a full view of the Microsoft 365 activity logs.
-You can also access the Microsoft 365 activity logs programmatically by using the [Office 365 Management APIs](/office/office-365-management-api/office-365-management-apis-overview).
+You can access the Microsoft 365 activity logs programmatically by using the [Office 365 Management APIs](/office/office-365-management-api/office-365-management-apis-overview).
## Next steps
active-directory Concept Usage Insights Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
Title: Usage and insights report | Microsoft Docs
description: Introduction to usage and insights report in the Azure Active Directory portal
Previously updated : 08/26/2022
Last updated : 11/03/2022
-# Usage and insights report in the Azure Active Directory portal
+# Usage and insights in Azure Active Directory
-With the usage and insights report, you can get an application-centric view of your sign-in data. You can find answers to the following questions:
+With the Azure Active Directory (Azure AD) **Usage and insights** reports, you can get an application-centric view of your sign-in data. Usage & insights also includes a report on authentication methods activity. You can find answers to the following questions:
* What are the top used applications in my organization?
* What applications have the most failed sign-ins?
* What are the top sign-in errors for each application?
-## Prerequisites
+This article provides an overview of three reports that look at sign-in data.
+
+## Access Usage & insights
-To access the data from the usage and insights report, you need:
+Accessing the data from Usage and insights requires:
* An Azure AD tenant
* An Azure AD premium (P1/P2) license to view the sign-in data
-* A user in the global administrator, security administrator, security reader, or report reader roles. In addition, any user (non-admins) can access their own sign-ins.
+* A user in the Global Administrator, Security Administrator, Security Reader, or Report Reader roles.
+
+To access Usage & insights:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role.
+1. Go to **Azure Active Directory** > **Usage & insights**.
+
+The **Usage & insights** report is also available from the **Enterprise applications** area of Azure AD. All users can access their own sign-ins at the [My Sign-Ins portal](https://mysignins.microsoft.com/security-info).
+
+## View the Usage & insights reports
+
+There are currently three reports available in Azure AD Usage & insights. All three reports use sign-in data to provide helpful information on application usage and authentication methods.
+
+### Azure AD application activity (preview)
+
+The **Azure AD application activity (preview)** report shows the list of applications with one or more sign-in attempts. The report allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate.
+
+Select the **View sign in activity** link for an application to view more details. The sign-in graph per application counts interactive user sign-ins. The details of any sign-in failures appear below the table.
+
+![Screenshot shows Usage and insights for Application activity where you can select a range and view sign-in activity for different apps.](./media/concept-usage-insights-report/usage-insights-overview.png)
-## Access the usage and insights report
+Select a day in the application usage graph to see a detailed list of the sign-in activities for the application. This detailed list is actually the sign-in log with the filter set to the selected application and date.
-1. Navigate to the [Azure portal](https://portal.azure.com).
-2. Select the right directory, then select **Azure Active Directory** and choose **Enterprise applications**.
-3. From the **Activity** section, select **Usage & insights** to open the report.
+![Screenshot of the sign-in activity details for a selected application.](./media/concept-usage-insights-report/application-activity-sign-in-detail.png)
-![Screenshot shows Usage & insights selected from the Activity section.](./media/concept-usage-insights-report/main-menu.png)
-
+### AD FS application activity
-## Use the report
+The **AD FS application activity** report in Usage & insights lists all Active Directory Federation Services (AD FS) applications in your organization that have had an active user sign-in in the last 30 days. These applications have not been migrated to Azure AD for authentication.
-The usage and insights report shows the list of applications with one or more sign-in attempts, and allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate. The sign-in graph per application only counts interactive user sign-ins.
+### Authentication methods activity
-Clicking **Load more** at the bottom of the list allows you to view additional applications on the page. You can select the date range to view all applications that have been used within the range.
+The **Authentication methods activity** report in Usage & insights displays visualizations of the different authentication methods used by your organization. The **Registration** tab displays statistics for users registered for each of your available authentication methods. Select the **Usage** tab at the top of the page to see actual usage for each authentication method.
-![Screenshot shows Usage & insights for Application activity where you can select a range and view sign-in activity for different apps.](./media/concept-usage-insights-report/usage-and-insights-report.png)
+You can also access several other reports and tools related to authentication.
-You can also set the focus on a specific application. Select **view sign-in activity** to see the sign-in activity over time for the application as well as the top errors.
+Are you planning on running a registration campaign to nudge users to sign up for MFA? Use the **Registration campaign** option from the side menu to set up a registration campaign. For more information, see [Nudge users to set up Microsoft Authenticator](../authentication/how-to-mfa-registration-campaign.md).
-When you select a day in the application usage graph, you get a detailed list of the sign-in activities for the application.
+Looking for the details of a user and their authentication methods? Look at the **User registration details** report from the side menu and search for a name or UPN. The default MFA method and other methods registered are displayed. You can also see if the user is capable of registering for one of the authentication methods.
+Looking for the status of an authentication registration or reset event of a user? Look at the **Registration and reset events** report from the side menu and then search for a name or UPN. You'll be able to see the method used to attempt to register or reset an authentication method.
## Next steps
-* [Sign-ins report](concept-sign-ins.md)
+- [Learn about the sign-ins report](concept-sign-ins.md)
+- [Learn about Azure AD authentication](../authentication/overview-authentication.md)
active-directory Overview Flagged Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md
Any user signing into Azure AD via web page can use flag sign-ins for review. Me
## Who can review flagged sign-ins?
-Reviewing flagged sign-in events requires permissions to read the Sign-in Report events in the Azure AD portal. For more information, see [who can access it?](concept-sign-ins.md#who-can-access-it)
+Reviewing flagged sign-in events requires permissions to read the Sign-in Report events in the Azure AD portal. For more information, see [who can access it?](concept-sign-ins.md#how-do-you-access-the-sign-in-logs)
To flag sign-in failures, you don't need extra permissions.
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
The SLA attainment is truncated at three places after the decimal. Numbers are n
| July | 99.999% | 99.999% |
| August | 99.999% | 99.999% |
| September | 99.999% | 99.998% |
-| October | 99.999% | |
+| October | 99.999% | 99.999% |
| November | 99.998% | |
| December | 99.978% | |
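The statement that attainment is truncated, not rounded, at three decimal places can be sketched as follows (the function name is illustrative):

```python
import math

def truncate_attainment(value, places=3):
    """Truncate (not round) an SLA attainment percentage at the given
    number of decimal places, so 99.9996% reports as 99.999%."""
    factor = 10 ** places
    return math.floor(value * factor) / factor
```

Truncation never rounds an attainment figure up, so a month that almost met a threshold is reported below it.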
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
# Network concepts for applications in Azure Kubernetes Service (AKS)

In a container-based, microservices approach to application development, application components work together to process their tasks. Kubernetes provides various resources enabling this cooperation:
-* You can connect to and expose applications internally or externally.
-* You can build highly available applications by load balancing your applications.
-* For your more complex applications, you can configure ingress traffic for SSL/TLS termination or routing of multiple components.
+
+* You can connect to and expose applications internally or externally.
+* You can build highly available applications by load balancing your applications.
+* For your more complex applications, you can configure ingress traffic for SSL/TLS termination or routing of multiple components.
* For security reasons, you can restrict the flow of network traffic into or between pods and nodes. This article introduces the core concepts that provide networking to your applications in AKS:
This article introduces the core concepts that provide networking to your applic
To allow access to your applications or between application components, Kubernetes provides an abstraction layer to virtual networking. Kubernetes nodes connect to a virtual network, providing inbound and outbound connectivity for pods. The *kube-proxy* component runs on each node to provide these network features. In Kubernetes:
-* *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name.
-* You can distribute traffic using a *load balancer*.
-* More complex routing of application traffic can also be achieved with *Ingress Controllers*.
+
+* *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name.
+* You can distribute traffic using a *load balancer*.
+* More complex routing of application traffic can also be achieved with *Ingress Controllers*.
+* You can *control outbound (egress) traffic* for cluster nodes.
* Security and filtering of the network traffic for pods is possible with Kubernetes *network policies*.

The Azure platform also simplifies virtual networking for AKS clusters. When you create a Kubernetes load balancer, you also create and configure the underlying Azure load balancer resource. As you open network ports to pods, the corresponding Azure network security group rules are configured. For HTTP application routing, Azure can also configure *external DNS* as new ingress routes are configured.
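For instance, a Kubernetes Service of type `LoadBalancer` is all the manifest needed for AKS to provision the underlying Azure load balancer; the name, labels, and ports below are illustrative assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: my-app           # pods carrying this label receive the traffic
  ports:
    - port: 80            # port exposed by the Azure load balancer
      targetPort: 8080    # port the pods listen on
```

Applying this manifest causes AKS to create the Azure load balancer resource and the matching network security group rules automatically.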
The LoadBalancer only works at layer 4. At layer 4, the Service is unaware of th
![Diagram showing Ingress traffic flow in an AKS cluster][aks-ingress]

### Create an ingress resource

In AKS, you can create an Ingress resource using NGINX, a similar tool, or the AKS HTTP application routing feature. When you enable HTTP application routing for an AKS cluster, the Azure platform creates the Ingress controller and an *External-DNS* controller. As new Ingress resources are created in Kubernetes, the required DNS A records are created in a cluster-specific DNS zone. For more information, see [Deploy HTTP application routing][aks-http-routing].
Configure your ingress controller to preserve the client source IP on requests t
If you're using client source IP preservation on your ingress controller, you can't use TLS pass-through. Client source IP preservation and TLS pass-through can be used with other services, such as the *LoadBalancer* type.
+## Control outbound (egress) traffic
+
+AKS clusters are deployed on a virtual network and have outbound dependencies on services outside of that virtual network. These outbound dependencies are almost entirely defined with fully qualified domain names (FQDNs). By default, AKS clusters have unrestricted outbound (egress) internet access. This allows the nodes and services you run to access external resources as needed. If desired, you can restrict outbound traffic.
+
+For more information, see [Control egress traffic for cluster nodes in AKS][limit-egress].
+
## Network security groups

A network security group filters traffic for VMs like the AKS nodes. As you create Services, such as a LoadBalancer, the Azure platform automatically configures any necessary network security group rules.
-You don't need to manually configure network security group rules to filter traffic for pods in an AKS cluster. Simply define any required ports and forwarding as part of your Kubernetes Service manifests. Let the Azure platform create or update the appropriate rules.
+You don't need to manually configure network security group rules to filter traffic for pods in an AKS cluster. Simply define any required ports and forwarding as part of your Kubernetes Service manifests. Let the Azure platform create or update the appropriate rules.
You can also use network policies to automatically apply traffic filter rules to pods.
For more information on core Kubernetes and AKS concepts, see the following arti
[use-network-policies]: use-network-policies.md
[operator-best-practices-network]: operator-best-practices-network.md
[support-policies]: support-policies.md
+[limit-egress]: limit-egress-traffic.md
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
This article provides a reference for API Management access restriction policies
- [Restrict caller IPs](#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges. - [Set usage quota by subscription](#SetUsageQuota) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. - [Set usage quota by key](#SetUsageQuotaByKey) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.-- [Validate JWT](#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP header or a specified query parameter.
+- [Validate Azure Active Directory token](#ValidateAAD) - Enforces existence and validity of an Azure Active Directory JWT extracted from either a specified HTTP header, query parameter, or token value.
+- [Validate JWT](#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP header, query parameter, or token value.
- [Validate client certificate](#validate-client-certificate) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims.

> [!TIP]
-> You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with AAD authentication by applying the `validate-jwt` policy on the API level or you can apply it on the API operation level and use `claims` for more granular control.
+> You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with AAD authentication by applying the `validate-azure-ad-token` policy on the API level or you can apply it on the API operation level and use `claims` for more granular control.
## <a name="CheckHTTPHeader"></a> Check HTTP header
If `identity-type=jwt` is configured, a JWT token is required to be validated. T
| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). | Yes |  |
| identity-type | Type of identity to be checked against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute. | No | managed |
| identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID | No |  |
-| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource is not found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500` | No | false |
+| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource isn't found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500` | No | false |
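Pulling the attributes above together, a minimal sketch of acquiring an authorization context with the service's managed identity. The `provider-id` and `authorization-id` attributes and their values are assumptions not shown in this excerpt; substitute the identifiers from your own authorization configuration.

```xml
<!-- Sketch: store the authorization context in a variable named "auth-context".
     The provider-id and authorization-id values are hypothetical. -->
<get-authorization-context
    provider-id="github-01"
    authorization-id="auth-01"
    context-variable-name="auth-context"
    identity-type="managed"
    ignore-error="false" />
```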
### Authorization object
In the following example, the per subscription rate limit is 20 calls per 90 sec
| Name | Description | Required |
| - | -- | -- |
| rate-limit | Root element. | Yes |
-| api | Add one or more of these elements to impose a call rate limit on APIs within the product. Product and API call rate limits are applied independently. API can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
-| operation | Add one or more of these elements to impose a call rate limit on operations within an API. Product, API, and operation call rate limits are applied independently. Operation can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
+| api | Add one or more of these elements to impose a call rate limit on APIs within the product. Product and API call rate limits are applied independently. API can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used, and `name` will be ignored. | No |
+| operation | Add one or more of these elements to impose a call rate limit on operations within an API. Product, API, and operation call rate limits are applied independently. Operation can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used, and `name` will be ignored. | No |
### Attributes
In the following example, the per subscription rate limit is 20 calls per 90 sec
| -- | -- | -- | - |
| name | The name of the API for which to apply the rate limit. | Yes | N/A |
| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests shouldn't exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
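Combining the elements and attributes above, a sketch of a product-level rate limit with independent, tighter limits on an API and one of its operations. The API and operation names are hypothetical, not from this article's sample.

```xml
<!-- Sketch: 20 calls per 90 seconds at product scope, with independent
     limits on a hypothetical API and operation -->
<rate-limit calls="20" renewal-period="90">
    <api name="echo-api" calls="10">
        <operation name="retrieve-resource" calls="5" />
    </api>
</rate-limit>
```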
In the following example, the rate limit of 10 calls per 60 seconds is keyed by
| counter-key | The key to use for the rate limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
| increment-condition | The boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
| increment-count | The number by which the counter is increased per request. | No | 1 |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests shouldn't exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
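Putting the attributes above together, a sketch of the per-key rate limit described earlier, keyed by caller IP address and counting only successful responses. This is an illustrative sketch, not the article's own sample.

```xml
<!-- Sketch: 10 calls per 60 seconds per caller IP;
     only 200 responses increment the counter -->
<rate-limit-by-key calls="10"
    renewal-period="60"
    increment-condition="@(context.Response.StatusCode == 200)"
    counter-key="@(context.Request.IpAddress)" />
```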
This policy can be used in the following policy [sections](./api-management-howt
> [!IMPORTANT]
> This feature is unavailable in the **Consumption** tier of API Management.
-The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. Optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it is incremented only once per request. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
+The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. Optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it's incremented only once per request. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
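As a sketch of the description above, a per-IP quota that counts only successful requests toward the limits. The specific values are illustrative assumptions, not the article's own sample.

```xml
<!-- Sketch: 10,000 calls and 40,000 KB of bandwidth per hour, keyed by
     caller IP; only 200 responses increment the counters -->
<quota-by-key calls="10000"
    bandwidth="40000"
    renewal-period="3600"
    increment-condition="@(context.Response.StatusCode == 200)"
    counter-key="@(context.Request.IpAddress)" />
```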
For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
This policy can be used in the following policy [sections](./api-management-howt
- **Policy sections:** inbound
- **Policy scopes:** all scopes
+## <a name="ValidateAAD"></a> Validate Azure Active Directory token
+
+The `validate-azure-ad-token` policy enforces the existence and validity of a JSON web token (JWT) that was provided by the Azure Active Directory service. The JWT can be extracted from a specified HTTP header, query parameter, or value provided using a policy expression or context variable.
+
+### Policy statement
+
+```xml
+<validate-azure-ad-token
+ tenant-id="tenant ID or URL (for example, 'contoso.onmicrosoft.com') of the Azure Active Directory service"
+ header-name="name of HTTP header containing the token (alternatively, use query-parameter-name or token-value attribute to specify token)"
+ query-parameter-name="name of query parameter used to pass the token (alternatively, use header-name or token-value attribute to specify token)"
+ token-value="expression returning the token as a string (alternatively, use header-name or query-parameter-name attribute to specify token)"
+ failed-validation-httpcode="HTTP status code to return on failure"
+ failed-validation-error-message="error message to return on failure"
+ output-token-variable-name="name of a variable to receive a JWT object representing successfully validated token">
+ <client-application-ids>
+ <application-id>Client application ID from Azure Active Directory</application-id>
+ <!-- If there are multiple client application IDs, then add additional application-id elements -->
+ </client-application-ids>
+ <backend-application-ids>
+ <application-id>Backend application ID from Azure Active Directory</application-id>
+ <!-- If there are multiple backend application IDs, then add additional application-id elements -->
+ </backend-application-ids>
+ <audiences>
+ <audience>audience string</audience>
+ <!-- if there are multiple possible audiences, then add additional audience elements -->
+ </audiences>
+ <required-claims>
+ <claim name="name of the claim as it appears in the token" match="all|any" separator="separator character in a multi-valued claim">
+ <value>claim value as it is expected to appear in the token</value>
+ <!-- if there is more than one allowed value, then add additional value elements -->
+ </claim>
+ <!-- if there are multiple required claims, then add additional claim elements -->
+ </required-claims>
+</validate-azure-ad-token>
+```
+
+### Examples
+
+#### Simple token validation
+
+The following policy is the minimal form of the `validate-azure-ad-token` policy. It expects the JWT to be provided in the `Authorization` header using the `Bearer` scheme. In this example, the Azure AD tenant ID and client application ID are provided using named values.
+
+```xml
+<validate-azure-ad-token tenant-id="{{aad-tenant-id}}">
+ <client-application-ids>
+ <application-id>{{aad-client-application-id}}</application-id>
+ </client-application-ids>
+</validate-azure-ad-token>
+```
+
+#### Validate that audience and claim are correct
+
+The following policy checks that the audience is the hostname of the API Management instance and that the `ctry` claim is `US`. The hostname is provided using a policy expression, and the Azure AD tenant ID and client application ID are provided using named values. The decoded JWT is provided in the `jwt` variable after validation.
+
+For more details on optional claims, read [Provide optional claims to your app](/azure/active-directory/develop/active-directory-optional-claims).
+
+```xml
+<validate-azure-ad-token tenant-id="{{aad-tenant-id}}" output-token-variable-name="jwt">
+ <client-application-ids>
+ <application-id>{{aad-client-application-id}}</application-id>
+ </client-application-ids>
+ <audiences>
+ <audience>@(context.Request.OriginalUrl.Host)</audience>
+ </audiences>
+ <required-claims>
+ <claim name="ctry" match="any">
+ <value>US</value>
+ </claim>
+ </required-claims>
+</validate-azure-ad-token>
+```
+
+### Elements
+
+| Element | Description | Required |
+| - | -- | -- |
+| validate-azure-ad-token | Root element. | Yes |
+| audiences | Contains a list of acceptable audience claims that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. | No |
+| backend-application-ids | Contains a list of acceptable backend application IDs. This is only required in advanced cases for the configuration of options and can generally be removed. | No |
+| client-application-ids | Contains a list of acceptable client application IDs. If multiple application-id elements are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one application-id must be specified. | Yes |
+| required-claims | Contains a list of claims expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. | No |
+
+### Attributes
+
+| Name | Description | Required | Default |
+| - | | -- | |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
+| failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. | No | 401 |
+| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
+| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
+| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
+| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** inbound
+- **Policy scopes:** all scopes
+
+### Limitations
+
+This policy can only be used with an Azure Active Directory tenant in the public Azure cloud. It doesn't support tenants configured in regional clouds or Azure clouds with restricted access.
+
## <a name="ValidateJWT"></a> Validate JWT

The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value.
The `validate-jwt` policy enforces existence and validity of a JSON web token (J
#### Azure Active Directory token validation
+> [!NOTE]
+> Use the [`validate-azure-ad-token`](#ValidateAAD) policy to validate tokens against Azure Active Directory.
+ ```xml <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid."> <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
This example shows how to use the [Validate JWT](api-management-access-restricti
| decryption-keys | A list of Base64-encoded keys used to decrypt the tokens. If multiple security keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds. Key elements have an optional `id` attribute used to match against `kid` claim.<br/><br/>Alternatively supply a decryption key using:<br/><br/> - `certificate-id` in format `<key certificate-id="mycertificate" />` to specify the identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management | No |
| issuers | A list of acceptable principals that issued the token. If multiple issuer values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. | No |
| openid-config | Add one or more of these elements to specify a compliant OpenID configuration endpoint from which signing keys and issuer can be obtained.<br/><br/>Configuration including the JSON Web Key Set (JWKS) is pulled from the endpoint every 1 hour and cached. If the token being validated references a validation key (using `kid` claim) that is missing in cached configuration, or if retrieval fails, API Management pulls from the endpoint at most once per 5 min. These intervals are subject to change without notice. | No |
-| required-claims | Contains a list of claims expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all` every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any` at least one claim must be present in the token for validation to succeed. | No |
+| required-claims | Contains a list of claims expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. | No |
### Attributes

| Name | Description | Required | Default |
| - | -- | -- | -- |
| clock-skew | Timespan. Use to specify maximum expected time difference between the system clocks of the token issuer and the API Management instance. | No | 0 seconds |
-| failed-validation-error-message | Error message to return in the HTTP response body if the JWT does not pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
| failed-validation-httpcode | HTTP Status code to return if the JWT doesn't pass validation. | No | 401 |
| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
This example shows how to use the [Validate JWT](api-management-access-restricti
| id | The `id` attribute on the `key` element allows you to specify the string that will be matched against `kid` claim in the token (if present) to find out the appropriate key to use for signature validation. | No | N/A |
| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
| require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. | No | true |
-| require-scheme | The name of the token scheme, e.g. "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A |
+| require-scheme | The name of the token scheme, for example, "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A |
| require-signed-tokens | Boolean. Specifies whether a token is required to be signed. | No | true |
-| separator | String. Specifies a separator (e.g. ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
+| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
| url | Open ID configuration endpoint URL from where OpenID configuration metadata can be obtained. The response should be according to specs as defined at URL: `https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. <br/><br/>For Azure Active Directory use the OpenID Connect [metadata endpoint](../active-directory/develop/v2-protocols-oidc.md#find-your-apps-openid-configuration-document-uri) configured in your app registration such as:<br/>- (v2) `https://login.microsoftonline.com/{tenant-name}/v2.0/.well-known/openid-configuration`<br/> - (v2 multitenant) ` https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration`<br/>- (v1) `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` <br/><br/> substituting your directory tenant name or ID, for example `contoso.onmicrosoft.com`, for `{tenant-name}`. | Yes | N/A | | output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
The following example validates a client certificate to match the policy's defau
| Name | Description | Required | Default |
| - | -- | -- | -- |
| validate-revocation | Boolean. Specifies whether the certificate is validated against the online revocation list. | no | True |
-| validate-trustΓÇ»| Boolean. Specifies if validation should fail in case chain cannot be successfully built up to trusted CA. | no | True |
+| validate-trustΓÇ»| Boolean. Specifies if validation should fail in case the chain can't be successfully built up to a trusted CA. | no | True |
| validate-not-before | Boolean. Validates the value against the current time. | no | True |
| validate-not-after | Boolean. Validates the value against the current time. | no | True |
| ignore-error | Boolean. Specifies if the policy should proceed to the next handler or jump to on-error upon failed validation. | no | False |
api-management Api Management Howto Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-notifications.md
To modify email settings:
* **Administrator email** - the email address to receive all system notifications and other configured notifications * **Organization name** - the name of your organization for use in the developer portal and notifications * **Originating email address** - The value of the `From` header for notifications from the API Management instance. API Management sends notifications on behalf of this originating address.-
- :::image type="content" source="media/api-management-howto-configure-notifications/configure-email-settings.png" alt-text="Screenshot of API Management email settings in the portal":::
+ > [!NOTE]
+ > When you change the Originating email address, some recipients may not receive the auto-generated emails from API Management or emails may get sent to the Junk/Spam folder. This happens because the email no longer passes SPF Authentication after you change the Originating email address domain. To ensure successful SPF Authentication and delivery of email, create the following TXT record in the DNS database of the domain specified in the email address. For instance, if the email address is `noreply@contoso.com`, you will need to contact the administrator of contoso.com to add the following TXT record: **"v=spf1 include:spf.protection.outlook.com include:_spf-ssg-a.microsoft.com -all"**
+
+ :::image type="content" source="media/api-management-howto-configure-notifications/configure-email-settings.png" alt-text="Screenshot of API Management email settings in the portal":::
1. Select **Save**.

## Next steps
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
- [Restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges. - [Set usage quota by subscription](api-management-access-restriction-policies.md#SetUsageQuota) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. - [Set usage quota by key](api-management-access-restriction-policies.md#SetUsageQuotaByKey) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.-- [Validate JWT](api-management-access-restriction-policies.md#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter.
+- [Validate Azure Active Directory Token](api-management-access-restriction-policies.md#ValidateAAD) - Enforces existence and validity of an Azure Active Directory JWT extracted from either a specified HTTP Header, query parameter, or token value.
+- [Validate JWT](api-management-access-restriction-policies.md#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header, query parameter, or token value.
- [Validate client certificate](api-management-access-restriction-policies.md#validate-client-certificate) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims. ## Advanced policies
api-management Api Management Transformation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-transformation-policies.md
This article provides a reference for API Management policies used to transform
consider-accept-header="true | false" parse-date="true | false" namespace-separator="separator character"
+ namespace-prefix="namespace prefix"
attribute-block-name="name" /> ```
Consider the following policy:
</inbound> <outbound> <base />
- <json-to-xml apply="always" consider-accept-header="false" parse-date="false" namespace-separator=":" attribute-block-name="#attrs" />
+ <json-to-xml apply="always" consider-accept-header="false" parse-date="false" namespace-separator=":" namespace-prefix="xmlns" attribute-block-name="#attrs" />
</outbound> </policies> ```
The XML response to the client will be:
|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - true - apply conversion if XML is requested in request Accept header.<br />- false - always apply conversion.|No|true|
|parse-date|When set to `false`, date values are simply copied during transformation|No|true|
|namespace-separator|The character to use as a namespace separator|No|Underscore|
+|namespace-prefix|The string that identifies a property as a namespace attribute, usually "xmlns". Properties with names beginning with the specified prefix will be added to the current element as namespace declarations.|No|N/A|
|attribute-block-name|When set, properties inside the named object will be added to the element as attributes|No|Not set|

### Usage
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
When an API Management service instance is hosted in a VNet, the ports in the fo
### [stv2](#tab/stv2)
+>[!IMPORTANT]
+> When using `stv2`, you must assign a Network Security Group to your VNet for the Azure Load Balancer to work. Learn more in the [Azure Load Balancer documentation](/security/benchmark/azure/baselines/azure-load-balancer-security-baseline#network-security-group-support).
+
| Source / Destination Port(s) | Direction | Transport protocol | Service tags <br> Source / Destination | Purpose | VNet type |
|--|--|--|--|--|--|
| * / [80], 443 | Inbound | TCP | Internet / VirtualNetwork | **Client communication to API Management** | External only |
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
The following features are supported for Linux containers:
- Azure CLI, Azure PowerShell, and Azure SDK support is in preview.
- Mapping `/` or `/home` to custom-mounted storage is not supported.
- Don't map the custom storage mount to `/tmp` or its subdirectories as this may cause timeout during app startup.
+- Azure Storage is not supported with [Docker Compose scenarios](configure-custom-container.md?pivots=container-linux#docker-compose-options).
- Storage mounts cannot be used together with the clone settings option during [deployment slot](deploy-staging-slots.md) creation.
- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
- Only Azure Files [SMB](../storage/files/files-smb-protocol.md) is supported. Azure Files [NFS](../storage/files/files-nfs-protocol.md) is not currently supported for Linux App Services.
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
The following lists show supported and unsupported Docker Compose configuration
- ports
- restart
- services
-- volumes
+- volumes ([mapping to Azure Storage is unsupported](configure-connect-to-azure-storage.md?tabs=portal&pivots=container-linux#limitations))
#### Unsupported options
app-service Overview Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md
After you choose to investigate the issue further by clicking on a topic, you ca
## Resiliency Score
-If you don't know what's wrong with your app or don't know where to start troubleshooting your issues, the Get Resiliency Score report is a good place to start. Once a Troubleshooting category has been selected the Get Resilience Score report link is available and clicking it produces a PDF document with actionable insights.
+To review tailored best practice recommendations, check out the Resiliency Score report, available as a downloadable PDF. To get it, select the "Get Resilience Score report" button on the command bar of any of the troubleshooting categories.
![App Service Diagnose and solve problems Resiliency Score report, with a gauge indicating App's resilience score and what App Developer can do to improve resilience of the App.](./media/app-service-diagnostics/app-service-diagnostics-resiliency-report-1.png)
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md
description: This article is an overview of mutual authentication on Application
Previously updated : 03/30/2021 Last updated : 11/03/2022
To configure mutual authentication, a trusted client CA certificate is required
For example, if your client certificate contains a root CA certificate, multiple intermediate CA certificates, and a leaf certificate, make sure that the root CA certificate and all the intermediate CA certificates are uploaded onto Application Gateway in one file. For more information on how to extract a trusted client CA certificate, see [how to extract trusted client CA certificates](./mutual-authentication-certificate-management.md).
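Since the gateway expects the whole chain in a single PEM or CER file, one way to assemble and sanity-check that file is with openssl. This is an illustrative sketch only; the file names are hypothetical, and the two self-signed certificates generated below are stand-ins for your real intermediate and root CA certificates:

```shell
# Stand-in certificates for illustration; with real CA certificates you'd skip this step.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Contoso Root CA" -keyout root-ca.key -out root-ca.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Contoso Intermediate CA" -keyout intermediate-ca.key -out intermediate-ca.pem

# Concatenate every CA certificate in the chain into the one file you upload.
cat intermediate-ca.pem root-ca.pem > trusted-client-ca-chain.pem

# Sanity check: the combined file should contain every certificate in the chain.
grep -c "BEGIN CERTIFICATE" trusted-client-ca-chain.pem
```

If the count is lower than the number of CA certificates in your chain, the upload would be incomplete and mutual authentication may fail for otherwise valid clients.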
-If you're uploading a certificate chain with root CA and intermediate CA certificates, the certificate chain must be uploaded as a PEM or CER file to the gateway.
+If you're uploading a certificate chain with root CA and intermediate CA certificates, the certificate chain must be uploaded as a PEM or CER file to the gateway.
+
+> [!IMPORTANT]
+> Make sure you upload the entire trusted client CA certificate chain to the Application Gateway when using mutual authentication.
+
+Each SSL profile can support up to five trusted client CA certificate chains.
> [!NOTE]
> Mutual authentication is only available on Standard_v2 and WAF_v2 SKUs.

### Certificates supported for mutual authentication
-Application Gateway supports the following types of certificates:
-
-- CA (Certificate Authority) certificate: A CA certificate is a digital certificate issued by a certificate authority (CA).
-- Self-signed CA certificates: Client browsers do not trust these certificates and will warn the user that the virtual service's certificate is not part of a trust chain. Self-signed CA certificates are good for testing or in environments where administrators control the clients and can safely bypass the browser's security alerts.
+Application Gateway supports certificates issued from both public and privately established certificate authorities.
-> [!IMPORTANT]
-> Production workloads should never use self-signed CA certificates.
+- CA certificates issued from well-known certificate authorities: Intermediate and root certificates are commonly found in trusted certificate stores and enable trusted connections with little to no additional configuration on the device.
+- CA certificates issued from organization-established certificate authorities: These certificates are typically issued privately via your organization and not trusted by other entities. Intermediate and root certificates must be imported into trusted certificate stores for clients to establish chain trust.
-For more information on how to set up mutual authentication, see [configure mutual authentication with Application Gateway](./mutual-authentication-portal.md).
-
-> [!IMPORTANT]
-> Make sure you upload the entire trusted client CA certificate chain to the Application Gateway when using mutual authentication.
-
-Each SSL profile can support up to five trusted client CA certificate chains.
+> [!NOTE]
+> When issuing client certificates from well-established certificate authorities, consider working with the certificate authority to see if an intermediate certificate can be issued for your organization to prevent inadvertent cross-organizational client certificate authentication.
## Additional client authentication validation ### Verify client certificate DN
-You have the option to verify the client certificate's immediate issuer and only allow the Application Gateway to trust that issuer. This options is off by default but you can enable this through Portal, PowerShell, or Azure CLI.
+You have the option to verify the client certificate's immediate issuer and only allow the Application Gateway to trust that issuer. This option is off by default but you can enable this through Portal, PowerShell, or Azure CLI.
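To see which issuer DN a given client certificate would present, one option is to inspect it with openssl. This is an illustrative sketch, not from the article; the self-signed certificate generated below is a hypothetical stand-in for a real client certificate:

```shell
# Throwaway self-signed certificate purely for illustration;
# with a real client certificate you'd skip this generation step.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Contoso Issuing CA/O=Contoso" -keyout client.key -out client.pem

# Print the issuer distinguished name -- the value that is compared
# when verification of the client certificate issuer DN is enabled.
openssl x509 -in client.pem -noout -issuer
```

For a self-signed certificate the issuer equals the subject; for a CA-issued client certificate, the printed issuer DN is that of the immediate issuing CA.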
If you choose to enable the Application Gateway to verify the client certificate's immediate issuer, here's how to determine what client certificate issuer DN will be extracted from the certificates uploaded. * **Scenario 1:** Certificate chain includes: root certificate - intermediate certificate - leaf certificate
applied-ai-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
Previously updated : 07/06/2021 Last updated : 11/07/2022 zone_pivot_groups: programming-languages-metrics-monitor
applied-ai-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/quickstarts/web-portal.md
Previously updated : 09/30/2020 Last updated : 11/07/2022
# Quickstart: Monitor your first metric by using the web portal
-When you provision an instance of Azure Metrics Advisor, you can use the APIs and web-based workspace to work with the service. The web-based workspace can be used as a straightforward way to quickly get started with the service. It also provides a visual way to configure settings, customize your model, and perform root cause analysis.
+When you provision an instance of Azure Metrics Advisor, you can use the APIs and web-based workspace to interact with the service. The web-based workspace can be used as a straightforward way to quickly get started with the service. It also provides a visual way to configure settings, customize your model, and perform root cause analysis.
## Prerequisites
When detection is applied, select one of the metrics listed in the data feed to
After tuning the detection configuration, you should find that detected anomalies reflect actual anomalies in your data. Metrics Advisor performs analysis on multidimensional metrics to locate the root cause to a specific dimension. The service also performs cross-metrics analysis by using the metrics graph feature.
-To view the diagnostic insights, select the red dots on time series visualizations. These red dots represent detected anomalies. A window will appear with a link to the incident analysis page.
+To view diagnostic insights, select the red dots on time series visualizations. These red dots represent detected anomalies. A window will appear with a link to the incident analysis page.
:::image type="content" source="../media/incident-link.png" alt-text="Screenshot that shows an incident link." lightbox="../media/incident-link.png":::
attestation Claim Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-sets.md
The following claims are additionally supported by the SevSnpVm attestation type
- **x-ms-sevsnpvm-authorkeydigest**: SHA384 hash of the author signing key
- **x-ms-sevsnpvm-bootloader-svn**: AMD boot loader security version number (SVN)
-- **x-ms-sevsnpvm-familyId**: HCL family identification string
+- **x-ms-sevsnpvm-familyId**: Host Compatibility Layer (HCL) family identification string
- **x-ms-sevsnpvm-guestsvn**: HCL security version number (SVN)
- **x-ms-sevsnpvm-hostdata**: Arbitrary data defined by the host at VM launch time
- **x-ms-sevsnpvm-idkeydigest**: SHA384 hash of the identification signing key
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
Title: 'Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster' description: This tutorial demonstrates applying configurations on an Azure Arc-enabled Kubernetes cluster. For a conceptual take on this process, see the Configurations and GitOps - Azure Arc-enabled Kubernetes article. -- Last updated 05/24/2022
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md
Title: "Create a C# function from the command line - Azure Functions" description: "Learn how to create a C# function from the command line, then publish the local project to serverless hosting in Azure Functions." Previously updated : 09/14/2021 Last updated : 11/08/2022 ms.devlang: csharp
This article supports creating both types of compiled C# functions:
[!INCLUDE [functions-dotnet-execution-model](../../includes/functions-dotnet-execution-model.md)]
-This article creates an HTTP triggered function that runs on .NET 6.0. There is also a [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
+This article creates an HTTP triggered function that runs either in-process or in an isolated worker process, using .NET 6 as an example. There's also a [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Completing this quickstart incurs a small cost of a few USD cents or less in you
Before you begin, you must have the following:
-+ [.NET 6.0 SDK](https://dotnet.microsoft.com/download)
++ [.NET 6.0 SDK](https://dotnet.microsoft.com/download).
+ [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x.
You also need an Azure account with an active subscription. [Create an account f
### Prerequisite check
-Verify your prerequisites, which depend on whether you are using Azure CLI or Azure PowerShell for creating Azure resources:
+Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources:
# [Azure CLI](#tab/azure-cli)
In Azure Functions, a function project is a container for one or more individual
func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous" ```
- `func new` creates a HttpExample.cs code file.
+ `func new` creates an HttpExample.cs code file.
### (Optional) Examine the file contents
The return object is an [ActionResult](/dotnet/api/microsoft.aspnetcore.mvc.acti
# [Isolated process](#tab/isolated-process)
-*HttpExample.cs* contains a `Run` method that receives request data in the `req` variable is an [HttpRequestData](/dotnet/api/microsoft.azure.functions.worker.http.httprequestdata) object that's decorated with the **HttpTriggerAttribute**, which defines the trigger behavior. Because of the isolated process model, `HttpRequestData` is a representation of the actual `HttpRequest`, and not the request object itself.
+*HttpExample.cs* contains a `Run` method that receives request data in the `req` variable, an [HttpRequestData](/dotnet/api/microsoft.azure.functions.worker.http.httprequestdata) object that's decorated with the **HttpTriggerAttribute**, which defines the trigger behavior. Because of the isolated worker process model, `HttpRequestData` is a representation of the actual `HttpRequest`, and not the request object itself.
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-isolated/HttpExample.cs":::
To learn more, see [Azure Functions HTTP triggers and bindings](./functions-bind
In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
[!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
Title: "Create a C# function using Visual Studio Code - Azure Functions" description: "Learn how to create a C# function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. " Previously updated : 10/11/2022 Last updated : 11/08/2022 ms.devlang: csharp adobe-target: true
adobe-target-content: ./create-first-function-vs-code-csharp-ieux
# Quickstart: Create a C# function in Azure using Visual Studio Code
-In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. This article creates an HTTP triggered function that runs on .NET 6.0. There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
+This article creates an HTTP triggered function that runs on .NET 6, either in-process or in an isolated worker process. The isolated worker process model also lets you run on .NET 7 (in preview). For information about all .NET versions supported by the isolated worker process, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions).
-By default, this article shows you how to create C# functions that run [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on [other supported versions](functions-versions.md) for Azure functions [in an isolated process](dotnet-isolated-process-guide.md).
+There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
+
+By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions, such as .NET 6. When creating your project, you can choose to instead create a function that runs on .NET 6 in an [isolated worker process](dotnet-isolated-process-guide.md). [Isolated worker process](dotnet-isolated-process-guide.md) supports both LTS and Standard Term Support (STS) versions of .NET. For more information, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions) in the .NET Functions isolated worker process guide.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
In this section, you use Visual Studio Code to create a local Azure Functions pr
1. Provide the following information at the prompts:
- # [.NET 6](#tab/in-process)
+ # [In-process](#tab/in-process)
|Prompt|Selection| |--|--|
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).| |**Select how you would like to open your project**|Select `Add to workspace`.|
- # [.NET 6 Isolated](#tab/isolated-process)
+ # [Isolated process](#tab/isolated-process)
|Prompt|Selection| |--|--|
In this section, you use Visual Studio Code to create a local Azure Functions pr
> [!NOTE]
> If you don't see .NET 6 as a runtime option, check the following:
>
- > + Make sure you have installed the .NET 6.0 SDK.
+ > + Make sure you've installed the .NET 6.0 SDK, or another available .NET SDK version, from the [.NET download page](https://dotnet.microsoft.com/download).
> + Press F1 and type `Preferences: Open user settings`, then search for `Azure Functions: Project Runtime` and change the default runtime version to `~4`.

1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=csharp#generated-project-files).
After checking that the function runs correctly on your local computer, it's tim
You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Queue Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
-# [.NET 6](#tab/in-process)
+# [In-process](#tab/in-process)
> [!div class="nextstepaction"]
> [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=in-process)
> [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp&tabs=in-process)
-# [.NET 6 Isolated](#tab/isolated-process)
+# [Isolated process](#tab/isolated-process)
> [!div class="nextstepaction"]
> [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process)
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
+
+ Title: Differences between in-process and isolated worker process .NET Azure Functions
+description: Compares features and functionality differences between running .NET Functions in-process or as an isolated worker process.
++ Last updated : 11/07/2022
+recommendations: false
+#Customer intent: As a developer, I need to understand the differences between running in-process and running in an isolated worker process so that I can choose the best process model for my functions.
++
+# Differences between in-process and isolated worker process .NET Azure Functions
+
+Functions supports two process models for .NET class library functions:
++
+This article describes the current state of the functional and behavioral differences between the two models.
+
+## Execution mode comparison table
+
+Use the following table to compare feature and functional differences between the two models:
+
+| Feature/behavior | In-process<sup>3</sup> | Isolated worker process |
+| - | - | - |
+| [Supported .NET versions](./dotnet-isolated-process-guide.md#supported-versions) | Long Term Support (LTS) versions | All supported versions + .NET Framework |
+| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
+| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
+| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported (public preview)](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions) |
+| Model types exposed by bindings | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient]<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations |
+| HTTP trigger model types| [HttpRequest]/[ObjectResult] | [HttpRequestData]/[HttpResponseData] |
+| Output binding interaction | Return values (single output only)<br/>`out` parameters<br/>`IAsyncCollector` | Return values (expanded model with single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)) |
+| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported |
+| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](dotnet-isolated-process-guide.md#dependency-injection) |
+| Middleware | Not supported | [Supported](dotnet-isolated-process-guide.md#middleware) |
+| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via dependency injection | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext] or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)|
+| Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported (public preview)](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights) |
+| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) |
+| Cold start times<sup>2</sup> | (Baseline) | Additionally includes process launch |
+| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | [Supported](dotnet-isolated-process-guide.md#readytorun) |
+
+<sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models.
+
+<sup>2</sup> Cold start times may be additionally impacted on Windows when using some preview versions of .NET due to just-in-time loading of preview frameworks. This applies to both the in-process and out-of-process models but may be noticeable when comparing across different versions. This delay for preview versions isn't present on Linux plans.
+
+<sup>3</sup> C# Script functions also run in-process and use the same libraries as in-process class library functions. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md).
+
+## Next steps
+
+To learn more, see:
+
++ [Develop .NET class library functions](functions-dotnet-class-library.md)
++ [Develop .NET isolated worker process functions](dotnet-isolated-process-guide.md)
+
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
Title: Guide for running C# Azure Functions in an isolated process
-description: Learn how to use a .NET isolated process to run your C# functions in Azure, which supports .NET 5.0 and later versions.
-
+ Title: Guide for running C# Azure Functions in an isolated worker process
+description: Learn how to use a .NET isolated worker process to run your C# functions in Azure, which supports non-LTS versions of .NET and .NET Framework apps.
Previously updated : 09/29/2022 Last updated : 11/01/2022 recommendations: false
-#Customer intent: As a developer, I need to know how to create functions that run in an isolated process so that I can run my function code on current (not LTS) releases of .NET.
+#Customer intent: As a developer, I need to know how to create functions that run in an isolated worker process so that I can run my function code on current (not LTS) releases of .NET.
-# Guide for running C# Azure Functions in an isolated process
+# Guide for running C# Azure Functions in an isolated worker process
+
+This article is an introduction to working with .NET Functions isolated worker process, which runs your functions in an isolated worker process in Azure. This allows you to run your .NET class library functions on a version of .NET that is different from the version used by the Functions host process. For information about the specific .NET versions supported, see [Supported versions](#supported-versions).
-This article is an introduction to using C# to develop .NET isolated process functions, which runs Azure Functions in an isolated process. This allows you to decouple your function code from the Azure Functions runtime, check out [supported version](#supported-versions) for Azure functions in an isolated process. [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 7.0.
+Use the following links to get started right away building .NET isolated worker process functions.
| Getting started | Concepts| Samples |
|--|--|--|
| <ul><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md?tabs=isolated-process)</li><li>[Using command line tools](create-first-function-cli-csharp.md?tabs=isolated-process)</li><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md?tabs=isolated-process)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Monitoring](functions-monitoring.md)</li></ul> | <ul><li>[Reference samples](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples)</li></ul> |
-## Why .NET isolated process?
+If you still need to run your functions in the same process as the host, see [In-process C# class library functions](functions-dotnet-class-library.md).
+
+For a comprehensive comparison between isolated worker process and in-process .NET Functions, see [Differences between in-process and isolated worker process .NET Azure Functions](dotnet-isolated-in-process-differences.md).
-Previously Azure Functions has only supported a tightly integrated mode for .NET functions, which run [as a class library](functions-dotnet-class-library.md) in the same process as the host. This mode provides deep integration between the host process and the functions. For example, .NET class library functions can share binding APIs and types. However, this integration also requires a tighter coupling between the host process and the .NET function. For example, .NET functions running in-process are required to run on the same version of .NET as the Functions runtime. To enable you to run outside these constraints, you can now choose to run in an isolated process. This process isolation also lets you develop functions that use current .NET releases (such as .NET 7.0), not natively supported by the Functions runtime. Both isolated process and in-process C# class library functions run on .NET 6.0. To learn more, see [Supported versions](#supported-versions).
+## Why .NET Functions isolated worker process?
-Because these functions run in a separate process, there are some [feature and functionality differences](#differences-with-net-class-library-functions) between .NET isolated function apps and .NET class library function apps.
+When it was introduced, Azure Functions only supported a tightly integrated mode for .NET functions. In this _in-process_ mode, your [.NET class library functions](functions-dotnet-class-library.md) run in the same process as the host. This mode provides deep integration between the host process and the functions. For example, when running in the same process, .NET class library functions can share binding APIs and types. However, this integration also requires a tight coupling between the host process and the .NET function. For example, .NET functions running in-process are required to run on the same version of .NET as the Functions runtime. This means that your in-process functions can only run on versions of .NET with Long Term Support (LTS). To enable you to run on non-LTS versions of .NET, you can instead choose to run in an isolated worker process. This process isolation lets you develop functions that use current .NET releases not natively supported by the Functions runtime, including .NET Framework. Both isolated worker process and in-process C# class library functions run on LTS versions. To learn more, see [Supported versions](#supported-versions).
-### Benefits of running out-of-process
+Because these functions run in a separate process, there are some [feature and functionality differences](./dotnet-isolated-in-process-differences.md) between .NET isolated function apps and .NET class library function apps.
-When your .NET functions run out-of-process, you can take advantage of the following benefits:
+### Benefits of isolated worker process
+
+When your .NET functions run in an isolated worker process, you can take advantage of the following benefits:
+ Fewer conflicts: because the functions run in a separate process, assemblies used in your app won't conflict with different versions of the same assemblies used by the host process.
+ Full control of the process: you control the start-up of the app and can control the configurations used and the middleware started.
When your .NET functions run out-of-process, you can take advantage of the follo
[!INCLUDE [functions-dotnet-supported-versions](../../includes/functions-dotnet-supported-versions.md)]
-## .NET isolated project
+## .NET isolated worker process project
A .NET isolated function project is basically a .NET console app project that targets a supported .NET runtime. The following are the basic files required in any .NET isolated project:
For complete examples, see the [.NET 6 isolated sample project](https://github.c
## Package references
-When your functions run out-of-process, your .NET project uses a unique set of packages, which implement both core functionality and binding extensions.
+A .NET Functions isolated worker process project uses a unique set of packages, for both core functionality and binding extensions.
### Core packages
-The following packages are required to run your .NET functions in an isolated process:
+The following packages are required to run your .NET functions in an isolated worker process:
+ [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)
+ [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/)

### Extension packages
-Because functions that run in a .NET isolated process use different binding types, they require a unique set of binding extension packages.
+Because .NET isolated worker process functions use different binding types, they require a unique set of binding extension packages.
You'll find these extension packages under [Microsoft.Azure.Functions.Worker.Extensions](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions).

## Start-up and configuration
-When using .NET isolated functions, you have access to the start-up of your function app, which is usually in Program.cs. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. When you run your functions out-of-process, you can much more easily add configurations, inject dependencies, and run your own middleware.
+When using .NET isolated functions, you have access to the start-up of your function app, which is usually in Program.cs. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. With .NET Functions isolated worker process, you can much more easily add configurations, inject dependencies, and run your own middleware.
The following code shows an example of a [HostBuilder] pipeline:
A [HostBuilder] is used to build and return a fully initialized [IHost] instance
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/FunctionApp/Program.cs" id="docsnippet_host_run":::

> [!IMPORTANT]
-> If your project targets .NET Framework 4.8, you also need to add `FunctionsDebugger.Enable();` before creating the HostBuilder. It should be the first line of your `Main()` method. See [Debugging when targeting .NET Framework](#debugging-when-targeting-net-framework) for more information.
+> If your project targets .NET Framework 4.8, you also need to add `FunctionsDebugger.Enable();` before creating the HostBuilder. It should be the first line of your `Main()` method. For more information, see [Debugging when targeting .NET Framework](#debugging-when-targeting-net-framework).
### Configuration
-The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run in an isolated process, which includes the following functionality:
+The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run in an isolated worker process, which includes the following functionality:
+ Default set of converters.
+ Set the default [JsonSerializerOptions] to ignore casing on property names.
The [ConfigureFunctionsWorkerDefaults] extension method has an overload that let
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/CustomMiddleware/Program.cs" id="docsnippet_middleware_register" :::
- The `UseWhen` extension method can be used to register a middleware which gets executed conditionally. A predicate which returns a boolean value needs to be passed to this method and the middleware will be participating in the invocation processing pipeline if the return value of the predicate is true.
+ The `UseWhen` extension method can be used to register a middleware that gets executed conditionally. You must pass to this method a predicate that returns a boolean value, and the middleware participates in the invocation processing pipeline when the return value of the predicate is `true`.
The following extension methods on [FunctionContext] make it easier to work with middleware in the isolated model.
The following extension methods on [FunctionContext] make it easier to work with
| **`GetOutputBindings`** | Gets the output binding entries for the current function execution. Each entry in the result of this method is of type `OutputBindingData`. You can use the `Value` property to get or set the value as needed. |
| **`BindInputAsync`** | Binds an input binding item for the requested `BindingMetadata` instance. For example, you can use this method when you have a function with a `BlobInput` input binding that needs to be accessed or updated by your middleware. |
-The following is an example of a middleware implementation which reads the `HttpRequestData` instance and updates the `HttpResponseData` instance during function execution. This middleware checks for the presence of a specific request header(x-correlationId), and when present uses the header value to stamp a response header. Otherwise, it generates a new GUID value and uses that for stamping the response header.
+The following is an example of a middleware implementation that reads the `HttpRequestData` instance and updates the `HttpResponseData` instance during function execution. This middleware checks for the presence of a specific request header (`x-correlationId`), and when present uses the header value to stamp a response header. Otherwise, it generates a new GUID value and uses that for stamping the response header.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/CustomMiddleware/StampHttpHeaderMiddleware.cs" id="docsnippet_middleware_example_stampheader" :::
For a more complete example of using custom middleware in your function app, see
A function can accept a [CancellationToken](/dotnet/api/system.threading.cancellationtoken) parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
-Cancellation tokens are supported in .NET functions when running in an isolated process. The following example raises an exception when a cancellation request has been received:
+Cancellation tokens are supported in .NET functions when running in an isolated worker process. The following example raises an exception when a cancellation request has been received:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Net7Worker/EventHubCancellationToken.cs" id="docsnippet_cancellation_token_throw":::
The following example performs clean-up actions if a cancellation request has be
## ReadyToRun
-You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the impact of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
+You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
-ReadyToRun is available in .NET 3.1, .NET 6 (both in-process and isolated process), and .NET 7, and it requires [version 3.0 or later](functions-versions.md) of the Azure Functions runtime.
+ReadyToRun is available in .NET 3.1, .NET 6 (both in-process and isolated worker process), and .NET 7, and it requires [version 3.0 or later](functions-versions.md) of the Azure Functions runtime.
To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following is the configuration for publishing to a Windows 32-bit function app.
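As a hedged sketch (not necessarily the article's exact snippet), the project file section for a Windows 32-bit function app might look like the following, assuming the `win-x86` runtime identifier:

```xml
<PropertyGroup>
  <!-- Enable ahead-of-time (ReadyToRun) compilation at publish time. -->
  <PublishReadyToRun>true</PublishReadyToRun>
  <!-- win-x86 targets a Windows 32-bit function app; adjust for your hosting plan. -->
  <RuntimeIdentifier>win-x86</RuntimeIdentifier>
</PropertyGroup>
```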
The `Function` attribute marks the method as a function entry point. The name mu
Because .NET isolated projects run in a separate worker process, bindings can't take advantage of rich binding classes, such as `ICollector<T>`, `IAsyncCollector<T>`, and `CloudBlockBlob`. There's also no direct support for types inherited from underlying service SDKs, such as [DocumentClient] and [BrokeredMessage]. Instead, bindings rely on strings, arrays, and serializable types, such as plain old class objects (POCOs).
-For HTTP triggers, you must use [HttpRequestData] and [HttpResponseData] to access the request and response data. This is because you don't have access to the original HTTP request and response objects when running out-of-process.
+For HTTP triggers, you must use [HttpRequestData] and [HttpResponseData] to access the request and response data. This is because you don't have access to the original HTTP request and response objects when using .NET Functions isolated worker process.
-For a complete set of reference samples for using triggers and bindings when running out-of-process, see the [binding extensions reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/Extensions).
+For a complete set of reference samples for using triggers and bindings with isolated worker process functions, see the [binding extensions reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/Extensions).
### Input bindings
An [ILogger] is also provided when using [dependency injection](#dependency-inje
## Debugging when targeting .NET Framework
-If your isolated project targets .NET Framework 4.8, the current preview scope requires manual steps to enable debugging. These steps are not required if using another target framework.
+If your isolated project targets .NET Framework 4.8, the current preview scope requires manual steps to enable debugging. These steps aren't required if using another target framework.
Your app should start with a call to `FunctionsDebugger.Enable();` as its first operation. This occurs in the `Main()` method before initializing a HostBuilder. Your `Program.cs` file should look similar to the following:
namespace MyDotnetFrameworkProject
}
```
-Next, you need to manually attach to the process using a .NET Framework debugger. Visual Studio doesn't do this automatically for isolated process .NET Framework apps yet, and the "Start Debugging" operation should be avoided.
+Next, you need to manually attach to the process using a .NET Framework debugger. Visual Studio doesn't do this automatically for isolated worker process .NET Framework apps yet, and the "Start Debugging" operation should be avoided.
In your project directory (or its build output directory), run:
Azure Functions .NET Worker (PID: <process id>) initialized in debug mode. Waiti
Where `<process id>` is the ID for your worker process. You can now use Visual Studio to manually attach to the process. For instructions on this operation, see [How to attach to a running process](/visualstudio/debugger/attach-to-running-processes-with-the-visual-studio-debugger#BKMK_Attach_to_a_running_process).
-Once the debugger is attached, the process execution will resume and you will be able to debug.
-
-## Differences with .NET class library functions
-
-This section describes the current state of the functional and behavioral differences running on out-of-process compared to .NET class library functions running in-process:
-
-| Feature/behavior | In-process | Out-of-process |
-| - | - | - |
-| .NET versions | .NET Core 3.1<br/>.NET 6.0 | .NET 6.0<br/>.NET 7.0 (Preview)<br/>.NET Framework 4.8 (GA) |
-| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
-| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
-| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported (public preview)](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions) |
-| Model types exposed by bindings | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient]<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations |
-| HTTP trigger model types| [HttpRequest]/[ObjectResult] | [HttpRequestData]/[HttpResponseData] |
-| Output binding interaction | Return values (single output only)<br/>`out` parameters<br/>`IAsyncCollector` | Return values (expanded model with single or [multiple outputs](#multiple-output-bindings)) |
-| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported |
-| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](#dependency-injection) |
-| Middleware | Not supported | [Supported](#middleware) |
-| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via dependency injection | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext] or via [dependency injection](#dependency-injection)|
-| Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported (public preview)](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights) |
-| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](#cancellation-tokens) |
-| Cold start times<sup>2</sup> | (Baseline) | Additionally includes process launch |
-| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | [Supported](#readytorun) |
-
-<sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models.
-
-<sup>2</sup> Cold start times may be additionally impacted on Windows when using some preview versions of .NET due to just-in-time loading of preview frameworks. This applies to both the in-process and out-of-process models but may be particularly noticeable if comparing across different versions. This delay for preview versions is not present on Linux plans.
+After the debugger is attached, the process execution resumes, and you'll be able to debug.
## Remote Debugging using Visual Studio
-Because your isolated process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see [Remote Debugging](functions-develop-vs.md?tabs=isolated-process#remote-debugging).
+Because your isolated worker process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see [Remote Debugging](functions-develop-vs.md?tabs=isolated-process#remote-debugging).
## Next steps

+ [Learn more about triggers and bindings](functions-triggers-bindings.md)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The following major runtime version values are supported:
## FUNCTIONS\_V2\_COMPATIBILITY\_MODE
-This setting enables your function app to run in a version 2.x compatible mode on the version 3.x runtime. Use this setting only if encountering issues when [upgrading your function app from version 2.x to 3.x of the runtime](functions-versions.md#migrating-from-2x-to-3x).
+This setting enables your function app to run in a version 2.x compatible mode on the version 3.x runtime. Use this setting only if encountering issues after upgrading your function app from version 2.x to 3.x of the runtime.
>[!IMPORTANT]
> This setting is intended only as a short-term workaround while you update your app to run correctly on version 3.x. This setting is supported as long as the [2.x runtime is supported](functions-versions.md). If you encounter issues that prevent your app from running on version 3.x without using this setting, please [report your issue](https://github.com/Azure/azure-functions-host/issues/new?template=Bug_report.md).
Valid values:
| Value | Language |
|||
| `dotnet` | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# (script)](functions-reference-csharp.md) |
-| `dotnet-isolated` | [C# (isolated process)](dotnet-isolated-process-guide.md) |
+| `dotnet-isolated` | [C# (isolated worker process)](dotnet-isolated-process-guide.md) |
| `java` | [Java](functions-reference-java.md) |
| `node` | [JavaScript](functions-reference-node.md)<br/>[TypeScript](functions-reference-node.md#typescript) |
| `powershell` | [PowerShell](functions-reference-powershell.md) |
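For a .NET isolated worker process app, the `dotnet-isolated` value is set in the app's settings. The following is a hedged sketch of a local.settings.json file for local development; the storage connection value is a placeholder for the local storage emulator:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
  }
}
```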
Sets the version of Node.js to use when running your function app on Windows. Yo
## WEBSITE\_OVERRIDE\_STICKY\_EXTENSION\_VERSIONS
-By default, the version settings for function apps are specific to each slot. This setting is used when upgrading functions by using [deployment slots](functions-deployment-slots.md). This prevents unanticipated behavior due to changing versions after a swap. Set to `0` in production and in the slot to make sure that all version settings are also swapped. For more information, see [Migrate using slots](functions-versions.md#migrate-using-slots).
+By default, the version settings for function apps are specific to each slot. This setting is used when upgrading functions by using [deployment slots](functions-deployment-slots.md). This prevents unanticipated behavior due to changing versions after a swap. Set to `0` in production and in the slot to make sure that all version settings are also swapped. For more information, see [Upgrade using slots](migrate-version-3-version-4.md#upgrade-using-slots).
|Key|Sample value|
|||
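As a hedged illustration, this is how the setting might appear in the portal's advanced edit (bulk) view of application settings. Per the guidance above, apply the same value in both the production slot and the staging slot, and leave it non-sticky so that it participates in the swap:

```json
[
  {
    "name": "WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS",
    "value": "0",
    "slotSetting": false
  }
]
```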
For more information, see [Create a function on Linux using a custom container](
### netFrameworkVersion
-Sets the specific version of .NET for C# functions. For more information, see [Migrating from 3.x to 4.x](functions-versions.md#migrating-from-3x-to-4x).
+Sets the specific version of .NET for C# functions. For more information, see [Upgrade your function app in Azure](migrate-version-3-version-4.md?pivots=programming-language-csharp#upgrade-your-function-app-in-azure).
### powerShellVersion
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In t
# [Isolated process](#tab/isolated-process)
-Isolated process isn't currently supported.
+Isolated worker process isn't currently supported.
<!-- Uncomment to support C# script examples. # [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
namespace AzureSQLSamples
# [Isolated process](#tab/isolated-process)
-Isolated process isn't currently supported.
+Isolated worker process isn't currently supported.
<!-- Uncomment to support C# script examples. # [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Add the extension to your project by installing this [NuGet package](https://www
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
> [!NOTE]
-> In the current preview, Azure SQL bindings aren't supported when your function app runs in an isolated process.
+> In the current preview, Azure SQL bindings aren't supported when your function app runs in an isolated worker process.
<!-- Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SignalRService/).
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
Here's the binding data in the *function.json* file:
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [Functions 2.x+](#tab/functionsv2/in-process)
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [Functions 2.x+](#tab/functionsv2/in-process)
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon
# [Isolated process](#tab/isolated-process/fixed-delay)
-Retry policies aren't yet supported when running in an isolated process.
+Retry policies aren't yet supported when running in an isolated worker process.
# [C# Script](#tab/csharp-script/fixed-delay)
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon
# [Isolated process](#tab/isolated-process/exponential-backoff)
-Retry policies aren't yet supported when running in an isolated process.
+Retry policies aren't yet supported when running in an isolated worker process.
# [C# Script](#tab/csharp-script/exponential-backoff)
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
For information on setup and configuration details, see [How to work with Event
The type of the output parameter used with an Event Grid output binding depends on the Functions runtime version, the binding extension version, and the modality of the C# function. The C# function can be created using one of the following C# modes:

* [In-process class library](functions-dotnet-class-library.md): compiled C# function that runs in the same process as the Functions runtime.
-* [Isolated process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+* [Isolated worker process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a worker process isolated from the runtime.
* [C# script](functions-reference-csharp.md): used primarily when creating C# functions in the Azure portal.

# [In-process](#tab/in-process)
def main(eventGridEvent: func.EventGridEvent,
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to configure the binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file.
The attribute's constructor takes the name of an application setting that contains the name of the custom topic, and the name of an application setting that contains the topic key.
Requires you to define a custom type, or use a string. See the [Example section]
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support isolated worker process.
# [Extension v3.x](#tab/extensionv3/csharp-script)
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
namespace Company.Function
```

# [Isolated process](#tab/isolated-process)
-When running your C# function in an isolated process, you need to define a custom type for event properties. The following example defines a `MyEventType` class.
+When running your C# function in an isolated worker process, you need to define a custom type for event properties. The following example defines a `MyEventType` class.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventGrid/EventGridFunction.cs" range="35-49":::
def main(event: func.EventGridEvent):
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
Requires you to define a custom type, or use a string. See the [Example section]
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Extension v3.x](#tab/extensionv3/csharp-script)
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
The Event Grid output binding is only available for Functions 2.x and higher.
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to configure the binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Http Webhook Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-output.md
The default return value for an HTTP-triggered function is:
::: zone pivot="programming-language-csharp"

## Attribute
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
def main(req: func.HttpRequest) -> func.HttpResponse:
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
In [in-process functions](functions-dotnet-class-library.md), the `HttpTriggerAt
# [Isolated process](#tab/isolated-process)
-In [isolated process](dotnet-isolated-process-guide.md) function apps, the `HttpTriggerAttribute` supports the following parameters:
+In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `HttpTriggerAttribute` supports the following parameters:
| Parameters | Description|
|---|---|
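A minimal sketch of the `HttpTriggerAttribute` usage the entry above refers to, for the isolated worker process model; the function name, route, and methods are illustrative assumptions, not taken from this change log:

```csharp
// Sketch only: isolated-worker-process HTTP trigger. Route, methods,
// and names are illustrative assumptions.
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class HelloFunction
{
    [Function("Hello")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = "hello/{name}")]
        HttpRequestData req,
        string name)
    {
        // Build a plain-text response echoing the route parameter.
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString($"Hello, {name}!");
        return response;
    }
}
```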
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions v1.x](#tab/functionsv1/isolated-process)
-Functions 1.x doesn't support running in an isolated process.
+Functions 1.x doesn't support running in an isolated worker process.
# [Functions v2.x+](#tab/functionsv2/csharp-script)
azure-functions Functions Bindings Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-output.md
An [in-process class library](functions-dotnet-class-library.md) is a compiled C
# [Isolated process](#tab/isolated-process)
-An [isolated process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime.
For a complete set of working Java examples for Confluent, see the [Kafka extens
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `Kafka` attribute to define the function trigger.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `Kafka` attribute to define the output binding.
The following table explains the properties you can set using this attribute:
azure-functions Functions Bindings Kafka Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md
An [in-process class library](functions-dotnet-class-library.md) is a compiled C
# [Isolated process](#tab/isolated-process)
-An [isolated process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime.
For a complete set of working Java examples for Event Hubs, see the [Kafka exten
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `KafkaTriggerAttribute` to define the function trigger.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `KafkaTriggerAttribute` to define the function trigger.
The following table explains the properties you can set using this trigger attribute:
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
Add the extension to your project by installing this [NuGet package](https://www
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Kafka).
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
def main(req: func.HttpRequest, outputMessage: func.Out[str]) -> func.HttpRespon
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
The attribute's constructor takes the following parameters:
ILogger log)
In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
-Here's a `RabbitMQTrigger` attribute in a method signature for an isolated process library:
+Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/RabbitMQ/RabbitMQFunction.cs" range="12-16":::
When working with C# functions:
# [Isolated process](#tab/isolated-process)
-The RabbitMQ bindings currently support only string and serializable object types when running in an isolated process.
+The RabbitMQ bindings currently support only string and serializable object types when running in an isolated worker process.
# [C# script](#tab/csharp-script)
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
def main(myQueueItem) -> None:
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
The attribute's constructor takes the following parameters:
public static void RabbitMQTest([RabbitMQTrigger("queue")] string message, ILogg
In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
-Here's a `RabbitMQTrigger` attribute in a method signature for an isolated process library:
+Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/RabbitMQ/RabbitMQFunction.cs" range="12-16":::
azure-functions Functions Bindings Rabbitmq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq.md
Add the extension to your project by installing this [NuGet package](https://www
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Rabbitmq).
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
The following table lists the currently available versions of the default *Micro
## Explicitly install extensions
-For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings).
+For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples, see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings).
For non-.NET languages and C# script, when you can't use extension bundles you need to manually install required binding extensions in your local project. The easiest way is to use Azure Functions Core Tools. To learn more, see [Install extensions](functions-run-local.md#install-extensions).
azure-functions Functions Bindings Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions v1.x](#tab/functionsv1/isolated-process)
-Functions 1.x doesn't support running in an isolated process.
+Functions 1.x doesn't support running in an isolated worker process.
# [Functions v2.x+](#tab/functionsv2/csharp-script)
You can omit setting the attribute's `ApiKey` property if you have your API key
# [Isolated process](#tab/isolated-process)
-We don't currently have an example for using the SendGrid binding in a function app running in an isolated process.
+We don't currently have an example for using the SendGrid binding in a function app running in an isolated worker process.
# [C# Script](#tab/csharp-script)
public class HttpTriggerSendGrid {
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
In [in-process](functions-dotnet-class-library.md) function apps, use the [SendG
# [Isolated process](#tab/isolated-process)
-In [isolated process](dotnet-isolated-process-guide.md) function apps, the `SendGridOutputAttribute` supports the following parameters:
+In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `SendGridOutputAttribute` supports the following parameters:
| Attribute/annotation property | Description |
|-|-|
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
def main(msg: func.ServiceBusMessage):
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
Add the extension to your project installing this [NuGet package](https://www.nu
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
Add the extension to your project installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.servicebus).
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Extension 5.x+](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
public static SignalRConnectionInfo Negotiate(
# [Isolated process](#tab/isolated-process)
-Sample code not available for isolated process.
+Sample code not available for the isolated worker process.
# [C# Script](#tab/csharp-script)
public SignalRConnectionInfo negotiate(
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
public SignalRGroupAction removeFromGroup(
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
def main(invocation) -> None:
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `SignalRTrigger` attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `SignalRTrigger` attribute to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
You can follow the sample in GitHub to deploy a chat room on Function App with S
* [Azure Functions development and configuration with Azure SignalR Service](../azure-signalr/signalr-concept-serverless-development-config.md)
* [SignalR Service Trigger binding sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat)
-* [SignalR Service Trigger binding sample in isolated process](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/DotnetIsolated-BidirectionChat)
+* [SignalR Service Trigger binding sample in isolated worker process](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/DotnetIsolated-BidirectionChat)
azure-functions Functions Bindings Signalr Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service.md
Add the extension to your project by installing this [NuGet package].
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SignalRService/).
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
public static void Run(
# [Isolated process](#tab/isolated-process)
-The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="9-26":::
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
public static void Run(
# [Isolated process](#tab/isolated-process)
-Isolated process defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
+In an isolated worker process, you define an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
|Parameter | Description|
|---|---|
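A minimal sketch of a `BlobInputAttribute` binding in the isolated worker process, matching the entry above; the queue name, container path, and function name are illustrative assumptions:

```csharp
// Sketch only: isolated-worker-process function pairing a queue trigger
// with a BlobInput binding. Names and paths are illustrative assumptions.
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class BlobReader
{
    [Function("BlobReader")]
    public static void Run(
        [QueueTrigger("myqueue-items")] string blobName,
        // {queueTrigger} resolves to the queue message text at run time.
        [BlobInput("input-container/{queueTrigger}")] string blobContent,
        FunctionContext context)
    {
        var logger = context.GetLogger("BlobReader");
        logger.LogInformation("Blob {Name} has {Length} characters", blobName, blobContent.Length);
    }
}
```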
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
public class ResizeImages
# [Isolated process](#tab/isolated-process)
-The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="4-26":::
def main(queuemsg: func.QueueMessage, inputblob: bytes, outputblob: func.Out[byt
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
For more information about the `BlobTrigger` attribute, see [Attributes](#attrib
# [Isolated process](#tab/isolated-process)
-The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="9-25":::
def main(myblob: func.InputStream):
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute to define the function. C# script instead uses a function.json configuration file.
The attribute's constructor takes the following parameters:
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [Microsoft.Azure.Functions.W
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
You can use the `StorageAccount` attribute to specify the storage account at cla
# [Isolated process](#tab/isolated-process)
-When running in an isolated process, you use the [QueueOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Storage.Queues/src/QueueOutputAttribute.cs), which takes the name of the queue, as shown in the following example:
+When running in an isolated worker process, you use the [QueueOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Storage.Queues/src/QueueOutputAttribute.cs), which takes the name of the queue, as shown in the following example:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_trigger" :::
-Only returned variables are supported when running in an isolated process. Output parameters can't be used.
+Only returned variables are supported when running in an isolated worker process. Output parameters can't be used.
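A minimal sketch of the return-value pattern described above for `QueueOutput` in the isolated worker process; the queue name and function name are illustrative assumptions:

```csharp
// Sketch only: in the isolated worker process, the queue message is the
// function's return value; output parameters aren't supported. Names are
// illustrative assumptions.
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class QueueWriter
{
    [Function("QueueWriter")]
    [QueueOutput("outqueue")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        // The returned string is written to "outqueue" as a single message.
        return $"Received request for {req.Url.AbsolutePath}";
    }
}
```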
# [C# script](#tab/csharp-script)
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
You can write multiple messages to the queue by using one of the following types
# [Extension 5.x+](#tab/extensionv5/isolated-process)
-Isolated process currently only supports binding to string parameters.
+Isolated worker process currently only supports binding to string parameters.
# [Extension 2.x+](#tab/extensionv2/isolated-process)
-Isolated process currently only supports binding to string parameters.
+Isolated worker process currently only supports binding to string parameters.
# [Extension 5.x+](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
def main(msg: func.QueueMessage):
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
When binding to an object, the Functions runtime tries to deserialize the JSON p
# [Extension 5.x+](#tab/extensionv5/isolated-process)
-Isolated process currently only supports binding to string parameters.
+Isolated worker process currently only supports binding to string parameters.
# [Extension 2.x+](#tab/extensionv2/isolated-process)
-Isolated process currently only supports binding to string parameters.
+Isolated worker process currently only supports binding to string parameters.
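Given the string-only constraint noted above, an isolated worker queue trigger binds the message body as a plain string and deserializes it manually if needed. This is a hedged sketch with illustrative names, not code from the referenced article.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class QueueMessageFunction
{
    private readonly ILogger _logger;

    public QueueMessageFunction(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<QueueMessageFunction>();
    }

    // Isolated worker model: the message arrives as a string.
    // Deserialize it yourself (e.g., with System.Text.Json) if it holds JSON.
    [Function("QueueMessageFunction")]
    public void Run([QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string message)
    {
        _logger.LogInformation("Queue message: {message}", message);
    }
}
```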
# [Extension 5.x+](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Extension 5.x+](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
An [in-process class library](functions-dotnet-class-library.md) is a compiled C
# [Isolated process](#tab/isolated-process)
-An [isolated process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
The `Filter` and `Take` properties are used to limit the number of entities retu
# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Azure Cosmos DB for Table extension does not currently support isolated process. You will instead need to use the combined Azure Storage extension.
+The Azure Cosmos DB for Table extension does not currently support isolated worker process. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support isolated worker process.
# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
With this simple binding, you can't programmatically handle a case in which no r
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
An in-process class library is a compiled C# function that runs in the same proc
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
To return a specific entity by key, use a plain-old CLR object (POCO). The speci
# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Azure Cosmos DB for Table extension does not currently support isolated process. You will instead need to use the combined Azure Storage extension.
+The Azure Cosmos DB for Table extension does not currently support isolated worker process. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support isolated worker process.
# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
::: zone pivot="programming-language-csharp"
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
Return a plain-old CLR object (POCO) with properties that can be mapped to the t
# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Azure Cosmos DB for Table extension does not currently support isolated process. You will instead need to use the combined Azure Storage extension.
+The Azure Cosmos DB for Table extension does not currently support isolated worker process. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support isolated worker process.
# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Tables are included in a combined package for Azure Storage. Install the [Micros
# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Azure Cosmos DB for Table extension does not currently support isolated process. You will instead need to use the [Storage extension](#storage-extension).
+The Azure Cosmos DB for Table extension does not currently support isolated worker process. You will instead need to use the [Storage extension](#storage-extension).
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support isolated worker process.
# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
def main(mytimer: func.TimerRequest) -> None:
::: zone pivot="programming-language-csharp"
## Attributes
-[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [Isolated process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function.
+[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function.
C# script instead uses a function.json configuration file.
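A hedged sketch of the isolated worker timer trigger described above, assuming the `Microsoft.Azure.Functions.Worker.Extensions.Timer` package linked in the diff; the schedule and names are illustrative.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class TimerFunction
{
    // "0 */5 * * * *" is an NCRONTAB expression: second 0 of every fifth minute.
    [Function("TimerFunction")]
    public void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timerInfo, FunctionContext context)
    {
        var logger = context.GetLogger("TimerFunction");
        logger.LogInformation("Timer fired. Next occurrence: {next}", timerInfo?.ScheduleStatus?.Next);
    }
}
```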
azure-functions Functions Bindings Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions v2.x+](#tab/functionsv2/isolated-process)
-There is currently no support for Twilio for an isolated process app.
+There is currently no support for Twilio for an isolated worker process app.
# [Functions v1.x](#tab/functionsv1/isolated-process)
-Functions 1.x doesn't support running in an isolated process.
+Functions 1.x doesn't support running in an isolated worker process.
# [Functions v2.x+](#tab/functionsv2/csharp-script)
This example uses the `TwilioSms` attribute with the method return value. An alt
# [Isolated process](#tab/isolated-process)
-The Twilio binding isn't currently supported for a function app running in an isolated process.
+The Twilio binding isn't currently supported for a function app running in an isolated worker process.
# [C# Script](#tab/csharp-script)
public class TwilioOutput {
::: zone pivot="programming-language-csharp"
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
In [in-process](functions-dotnet-class-library.md) function apps, use the [Twili
# [Isolated process](#tab/isolated-process)
-The Twilio binding isn't currently supported for a function app running in an isolated process.
+The Twilio binding isn't currently supported for a function app running in an isolated worker process.
# [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
def main(warmupContext: func.Context) -> None:
::: zone pivot="programming-language-csharp"
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTrigger` attribute to define the function. C# script instead uses a *function.json* configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTrigger` attribute to define the function. C# script instead uses a *function.json* configuration file.
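A minimal sketch of the `WarmupTrigger` attribute mentioned above, in the isolated worker model; the class and function names are placeholders, and the exact package and parameter type may differ from what the referenced article shows.

```csharp
using Microsoft.Azure.Functions.Worker;

public class WarmupFunction
{
    // Runs while a pre-warmed instance is being prepared; keep the work lightweight.
    [Function("Warmup")]
    public void Run([WarmupTrigger] object warmupContext, FunctionContext context)
    {
        // Initialize caches, connections, or other shared state here.
    }
}
```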
# [In-process](#tab/in-process)
azure-functions Functions Create Your First Function Visual Studio Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio-uiex.md
Title: "Quickstart: Create your first function in Azure using Visual Studio"
description: In this quickstart, you learn how to create and publish an HTTP trigger Azure Function by using Visual Studio. ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 09/30/2020 Last updated : 11/8/2022 ms.devlang: csharp
The `FunctionName` method attribute sets the name of the function, which by defa
+ **Select** <abbr title="When you publish your project to a function app that runs in a Consumption plan, you pay only for executions of your function app. Other hosting plans incur higher costs.">Consumption</abbr> in the Plan Type drop-down. (For more information, see [Consumption plan](consumption-plan.md).)
- + **Select** an <abbr title="A geographical reference to a specific Azure datacenter in which resources are allocated.See [regions](https://azure.microsoft.com/regions/) for a list of available regions.">location</abbr> from the drop-down.
+ + **Select** a <abbr title="A geographical reference to a specific Azure datacenter in which resources are allocated. See [regions](https://azure.microsoft.com/regions/) for a list of available regions.">location</abbr> from the drop-down.
+ **Select** an <abbr title="An Azure Storage account is required by the Functions runtime. Select New to configure a general-purpose storage account. You can also choose an existing account that meets the storage account requirements.">Azure Storage</abbr> account from the drop-down.
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Title: "Quickstart: Create your first C# function in Azure using Visual Studio"
description: "In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions." ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 09/08/2022 Last updated : 11/08/2022 ms.devlang: csharp adobe-target: true
adobe-target-content: ./functions-create-your-first-function-visual-studio-uiex
Azure Functions lets you use Visual Studio to create local C# function projects and then easily publish this project to run in a scalable serverless environment in Azure. If you prefer to develop your C# apps locally using Visual Studio Code, you should instead consider the [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
-By default, this article shows you how to create C# functions that run [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET. To create C# functions [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](functions-create-your-first-function-visual-studio.md?tabs=isolated-process). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) before getting started.
+By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions, such as .NET 6. When creating your project, you can choose to instead create a function that runs on .NET 6 in an [isolated worker process](dotnet-isolated-process-guide.md). [Isolated worker process](dotnet-isolated-process-guide.md) supports both LTS and Standard Term Support (STS) versions of .NET. For more information, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions) in the .NET Functions isolated worker process guide.
In this article, you learn how to:
Completing this quickstart incurs a small cost of a few USD cents or less in you
+ [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure to select the **Azure development** workload during installation.
-+ [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't already have an account [create a free one](https://azure.microsoft.com/free/dotnet/) before you begin.
++ [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't already have an account, [create a free one](https://azure.microsoft.com/free/dotnet/) before you begin.

## Create a function app project
The Azure Functions project template in Visual Studio creates a C# class library
1. For the **Additional information** settings, use the values in the following table:
- # [.NET 6](#tab/in-process)
+ # [In-process](#tab/in-process)
| Setting | Value | Description |
| --- | --- | --- |
The Azure Functions project template in Visual Studio creates a C# class library
:::image type="content" source="../../includes/media/functions-vs-tools-create/functions-project-settings-v4.png" alt-text="Screenshot of Azure Functions project settings.":::
- # [.NET 6 Isolated](#tab/isolated-process)
+ # [Isolated process](#tab/isolated-process)
| Setting | Value | Description |
| --- | --- | --- |
- | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated process when you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
+ | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated worker process when you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
| **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. |
| **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. |
| **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
The `FunctionName` method attribute sets the name of the function, which by defa
Your function definition should now look like the following code:
-# [.NET 6](#tab/in-process)
+# [In-process](#tab/in-process)
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-template/HttpExample.cs" range="15-18":::
-# [.NET 6 Isolated](#tab/isolated-process)
+# [Isolated process](#tab/isolated-process)
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-isolated/HttpExample.cs" range="11-13":::
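The snippets above are pulled in by `:::code` reference from an external repository, so here is a hedged approximation of what the isolated-process HTTP trigger template roughly looks like; the exact contents of the referenced file may differ.

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class HttpExample
{
    // AuthorizationLevel.Anonymous matches the quickstart's "Anonymous" setting.
    [Function("HttpExample")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Welcome to Azure Functions!");
        return response;
    }
}
```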
After you've verified that the function runs correctly on your local computer, i
## Publish the project to Azure
-Visual Studio can publish your local project to Azure. Before you can publish your project, you must have a function app in your Azure subscription. If you don't already have a function app in Azure, Visual Studio publishing creates one for you the first time you publish your project. In this article you create a function app and related Azure resources.
+Visual Studio can publish your local project to Azure. Before you can publish your project, you must have a function app in your Azure subscription. If you don't already have a function app in Azure, Visual Studio publishing creates one for you the first time you publish your project. In this article, you create a function app and related Azure resources.
[!INCLUDE [Publish the project to Azure](../../includes/functions-vstools-publish.md)]
Visual Studio can publish your local project to Azure. Before you can publish yo
*Resources* in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into *resource groups*, and you can delete everything in a group by deleting the group.
-You created Azure resources to complete this quickstart. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts, tutorials, or with any of the services you have created in this quickstart, don't clean up the resources.
+You created Azure resources to complete this quickstart. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts, tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
[!INCLUDE [functions-vstools-cleanup](../../includes/functions-vstools-cleanup.md)]
You created Azure resources to complete this quickstart. You may be billed for t
In this quickstart, you used Visual Studio to create and publish a C# function app in Azure with a simple HTTP trigger function.
-# [.NET 6](#tab/in-process)
+# [In-process](#tab/in-process)
To learn more about working with C# functions that run in-process with the Functions host, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
Advance to the next article to learn how to add an Azure Storage queue binding t
> [!div class="nextstepaction"]
> [Add an Azure Storage queue binding to your function](functions-add-output-binding-storage-queue-vs.md?tabs=in-process)
-# [.NET 6 Isolated](#tab/isolated-process)
+# [Isolated process](#tab/isolated-process)
-To learn more about working with C# functions that run in an isolated process, see the [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) to see other versions of supported .NET versions in an isolated process .
+To learn more about working with C# functions that run in an isolated worker process, see the [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) to see other versions of supported .NET versions in an isolated worker process.
Advance to the next article to learn how to add an Azure Storage queue binding to your function:
> [!div class="nextstepaction"]
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
There are a number of advantages to using deployment slots. The following scenar
- **Different environments for different purposes**: Using different slots gives you the opportunity to differentiate app instances before swapping to production or a staging slot.
- **Prewarming**: Deploying to a slot instead of directly to production allows the app to warm up before going live. Additionally, using slots reduces latency for HTTP-triggered workloads. Instances are warmed up before deployment, which reduces the cold start for newly deployed functions.
- **Easy fallbacks**: After a swap with production, the slot with a previously staged app now has the previous production app. If the changes swapped into the production slot aren't as you expect, you can immediately reverse the swap to get your "last known good instance" back.
-- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into production with a prewarmed instance. This is the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](functions-versions.md#minimum-downtime-upgrade).
+- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into production with a prewarmed instance. This is the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](migrate-version-3-version-4.md#minimum-downtime-upgrade).
## Swap operations
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
The way in which you develop functions on your local computer depends on your [l
|Environment |Languages |Description|
|--|--|--|
-|[Visual Studio Code](functions-develop-vs-code.md)| [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
-| [Command prompt or terminal](functions-run-local.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
-| [Visual Studio](functions-develop-vs.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated process)](dotnet-isolated-process-guide.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio](https://www.visualstudio.com/vs/), starting with Visual Studio 2019. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
+|[Visual Studio Code](functions-develop-vs-code.md)| [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated worker process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
+| [Command prompt or terminal](functions-run-local.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated worker process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
+| [Visual Studio](functions-develop-vs.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated worker process)](dotnet-isolated-process-guide.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio](https://www.visualstudio.com/vs/), starting with Visual Studio 2019. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Maven archetype supports Core Tools to enable development of Java functions. Version 2.x supports development on Linux, macOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md). |

[!INCLUDE [Don't mix development environments](../../includes/functions-mixed-dev-environments.md)]
When you develop your functions locally, you need to take trigger and binding be
## Local storage emulator
-During local development, you can use the local [Azurite emulator](/azure/storage/common/storage-use-azurite.md) when testing functions with Azure Storage bindings (Queue Storage, Blob Storage, and Table Storage), without having to connect to remote storage services. Azurite integrates with Visual Studio Code and Visual Studio, and you can also run it from the command prompt using npm. For more information, see [Use the Azurite emulator for local Azure Storage development](/storage/common/storage-use-azurite.md).
+During local development, you can use the local [Azurite emulator](../storage/common/storage-use-azurite.md) when testing functions with Azure Storage bindings (Queue Storage, Blob Storage, and Table Storage), without having to connect to remote storage services. Azurite integrates with Visual Studio Code and Visual Studio, and you can also run it from the command prompt using npm. For more information, see [Use the Azurite emulator for local Azure Storage development](../storage/common/storage-use-azurite.md).
The following setting in the `Values` collection of the local.settings.json file tells the local Functions host to use Azurite for the default `AzureWebJobsStorage` connection:
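The setting this sentence refers to uses Azurite's well-known development-storage shorthand for `AzureWebJobsStorage`. A minimal local.settings.json might look like the following; the `FUNCTIONS_WORKER_RUNTIME` value shown is illustrative and depends on your language.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
```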
With this setting in place, any Azure Storage trigger or binding that uses `Azur
## Next steps
-+ To learn more about local development of compiled C# functions (both in-process and isolated process) using Visual Studio, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md).
++ To learn more about local development of compiled C# functions (both in-process and isolated worker process) using Visual Studio, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md).
+ To learn more about local development of functions using VS Code on a Mac, Linux, or Windows computer, see the Visual Studio Code getting started article for your preferred language:
  + [C# (in-process)](create-first-function-vs-code-csharp.md)
- + [C# (isolated process)](create-first-function-vs-code-csharp.md?tabs=isolated-process)
+ + [C# (isolated worker process)](create-first-function-vs-code-csharp.md?tabs=isolated-process)
  + [Java](create-first-function-vs-code-java.md)
  + [JavaScript](create-first-function-vs-code-node.md)
  + [PowerShell](create-first-function-vs-code-powershell.md)
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
This section assumes you've already published to your function app using a relea
### Attach the debugger
-The way you attach the debugger depends on your execution mode. When debugging an isolated process app, you currently need to attach the remote debugger to a separate .NET process, and several other configuration steps are required.
+The way you attach the debugger depends on your execution mode. When debugging an isolated worker process app, you currently need to attach the remote debugger to a separate .NET process, and several other configuration steps are required.
When you're done, you should [disable remote debugging](#disable-remote-debugging).
Visual Studio connects to your function app and enables remote debugging, if it'
To attach a remote debugger to a function app running in a process separate from the Functions host:
-1. From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Download publish profile**. This action downloads a copy of the publish profile and opens the download location. You need this file, which contains the credentials used to attach to your isolated process running in Azure.
+1. From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Download publish profile**. This action downloads a copy of the publish profile and opens the download location. You need this file, which contains the credentials used to attach to your isolated worker process running in Azure.
> [!CAUTION]
> The .publishsettings file contains your credentials (unencoded) that are used to administer your function app. The security best practice for this file is to store it temporarily outside your source directories (for example in the Libraries\Documents folder), and then delete it after it's no longer needed. A malicious user who gains access to the .publishsettings file can edit, create, and delete your function app.
To attach a remote debugger to a function app running in a process separate from
![Visual Studio enter credential](./media/functions-develop-vs/creds-dialog.png)
-1. Check **Show process from all users** and then choose **dotnet.exe** and select **Attach**. When the operation completes, you're attached to your C# class library code running in an isolated process. At this point, you can debug your function app as normal.
+1. Check **Show process from all users** and then choose **dotnet.exe** and select **Attach**. When the operation completes, you're attached to your C# class library code running in an isolated worker process. At this point, you can debug your function app as normal.
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
Last updated 10/12/2022
This article is an introduction to developing Azure Functions by using C# in .NET class libraries.

>[!IMPORTANT]
->This article supports .NET class library functions that run in-process with the runtime. Your C# functions can also run out-of-process and isolated from the Functions runtime. The isolated model is the only way to run .NET 5.x and the preview of .NET Framework 4.8 using recent versions of the Functions runtime. To learn more, see [.NET isolated process functions](dotnet-isolated-process-guide.md).
+>This article supports .NET class library functions that run in-process with the runtime. Your C# functions can also run out-of-process and isolated from the Functions runtime. The isolated worker process model is the only way to run non-LTS versions of .NET and .NET Framework apps in current versions of the Functions runtime. To learn more, see [.NET isolated worker process functions](dotnet-isolated-process-guide.md).
+>For a comprehensive comparison between isolated worker process and in-process .NET Functions, see [Differences between in-process and isolated worker process .NET Azure Functions](dotnet-isolated-in-process-differences.md).
As a C# developer, you may also be interested in one of the following articles:
azure-functions Functions Dotnet Dependency Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-dependency-injection.md
Azure Functions supports the dependency injection (DI) software design pattern,
- Dependency injection patterns differ depending on whether your C# functions run [in-process](functions-dotnet-class-library.md) or [out-of-process](dotnet-isolated-process-guide.md).

> [!IMPORTANT]
-> The guidance in this article applies only to [C# class library functions](functions-dotnet-class-library.md), which run in-process with the runtime. This custom dependency injection model doesn't apply to [.NET isolated functions](dotnet-isolated-process-guide.md), which lets you run .NET 5.0 functions out-of-process. The .NET isolated process model relies on regular ASP.NET Core dependency injection patterns. To learn more, see [Dependency injection](dotnet-isolated-process-guide.md#dependency-injection) in the .NET isolated process guide.
+> The guidance in this article applies only to [C# class library functions](functions-dotnet-class-library.md), which run in-process with the runtime. This custom dependency injection model doesn't apply to [.NET isolated functions](dotnet-isolated-process-guide.md), which lets you run .NET functions out-of-process. The .NET isolated worker process model relies on regular ASP.NET Core dependency injection patterns. To learn more, see [Dependency injection](dotnet-isolated-process-guide.md#dependency-injection) in the .NET isolated worker process guide.
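As context for the change above, registration in the isolated worker model happens in the app's own `Program.cs` through the standard generic-host pattern. A minimal sketch, assuming the `Microsoft.Azure.Functions.Worker` package (which supplies `ConfigureFunctionsWorkerDefaults`); the `IMyService`/`MyService` pair is hypothetical and exists only to illustrate registration:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Hypothetical service, used only to show a registration.
public interface IMyService { }
public class MyService : IMyService { }

public class Program
{
    public static void Main()
    {
        var host = new HostBuilder()
            // Wires up the Functions worker (Microsoft.Azure.Functions.Worker).
            .ConfigureFunctionsWorkerDefaults()
            // Regular ASP.NET Core-style service registration.
            .ConfigureServices(services =>
            {
                services.AddSingleton<IMyService, MyService>();
            })
            .Build();

        host.Run();
    }
}
```

Function classes can then receive `IMyService` through ordinary constructor injection, with no Functions-specific `Startup` class required.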
## Prerequisites
azure-functions Functions Event Grid Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-grid-blob-trigger.md
When you use Visual Studio Code to create a Blob Storage triggered function, you
|Prompt|Selection|
|--|--|
|**Select a language**|Choose `C#`.|
- |**Select a .NET runtime**| Choose `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated process. |
+ |**Select a .NET runtime**| Choose `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated worker process. |
|**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
|**Provide a function name**|Type `BlobTriggerEventGrid`.|
|**Provide a namespace** | Type `My.Functions`. |
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitoring.md
In [C#](functions-dotnet-class-library.md#log-custom-telemetry-in-c-functions),
### Dependencies
-Starting with version 2.x of Functions, Application Insights automatically collects data on dependencies for bindings that use certain client SDKs. Application Insights distributed tracing and dependency tracking aren't currently supported for C# apps running in an [isolated process](dotnet-isolated-process-guide.md). Application Insights collects data on the following dependencies:
+Starting with version 2.x of Functions, Application Insights automatically collects data on dependencies for bindings that use certain client SDKs. Application Insights distributed tracing and dependency tracking aren't currently supported for C# apps running in an [isolated worker process](dotnet-isolated-process-guide.md). Application Insights collects data on the following dependencies:
+ Azure Cosmos DB
+ Azure Event Hubs
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
Azure Functions lets you develop functions using C# in one of the following ways
| --- | --- | --- | --- | --- |
| C# script | in-process | .csx | [Portal](functions-create-function-app-portal.md)<br/>[Core Tools](functions-run-local.md) | This article |
| C# class library | in-process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md)| [In-process C# class library functions](functions-dotnet-class-library.md) |
-| C# class library (isolated process)| in an isolated process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md) | [.NET isolated process functions](dotnet-isolated-process-guide.md) |
+| C# class library (isolated worker process)| in an isolated worker process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md) | [.NET isolated worker process functions](dotnet-isolated-process-guide.md) |
This article assumes that you've already read the [Azure Functions developers guide](functions-reference.md).
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
Certain languages may have additional considerations:
# [C\#](#tab/csharp)
-+ By default, version 2.x and later versions of the Core Tools create function app projects for the .NET runtime as [C# class projects](functions-dotnet-class-library.md) (.csproj). Version 3.x also supports creating functions that [run on .NET 5.0 in an isolated process](dotnet-isolated-process-guide.md). These C# projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure.
++ Core Tools lets you create function app projects for the .NET runtime as both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# class library projects (.csproj). These projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure.
+ Use the `--csx` parameter if you want to work locally with C# script (.csx) files. These are the same files you get when you create functions in the Azure portal and when using version 1.x of Core Tools. To learn more, see the [func init reference](functions-core-tools-reference.md#func-init).
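As an illustration of the project types above, creating each with Core Tools might look like the following. This is a sketch only: the worker runtime values and the `--csx` flag are assumed from Core Tools v4 conventions, so verify them against `func init --help` on your machine:

```azurecli
# In-process C# class library project (.csproj)
func init MyInProcApp --worker-runtime dotnet

# Isolated worker process C# project
func init MyIsolatedApp --worker-runtime dotnet-isolated

# C# script (.csx) project, like the files created in the Azure portal
func init MyCsxApp --worker-runtime dotnet --csx
```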
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you.
Previously updated : 10/04/2022
Last updated : 10/22/2022
zone_pivot_groups: programming-languages-set-functions
| 1.x | GA | Recommended only for C# apps that must use .NET Framework and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. This version is in maintenance mode, with enhancements provided only in later versions. |

> [!IMPORTANT]
-> Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime can no longer be supported. Before that time, please test, verify, and migrate your function apps to version 4.x of the Functions runtime. For more information, see [Migrating from 3.x to 4.x](#migrating-from-3x-to-4x). After the deadline, function apps can be created and deployed, and existing apps continue to run. However, your apps won't be eligible for new features, security patches, performance optimizations, and support until you upgrade them to version 4.x.
+> Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime can no longer be supported. Before that time, please test, verify, and migrate your function apps to version 4.x of the Functions runtime. For more information, see [Migrate apps from Azure Functions version 3.x to version 4.x](migrate-version-3-version-4.md). After the deadline, function apps can be created and deployed, and existing apps continue to run. However, your apps won't be eligible for new features, security patches, performance optimizations, and support until you upgrade them to version 4.x.
>
>End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all Azure Functions runtime languages.
>Functions version 1.x is still supported for C# function apps that require the .NET Framework. Preview support is now available in Functions 4.x to [run C# functions on .NET Framework 4.8](dotnet-isolated-process-guide.md#supported-versions).
The following table indicates which programming languages are currently supporte
## <a name="creating-1x-apps"></a>Run on a specific version
-By default, function apps created in the Azure portal and by the Azure CLI are set to version 4.x. You can modify this version if needed. You can only downgrade the runtime version to 1.x after you create your function app but before you add any functions. Moving to a later version is allowed even with apps that have existing functions. When your app has existing functions, be aware of any breaking changes between versions before moving to a later runtime version. The following sections detail breaking changes between versions, including language-specific breaking changes.
+The version of the Functions runtime used by published apps in Azure is dictated by the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) application setting. In some cases and for certain languages, other settings may apply.
-+ [Between 3.x and 4.x](#breaking-changes-between-3x-and-4x)
-+ [Between 2.x and 3.x](#breaking-changes-between-2x-and-3x)
-+ [Between 1.x and later versions](#migrating-from-1x-to-later-versions)
+By default, function apps created in the Azure portal, by the Azure CLI, or from Visual Studio tools are set to version 4.x. You can modify this version if needed. You can only downgrade the runtime version to 1.x after you create your function app but before you add any functions. Moving to a later version is allowed even with apps that have existing functions.
-If you don't see your programming language, go select it from the [top of the page](#top).
+### Migrating existing function apps
-Before making a change to the major version of the runtime, you should first test your existing code on the new runtime version. You can verify your app runs correctly after the upgrade by deploying to another function app running on the latest major version. You can also verify your code locally by using the runtime-specific version of the [Azure Functions Core Tools](functions-run-local.md), which includes the Functions runtime.
+When your app has existing functions, you must take precautions before moving to a later runtime version. The following articles detail breaking changes between versions, including language-specific breaking changes. They also provide step-by-step instructions for a successful migration of your existing function app.
-Downgrades to v2.x aren't supported. When possible, you should always run your apps on the latest supported version of the Functions runtime.
++ [Migrate from runtime version 3.x to version 4.x](./migrate-version-3-version-4.md)
++ [Migrate from runtime version 1.x to version 4.x](./migrate-version-1-version-4.md)

### Changing version of apps in Azure
-The version of the Functions runtime used by published apps in Azure is dictated by the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) application setting. The following major runtime version values are supported:
+The following major runtime version values are supported:
| Value | Runtime target |
| --- | --- |
| `~4` | 4.x |
| `~3` | 3.x |
-| `~2` | 2.x |
| `~1` | 1.x |

>[!IMPORTANT]
-> Don't arbitrarily change this app setting, because other app setting changes and changes to your function code may be required. You should instead change this setting in the **Function runtime settings** tab of the function app **Configuration** in the Azure portal when you are ready to make a major version upgrade.
-
-To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
+> Don't arbitrarily change this app setting, because other app setting changes and changes to your function code may be required. You should instead change this setting in the **Function runtime settings** tab of the function app **Configuration** in the Azure portal when you are ready to make a major version upgrade. For existing function apps, [follow the migration instructions](#migrating-existing-function-apps).
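For reference, the same version pin can also be applied from the command line; this mirrors the `az functionapp config appsettings set` invocation shown elsewhere in this article (replace the placeholders with your app and resource group names, and only do this as part of a deliberate migration):

```azurecli
az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
```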
### Pinning to a specific minor version
If you receive a warning about your extension bundle version not meeting a minim
To learn more about extension bundles, see [Extension bundles](functions-bindings-register.md#extension-bundles).
::: zone-end
-## <a name="migrating-from-3x-to-4x"></a>Migrating from 3.x to 4.x
-
-Azure Functions version 4.x is highly backwards compatible to version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. An upgrade is initiated when you set the `FUNCTIONS_EXTENSION_VERSION` app setting to a value of `~4`. For function apps running on Windows, you also need to set the `netFrameworkVersion` site setting to target .NET 6.
-
-Before you upgrade your app to version 4.x of the Functions runtime, you should do the following tasks:
-
-* Review the list of [breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x).
-* [Run the pre-upgrade validator](#run-the-pre-upgrade-validator).
-* When possible, [upgrade your local project environment to version 4.x](#upgrade-your-local-project). Fully test your app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md). When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio upgrade the function app to version 4.x during deployment. This upgrade uses the same process defined in [Migrate without slots](#migrate-without-slots).
-* Consider using a [staging slot](functions-deployment-slots.md) to test and verify your app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#migrate-using-slots).
-
-### Run the pre-upgrade validator
-
-Azure Functions provides a pre-upgrade validator to help you identify potential issues when migrating your function app to 4.x. To run the pre-upgrade validator:
-
-1. In the [Azure portal](https://portal.azure.com), navigate to your function app.
-
-1. Open the **Diagnose and solve problems** page.
-
-1. In **Function App Diagnostics**, start typing `Functions 4.x Pre-Upgrade Validator` and then choose it from the list.
-
-1. After validation completes, review the recommendations and address any issues in your app. If you need to make changes to your app, make sure to validate the changes against version 4.x of the Functions runtime, either [locally using Azure Functions Core Tools v4](#upgrade-your-local-project) or by [using a staging slot](#migrate-using-slots).
-
-### Migrate without slots
-
-The simplest way to upgrade to v4.x is to set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` on your function app in Azure. You must follow a [different procedure](#migrate-using-slots) on a site with slots.
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Update-AzFunctionAppSetting -AppSetting @{FUNCTIONS_EXTENSION_VERSION = "~4"} -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -Force
-```
---
-# [Windows](#tab/windows/azure-cli)
-
-When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
-
-```azurecli
-az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
-```
-
-.NET 6 is required for function apps in any language running on Windows.
-
-# [Windows](#tab/windows/azure-powershell)
-
-When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
-
-```azurepowershell
-Set-AzWebApp -NetFrameworkVersion v6.0 -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME>
-```
-
-.NET 6 is required for function apps in any language running on Windows.
-
-# [Linux](#tab/linux/azure-cli)
-
-When running .NET apps on Linux, you also need to update the `linuxFxVersion` site setting for .NET 6.0.
-
-```azurecli
-az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
-```
-
-# [Linux](#tab/linux/azure-powershell)
-
-When running .NET apps on Linux, you also need to update the `linuxFxVersion` site setting. Unfortunately, Azure PowerShell can't be used to set the `linuxFxVersion` at this time. Use the Azure CLI instead.
---
-In this example, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
-
-### Migrate using slots
-
-Using [deployment slots](functions-deployment-slots.md) is a good way to migrate your function app to the v4.x runtime from a previous version. By using a staging slot, you can run your app on the new runtime version in the staging slot and switch to production after verification. Slots also provide a way to minimize downtime during upgrade. If you need to minimize downtime, follow the steps in [Minimum downtime upgrade](#minimum-downtime-upgrade).
-
-After you've verified your app in the upgraded slot, you can swap the app and new version settings into production. This swap requires setting [`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0`](functions-app-settings.md#website_override_sticky_extension_versions) in the production slot. How you add this setting affects the amount of downtime required for the upgrade.
-
-#### Standard upgrade
-
-If your slot-enabled function app can handle the downtime of a full restart, you can update the `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` setting directly in the production slot. Because changing this setting directly in the production slot causes a restart that impacts availability, consider doing this change at a time of reduced traffic. You can then swap in the upgraded version from the staging slot.
-
-The [`Update-AzFunctionAppSetting`](/powershell/module/az.functions/update-azfunctionappsetting) PowerShell cmdlet doesn't currently support slots. You must use Azure CLI or the Azure portal.
-
-1. Use the following command to set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` in the production slot:
-
- ```azurecli
- az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
- ```
- This command causes the app running in the production slot to restart.
-
-1. Use the following command to also set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` in the staging slot:
-
- ```azurecli
- az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
-1. Use the following command to change `FUNCTIONS_EXTENSION_VERSION` and upgrade the staging slot to the new runtime version:
-
- ```azurecli
- az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
-1. Version 4.x of the Functions runtime requires .NET 6 in Windows. On Linux, .NET apps must also upgrade to .NET 6. Use the following command so that the runtime can run on .NET 6:
-
- # [Windows](#tab/windows)
-
- When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
-
- ```azurecli
- az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
- ```
-
- .NET 6 is required for function apps in any language running on Windows.
-
- # [Linux](#tab/linux/azure-cli)
-
- When running .NET functions on Linux, you also need to update the `linuxFxVersion` site setting for .NET 6.0.
-
- ```azurecli
- az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
- ```
-
-
-
- In this example, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
-
-1. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot now.
-
-1. Confirm that your function app runs correctly in the upgraded staging environment before swapping.
-
-1. Use the following command to swap the upgraded staging slot to production:
-
- ```azurecli
- az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
- ```
-
-#### Minimum downtime upgrade
-
-To minimize the downtime in your production app, you can swap the `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` setting from the staging slot into production. After that, you can swap in the upgraded version from a prewarmed staging slot.
-
-1. Use the following command to set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` in the staging slot:
-
- ```azurecli
- az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-1. Use the following commands to swap the slot with the new setting into production, and at the same time restore the version setting in the staging slot.
-
- ```azurecli
- az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
- az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~3 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
   You may see errors from the staging slot during the time between the swap and the runtime version being restored on staging. This can happen because having `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` only in staging during a swap removes the `FUNCTIONS_EXTENSION_VERSION` setting in staging. Without the version setting, your slot is in a bad state. Updating the version in the staging slot right after the swap should put the slot back into a good state, and you can roll back your changes if needed. However, any rollback of the swap also requires you to directly remove `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` from production before the swap back to prevent the same errors in production seen in staging. This change in the production setting would then cause a restart.
-
-1. Use the following command to again set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` in the staging slot:
-
- ```azurecli
- az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
- At this point, both slots have `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` set.
-
-1. Use the following command to change `FUNCTIONS_EXTENSION_VERSION` and upgrade the staging slot to the new runtime version:
-
- ```azurecli
- az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
-1. Version 4.x of the Functions runtime requires .NET 6 in Windows. On Linux, .NET apps must also upgrade to .NET 6. Use the following command so that the runtime can run on .NET 6:
-
- # [Windows](#tab/windows)
-
- When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
-
- ```azurecli
- az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
- ```
-
- .NET 6 is required for function apps in any language running on Windows.
-
- # [Linux](#tab/linux/azure-cli)
-
- When running .NET functions on Linux, you also need to update the `linuxFxVersion` site setting for .NET 6.0.
-
- ```azurecli
- az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
- ```
-
-
-
- In this example, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
-
-1. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot now.
-
-1. Confirm that your function app runs correctly in the upgraded staging environment before swapping.
-
-1. Use the following command to swap the upgraded and prewarmed staging slot to production:
-
- ```azurecli
- az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
- ```
-
-### Upgrade your local project
-
-Upgrading instructions are language dependent. If you don't see your language, choose it from the switcher at the [top of the article](#top).
-
-To update a C# class library project to .NET 6 and Azure Functions 4.x:
-
-1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.
-
-1. Update the `TargetFramework` and `AzureFunctionsVersion`, as follows:
-
- ```xml
- <TargetFramework>net6.0</TargetFramework>
- <AzureFunctionsVersion>v4</AzureFunctionsVersion>
- ```
-
-1. Update the NuGet packages referenced by your app to the latest versions. For more information, see [breaking changes](#breaking-changes-between-3x-and-4x).
- Specific packages depend on whether your functions run in-process or out-of-process.
-
- # [In-process](#tab/in-process)
-
- * [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) 4.0.0 or later
-
- # [Isolated process](#tab/isolated-process)
-
- * [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/) 1.5.2 or later
- * [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/) 1.2.0 or later
-
-
-To update your project to Azure Functions 4.x:
-
-1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.x.
-
-1. Update your app's [Azure Functions extensions bundle](functions-bindings-register.md#extension-bundles) to 2.x or above. For more information, see [breaking changes](#breaking-changes-between-3x-and-4x).
-
-1. If you're using Node.js version 10 or 12, move to one of the [supported version](functions-reference-node.md#node-version).
-1. If you're using PowerShell Core 6, move to one of the [supported versions](functions-reference-powershell.md#powershell-versions).
-1. If you're using Python 3.6, move to one of the [supported versions](functions-reference-python.md#python-version).
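
For reference, the extension bundle update in the steps above is a host.json change. A minimal sketch might look like the following example (the bundle `version` range shown is illustrative; use the range appropriate for your app):

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  }
}
```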
-
-### Breaking changes between 3.x and 4.x
-
-The following are key breaking changes to be aware of before upgrading a 3.x app to 4.x, including language-specific breaking changes. For a full list, see Azure Functions GitHub issues labeled [*Breaking Change: Approved*](https://github.com/Azure/azure-functions/issues?q=is%3Aissue+label%3A%22Breaking+Change%3A+Approved%22+is%3A%22closed+OR+open%22). More changes are expected during the preview period. Subscribe to [App Service Announcements](https://github.com/Azure/app-service-announcements/issues) for updates.
-
-If you don't see your programming language, select it from the [top of the page](#top).
-
-#### Runtime
-
-- Azure Functions proxies is a legacy feature for versions 1.x through 3.x of the Azure Functions runtime. Support for Functions proxies is returning in version 4.x so that you can successfully upgrade your function apps to the latest runtime version. As soon as possible, you should instead switch to integrating your function apps with Azure API Management. API Management lets you take advantage of a more complete set of features for defining, securing, managing, and monetizing your Functions-based APIs. For more information, see [API Management integration](functions-proxies.md#api-management-integration). For updates on the pending return of proxies in version 4.x, monitor the [App Service announcements page](https://github.com/Azure/app-service-announcements/issues).
-
-- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You should instead use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
-
-- Azure Functions 4.x now enforces [minimum version requirements for extensions](#minimum-extension-versions). Upgrade to the latest version of affected extensions. For non-.NET languages, [upgrade](./functions-bindings-register.md#extension-bundles) to extension bundle version 2.x or later. ([#1987](https://github.com/Azure/Azure-Functions/issues/1987))
-
-- Default and maximum timeouts are now enforced in 4.x for function apps running on Linux in a Consumption plan. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
-
-- Azure Functions 4.x uses `Azure.Identity` and `Azure.Security.KeyVault.Secrets` for the Key Vault provider and has deprecated the use of `Microsoft.Azure.KeyVault`. For more information about how to configure function app settings, see the Key Vault option in [Secret repositories](security-concepts.md#secret-repositories). ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
-
-- Function apps that share storage accounts now fail to start when their host IDs are the same. For more information, see [Host ID considerations](storage-considerations.md#host-id-considerations). ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))
-
-- Azure Functions 4.x supports .NET 6 in-process and isolated apps.
-
-- `InvalidHostServicesException` is now a fatal error. ([#2045](https://github.com/Azure/Azure-Functions/issues/2045))
-
-- `EnableEnhancedScopes` is enabled by default. ([#1954](https://github.com/Azure/Azure-Functions/issues/1954))
-
-- `HttpClient` is removed as a registered service. ([#1911](https://github.com/Azure/Azure-Functions/issues/1911))
-
-- Java 11 uses a single class loader. ([#1997](https://github.com/Azure/Azure-Functions/issues/1997))
-
-- Java 8 no longer loads worker jars. ([#1991](https://github.com/Azure/Azure-Functions/issues/1991))
-
-- Node.js versions 10 and 12 aren't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
-
-- Output serialization in Node.js apps was updated to address previous inconsistencies. ([#2007](https://github.com/Azure/Azure-Functions/issues/2007))
-
-- PowerShell 6 isn't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
-
-- The default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
-
-- Python 3.6 isn't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
-
-- Shared memory transfer is enabled by default. ([#1973](https://github.com/Azure/Azure-Functions/issues/1973))
-
-- The default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
-
-## Migrating from 2.x to 3.x
-
-Azure Functions version 3.x is highly backward compatible with version 2.x. Many apps can safely upgrade to 3.x without any code changes. While moving to 3.x is encouraged, run extensive tests before changing the major version in production apps.
-
-### Breaking changes between 2.x and 3.x
-
-The following are the language-specific changes to be aware of before upgrading a 2.x app to 3.x. If you don't see your programming language, select it from the [top of the page](#top).
-
-The main difference between versions when running .NET class library functions is the .NET Core runtime: Functions version 2.x is designed to run on .NET Core 2.2, and version 3.x is designed to run on .NET Core 3.1.
-
-* [Synchronous server operations are disabled by default](/dotnet/core/compatibility/2.2-3.0#http-synchronous-io-disabled-in-all-servers).
-
-* Breaking changes introduced by .NET Core in [version 3.1](/dotnet/core/compatibility/3.1) and [version 3.0](/dotnet/core/compatibility/3.0), which aren't specific to Functions but might still affect your app.
-
->[!NOTE]
->Due to support issues with .NET Core 2.2, function apps pinned to version 2 (`~2`) are essentially running on .NET Core 3.1. To learn more, see [Functions v2.x compatibility mode](functions-dotnet-class-library.md#functions-v2x-considerations).
--
-* Output bindings assigned through 1.x `context.done` or return values now behave the same as setting in 2.x+ `context.bindings`.
-
-* The timer trigger object is camelCase instead of PascalCase.
-
-* Event hub triggered functions with `dataType` binary will receive an array of `binary` instead of `string`.
-
-* The HTTP request payload can no longer be accessed via `context.bindingData.req`. It can still be accessed as an input parameter, `context.req`, and in `context.bindings`.
-
-* Node.js 8 is no longer supported and won't execute in 3.x functions.
-
-## Migrating from 1.x to later versions
-
-You may choose to migrate an existing app written to use the version 1.x runtime to instead use a newer version. Most of the changes you need to make are related to changes in the language runtime, such as C# API changes between .NET Framework 4.8 and .NET Core. You'll also need to make sure your code and libraries are compatible with the language runtime you choose. Finally, be sure to note any changes in triggers, bindings, and features highlighted below. For the best migration results, you should create a new function app in a new version and port your existing version 1.x function code to the new app.
-
-While it's possible to do an "in-place" upgrade by manually updating the app configuration, going from 1.x to a higher version includes some breaking changes. For example, in C#, the debugging object is changed from `TraceWriter` to `ILogger`. By creating a new version 3.x project, you start off with updated functions based on the latest version 3.x templates.
-
-### Changes in triggers and bindings after version 1.x
-
-Starting with version 2.x, you must install the extensions for specific triggers and bindings used by the functions in your app. The only exceptions are the HTTP and timer triggers, which don't require an extension. For more information, see [Register and install binding extensions](./functions-bindings-register.md).
-
-There are also a few changes in the *function.json* or attributes of the function between versions. For example, the Event Hubs `path` property is now `eventHubName`. See the [existing binding table](#bindings) for links to documentation for each binding.
-
-### Changes in features and functionality after version 1.x
-
-A few features were removed, updated, or replaced after version 1.x. This section details the changes you see in later versions after having used version 1.x.
-
-In version 2.x, the following changes were made:
-
-* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure Files by default. When you upgrade an app from version 1.x to version 2.x, existing secrets that are in Azure Files are reset.
-
-* The version 2.x runtime doesn't include built-in support for webhook providers. This change was made to improve performance. You can still use HTTP triggers as endpoints for webhooks.
-
-* The host configuration file (host.json) should be empty or have the string `"version": "2.0"`.
-
-* To improve monitoring, the WebJobs dashboard in the portal, which used the [`AzureWebJobsDashboard`](functions-app-settings.md#azurewebjobsdashboard) setting is replaced with Azure Application Insights, which uses the [`APPINSIGHTS_INSTRUMENTATIONKEY`](functions-app-settings.md#appinsights_instrumentationkey) setting. For more information, see [Monitor Azure Functions](functions-monitoring.md).
-
-* All functions in a function app must share the same language. When you create a function app, you must choose a runtime stack for the app. The runtime stack is specified by the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) value in application settings. This requirement was added to improve footprint and startup time. When developing locally, you must also include this setting in the [local.settings.json file](functions-develop-local.md#local-settings-file).
-
-* The default timeout for functions in an App Service plan is changed to 30 minutes. You can manually change the timeout back to unlimited by using the [functionTimeout](functions-host-json.md#functiontimeout) setting in host.json.
-
-* HTTP concurrency throttles are implemented by default for Consumption plan functions, with a default of 100 concurrent requests per instance. You can change this behavior in the [`maxConcurrentRequests`](functions-host-json.md#http) setting in the host.json file.
-
-* Because of [.NET Core limitations](https://github.com/Azure/azure-functions-host/issues/3414), support for F# script (`.fsx` files) functions has been removed. Compiled F# functions (.fs) are still supported.
-
-* The URL format of Event Grid trigger webhooks has been changed to follow this pattern: `https://{app}/runtime/webhooks/{triggerName}`.
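
Two of the changes above surface as host.json settings. As a sketch, an App Service plan app could restore the unlimited timeout and raise the per-instance HTTP concurrency limit like this (the values shown are illustrative):

```json
{
  "version": "2.0",
  "functionTimeout": "-1",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 200
    }
  }
}
```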
-
-### Locally developed application versions
+## Locally developed application versions
You can make the following updates to your local function app projects to change the targeted runtime versions.
-#### Visual Studio runtime versions
+### Visual Studio runtime versions
In Visual Studio, you select the runtime version when you create a project. Azure Functions tools for Visual Studio supports the three major runtime versions. The correct version is used when debugging and publishing based on project settings. The version settings are defined in the `.csproj` file in the following properties:
In Visual Studio, you select the runtime version when you create a project. Azur
<AzureFunctionsVersion>v4</AzureFunctionsVersion> ```
-You can also choose `net6.0`, `net7.0`, or `net48` as the target framework if you are using [.NET isolated process functions](dotnet-isolated-process-guide.md). Support for `net7.0` and `net48` is currently in preview.
+You can also choose `net6.0`, `net7.0`, or `net48` as the target framework if you are using [.NET isolated worker process functions](dotnet-isolated-process-guide.md). Support for `net7.0` and `net48` is currently in preview.
> [!NOTE]
> Azure Functions 4.x requires the `Microsoft.NET.Sdk.Functions` extension to be at least `4.0.0`.
You can also choose `net6.0`, `net7.0`, or `net48` as the target framework if yo
<AzureFunctionsVersion>v3</AzureFunctionsVersion> ```
-You can also choose `net5.0` as the target framework if you're using [.NET isolated process functions](dotnet-isolated-process-guide.md).
+You can also choose `net5.0` as the target framework if you're using [.NET isolated worker process functions](dotnet-isolated-process-guide.md).
> [!NOTE]
> Azure Functions 3.x and .NET requires the `Microsoft.NET.Sdk.Functions` extension to be at least `3.0.0`.
You can also choose `net5.0` as the target framework if you're using [.NET isola
```
-###### Updating 2.x apps to 3.x in Visual Studio
-
-You can open an existing function targeting 2.x and move to 3.x by editing the `.csproj` file and updating the values above. Visual Studio manages runtime versions automatically for you based on project metadata. However, if you've never created a 3.x app before, Visual Studio may not yet have the templates and runtime for 3.x on your machine. This issue may present itself with an error like "no Functions runtime available that matches the version specified in the project." To fetch the latest templates and runtime, go through the experience to create a new function project. When you get to the version and template select screen, wait for Visual Studio to complete fetching the latest templates. After the latest .NET Core 3 templates are available and displayed, you can run and debug any project configured for version 3.x.
-
-> [!IMPORTANT]
-> Version 3.x functions can only be developed in Visual Studio if using Visual Studio version 16.4 or newer.
-
-#### VS Code and Azure Functions Core Tools
+### VS Code and Azure Functions Core Tools
-[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. To develop against version 3.x, install version 3.x of the Core Tools. Version 2.x development requires version 2.x of the Core Tools, and so on. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
+[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. To develop against version 4.x, install version 4.x of the Core Tools. Version 3.x development requires version 3.x of the Core Tools, and so on. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
For Visual Studio Code development, you may also need to update the user setting for the `azureFunctions.projectRuntime` to match the version of the tools installed. This setting also updates the templates and languages used during function app creation. To create apps in `~3`, you update the `azureFunctions.projectRuntime` user setting to `~3`. ![Azure Functions extension runtime setting](./media/functions-versions/vs-code-version-runtime.png)
-#### Maven and Java apps
-
-You can migrate Java apps from version 2.x to 3.x by [installing the 3.x version of the core tools](functions-run-local.md#install-the-azure-functions-core-tools) required to run locally. After verifying that your app works correctly running locally on version 3.x, update the app's `POM.xml` file to modify the `FUNCTIONS_EXTENSION_VERSION` setting to `~3`, as in the following example:
-
-```xml
-<configuration>
- <resourceGroup>${functionResourceGroup}</resourceGroup>
- <appName>${functionAppName}</appName>
- <region>${functionAppRegion}</region>
- <appSettings>
- <property>
- <name>WEBSITE_RUN_FROM_PACKAGE</name>
- <value>1</value>
- </property>
- <property>
- <name>FUNCTIONS_EXTENSION_VERSION</name>
- <value>~3</value>
- </property>
- </appSettings>
-</configuration>
-```
- ## Bindings Starting with version 2.x, the runtime uses a new [binding extensibility model](https://github.com/Azure/azure-webjobs-sdk-extensions/wiki/Binding-Extensions-Overview) that offers these advantages:
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
+
+ Title: Migrate apps from Azure Functions version 1.x to 4.x
+description: This article shows you how to upgrade your existing function apps running on version 1.x of the Azure Functions runtime to be able to run on version 4.x of the runtime.
++ Last updated : 11/05/2022+
+zone_pivot_groups: programming-languages-set-functions
++
+# Migrate apps from Azure Functions version 1.x to version 4.x
+
+> [!IMPORTANT]
+> Java isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Java app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+> [!IMPORTANT]
+> TypeScript isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your TypeScript app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+> [!IMPORTANT]
+> PowerShell isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your PowerShell app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+> [!IMPORTANT]
+> Python isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Python app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+If you're running on version 1.x of the Azure Functions runtime, it's likely because your C# app requires .NET Framework. Version 4.x of the runtime now lets you run .NET Framework 4.8 apps. At this point, you should consider migrating your version 1.x function apps to run on version 4.x. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md).
+
+Migrating a C# function app from version 1.x to version 4.x of the Functions runtime requires you to make changes to your project code. Many of these changes are a result of changes in the C# language and .NET APIs. JavaScript apps generally don't require code changes to migrate.
+
+You can upgrade your C# project to one of the following versions of .NET, all of which can run on Functions version 4.x:
+
+| .NET version | Process model<sup>*</sup> |
+| | |
+| .NET 7 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+| .NET 6 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+| .NET 6 | [In-process](./functions-dotnet-class-library.md) |
+| .NET&nbsp;Framework&nbsp;4.8 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+
+<sup>*</sup> [In-process execution](./functions-dotnet-class-library.md) is only supported for Long Term Support (LTS) releases of .NET. Non-LTS releases and .NET Framework require you to run in an [isolated worker process](./dotnet-isolated-process-guide.md). For a feature and functionality comparison between the two process models, see [Differences between in-process and isolated worker process .NET Azure Functions](./dotnet-isolated-in-process-differences.md).
+This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime.
+
+## Prepare for migration
+
+Before you upgrade your app to version 4.x of the Functions runtime, you should do the following tasks:
+
+* Review the list of [behavior changes after version 1.x](#behavior-changes-after-version-1x). Migrating from version 1.x to version 4.x also can affect bindings.
+* Review [Update your project files](#update-your-project-files) and decide which version of .NET you want to migrate to. Complete the steps to migrate your local project to your chosen version of .NET.
+* Complete the steps in [update your project files](#update-your-project-files) to migrate your local project to run locally on version 4.x and a supported version of Node.js.
+* After migrating your local project, fully test the app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md).
+
+* Upgrade your function app in Azure to the new version. If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your migrated app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#upgrade-using-slots).
+* Republish your migrated project to the upgraded function app. When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio upgrade the function app to version 4.x during deployment. This upgrade uses the same process defined in [Migrate without slots](#upgrade-without-slots).
+* Republish your migrated project to the upgraded function app.
+* Consider using a [staging slot](functions-deployment-slots.md) to test and verify your app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#upgrade-using-slots).
+## Update your project files
+
+The following sections describe the updates you must make to your C# project files to be able to run on one of the supported versions of .NET in Functions version 4.x. The updates shown are ones common to most projects. Your project code may require updates not mentioned in this article, especially when using custom NuGet packages.
+
+Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).
+
+### .csproj file
+
+The following example is a .csproj project file that runs on version 1.x:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net48</TargetFramework>
+ <AzureFunctionsVersion>v1</AzureFunctionsVersion>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.24" />
+ </ItemGroup>
+ <ItemGroup>
+ <Reference Include="Microsoft.CSharp" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+</Project>
+```
+
+Use one of the following procedures to update this XML file to run in Functions version 4.x:
+
+# [.NET Framework 4.8](#tab/v4)
+
+The following changes are required in the .csproj XML project file:
+
+1. Change the value of `PropertyGroup`.`AzureFunctionsVersion` to `v4`.
+
+1. Add the following `OutputType` element to the `PropertyGroup`:
+
+ :::code language="xml" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/Company.FunctionApp.csproj" range="5-5":::
+
+1. Replace the existing `ItemGroup`.`PackageReference` with the following `ItemGroup`:
+
+ :::code language="xml" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/Company.FunctionApp.csproj" range="12-15":::
+
+1. Add the following new `ItemGroup`:
+
+ :::code language="xml" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/Company.FunctionApp.csproj" range="31-33":::
+
+After you make these changes, your updated project should look like the following example:
+
+```xml
+
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net48</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ <RootNamespace>My.Namespace</RootNamespace>
+ <OutputType>Exe</OutputType>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.8.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.7.0" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+ <ItemGroup>
+ <Folder Include="Properties\" />
+ </ItemGroup>
+</Project>
+```
+
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 7](#tab/net7)
++++
+### program.cs file
+
+In most cases, migrating requires you to add the following program.cs file to your project:
+
+# [.NET Framework 4.8](#tab/v4)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+A program.cs file isn't required when running in-process.
+
+# [.NET 7](#tab/net7)
++++
+### host.json file
+
+Settings in the host.json file apply at the function app level, both locally and in Azure. In version 1.x, your host.json file is either empty or it contains some settings that apply to all functions in the function app. For more information, see [Host.json v1](./functions-host-json-v1.md). If your host.json file has setting values, review the [host.json v2 format](./functions-host-json.md) for any changes.
+
+To run on version 4.x, you must add `"version": "2.0"` to the host.json file. You should also consider adding `logging` to your configuration, as in the following examples:
+
+# [.NET Framework 4.8](#tab/v4)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 7](#tab/net7)
++++
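
Whichever process model you choose, a minimal migrated host.json follows the same shape. The following sketch shows the required `version` element plus a typical Application Insights sampling configuration (the `logging` settings shown are illustrative):

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  }
}
```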
+### local.settings.json file
+
+The local.settings.json file is only used when running locally. For information, see [Local settings file](functions-develop-local.md#local-settings-file). In version 1.x, the local.settings.json file has only two required values:
++
+When you upgrade to version 4.x, make sure that your local.settings.json file has at least the following elements:
+
+# [.NET Framework 4.8](#tab/v4)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 7](#tab/net7)
++++
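
As a sketch, a migrated C# project's local.settings.json typically contains at least the following elements. The `FUNCTIONS_WORKER_RUNTIME` value depends on your process model: `dotnet` for in-process apps and `dotnet-isolated` for isolated worker process apps.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
  }
}
```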
+### Namespace changes
+
+C# functions that run in an isolated worker process use libraries in a different namespace than the libraries used in version 1.x. In-process functions use libraries in the same namespace.
+
+Version 1.x and in-process libraries are generally in the namespace `Microsoft.Azure.WebJobs.*`. Isolated worker process function apps use libraries in the namespace `Microsoft.Azure.Functions.Worker.*`. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) that follow.
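
The swap is mechanical at the top of each source file. As an illustrative fragment (isolated worker process, `using` directives only):

```csharp
// Version 1.x / in-process WebJobs namespaces:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

// Version 4.x isolated worker process equivalents:
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
```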
+
+### Class name changes
+
+Some key classes changed names between version 1.x and version 4.x. These changes are a result either of changes in .NET APIs or of differences between the in-process and isolated worker process models. The following table indicates these key .NET classes used by Azure Functions that changed after version 1.x:
+
+# [.NET Framework 4.8](#tab/v4)
+
+| Version 1.x | .NET Framework 4.8 |
+| | |
+| `FunctionName` (attribute) | `Function` (attribute) |
+| `TraceWriter` | `ILogger` |
+| `HttpRequestMessage` | `HttpRequestData` |
+| `HttpResponseMessage` | `HttpResponseData` |
+
+# [.NET 6 (isolated)](#tab/net6-isolated)
+
+| Version 1.x | .NET 6 (isolated) |
+| | |
+| `FunctionName` (attribute) | `Function` (attribute) |
+| `TraceWriter` | `ILogger` |
+| `HttpRequestMessage` | `HttpRequestData` |
+| `HttpResponseMessage` | `HttpResponseData` |
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+| Version 1.x | .NET 6 (in-process) |
+| | |
+| `FunctionName` (attribute) | `FunctionName` (attribute) |
+| `TraceWriter` | `ILogger` |
+| `HttpRequestMessage` | `HttpRequest` |
+| `HttpResponseMessage` | `OkObjectResult` |
+
+# [.NET 7](#tab/net7)
+
+| Version 1.x | .NET 7 |
+| | |
+| `FunctionName` (attribute) | `Function` (attribute) |
+| `TraceWriter` | `ILogger` |
+| `HttpRequestMessage` | `HttpRequestData` |
+| `HttpResponseMessage` | `HttpResponseData` |
+++
+There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
+
+### HTTP trigger template
+
+Most of the code changes between version 1.x and version 4.x can be seen in HTTP triggered functions. The HTTP trigger template for version 1.x looks like the following example:
+
+```csharp
+using System.Linq;
+using System.Net;
+using System.Net.Http;
+using System.Threading.Tasks;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Host;
+
+namespace Company.Function
+{
+ public static class HttpTriggerCSharp
+ {
+ [FunctionName("HttpTriggerCSharp")]
+ public static async Task<HttpResponseMessage>
+ Run([HttpTrigger(AuthorizationLevel.Function, "get", "post",
+ Route = null)]HttpRequestMessage req, TraceWriter log)
+ {
+ log.Info("C# HTTP trigger function processed a request.");
+
+ // parse query parameter
+ string name = req.GetQueryNameValuePairs()
+ .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
+ .Value;
+
+ if (name == null)
+ {
+ // Get request body
+ dynamic data = await req.Content.ReadAsAsync<object>();
+ name = data?.name;
+ }
+
+ return name == null
+ ? req.CreateResponse(HttpStatusCode.BadRequest,
+ "Please pass a name on the query string or in the request body")
+ : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
+ }
+ }
+}
+```
+
+In version 4.x, the HTTP trigger template looks like the following example:
+
+# [.NET Framework 4.8](#tab/v4)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 7](#tab/net7)
+++
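
The tabbed samples above come from the project templates. As a point of comparison, a version 4.x isolated worker process equivalent of the version 1.x template might look like the following sketch (the query-string parsing shown is one illustrative approach, not the only one):

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

namespace Company.Function
{
    public static class HttpTriggerCSharp
    {
        [Function("HttpTriggerCSharp")]
        public static HttpResponseData Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req,
            FunctionContext executionContext)
        {
            var logger = executionContext.GetLogger("HttpTriggerCSharp");
            logger.LogInformation("C# HTTP trigger function processed a request.");

            // In the isolated model, the query string is read from the request URL.
            string name = System.Web.HttpUtility.ParseQueryString(req.Url.Query)["name"];

            // Responses are built from the request via HttpResponseData.
            var response = req.CreateResponse(
                name == null ? HttpStatusCode.BadRequest : HttpStatusCode.OK);
            response.WriteString(name == null
                ? "Please pass a name on the query string"
                : $"Hello {name}");
            return response;
        }
    }
}
```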
+## Update your project files
+
+To update your project to Azure Functions 4.x:
+
+1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.x.
+
+1. Move to one of the [Node.js versions supported on version 4.x](functions-reference-node.md#node-version).
+
+1. Add both `version` and `extensionBundle` elements to the host.json, so that it looks like the following example:
+
+ [!INCLUDE [functions-extension-bundles-json-v3](../../includes/functions-extension-bundles-json-v3.md)]
+
+ The `extensionBundle` element is required because, after version 1.x, bindings are maintained as external packages. For more information, see [Extension bundles](functions-bindings-register.md#extension-bundles).
+
+1. Update your local.settings.json file so that it has at least the following elements:
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ "FUNCTIONS_WORKER_RUNTIME": "node"
+ }
+ }
+ ```
+
+ The `AzureWebJobsStorage` setting can be either the Azurite storage emulator or an actual Azure storage account. For more information, see [Local storage emulator](functions-develop-local.md#local-storage-emulator).
+## Behavior changes after version 1.x
+
+This section details changes made after version 1.x in both trigger and binding behaviors as well as in core Functions features and behaviors.
+
+### Changes in triggers and bindings
+
+Starting with version 2.x, you must install the extensions for specific triggers and bindings used by the functions in your app. The only exceptions are HTTP and timer triggers, which don't require an extension. For more information, see [Register and install binding extensions](./functions-bindings-register.md).
+
+There are also a few changes in the *function.json* or attributes of the function between versions. For example, the Event Hubs `path` property is now `eventHubName`. See the [existing binding table](functions-versions.md#bindings) for links to documentation for each binding.
+
+### Changes in features and functionality
+
+A few features were removed, updated, or replaced after version 1.x. This section details the changes you see in later versions after having used version 1.x.
+
+In version 2.x, the following changes were made:
+
+* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure Files by default. When you upgrade an app from version 1.x to version 2.x, existing secrets that are in Azure Files are reset.
+
+* The version 2.x runtime doesn't include built-in support for webhook providers. This change was made to improve performance. You can still use HTTP triggers as endpoints for webhooks.
+
+* To improve monitoring, the WebJobs dashboard in the portal, which used the [`AzureWebJobsDashboard`](functions-app-settings.md#azurewebjobsdashboard) setting is replaced with Azure Application Insights, which uses the [`APPINSIGHTS_INSTRUMENTATIONKEY`](functions-app-settings.md#appinsights_instrumentationkey) setting. For more information, see [Monitor Azure Functions](functions-monitoring.md).
+
+* All functions in a function app must share the same language. When you create a function app, you must choose a runtime stack for the app. The runtime stack is specified by the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) value in application settings. This requirement was added to improve footprint and startup time. When developing locally, you must also include this setting in the [local.settings.json file](functions-develop-local.md#local-settings-file).
+
+* The default timeout for functions in an App Service plan is changed to 30 minutes. You can manually change the timeout back to unlimited by using the [functionTimeout](functions-host-json.md#functiontimeout) setting in host.json.
+
+* HTTP concurrency throttles are implemented by default for Consumption plan functions, with a default of 100 concurrent requests per instance. You can change this behavior in the [`maxConcurrentRequests`](functions-host-json.md#http) setting in the host.json file.
+
+* Because of [.NET Core limitations](https://github.com/Azure/azure-functions-host/issues/3414), support for F# script (`.fsx` files) functions has been removed. Compiled F# functions (.fs) are still supported.
+
+* The URL format of Event Grid trigger webhooks has been changed to follow this pattern: `https://{app}/runtime/webhooks/{triggerName}`.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Functions versions](functions-versions.md)
++
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
+
+ Title: Migrate apps from Azure Functions version 3.x to 4.x
+description: This article shows you how to upgrade your existing function apps running on version 3.x of the Azure Functions runtime to be able to run on version 4.x of the runtime.
++ Last updated : 11/05/2022
+zone_pivot_groups: programming-languages-set-functions
++
+# <a name="top"></a>Migrate apps from Azure Functions version 3.x to version 4.x
+
+Azure Functions version 4.x is highly backwards compatible with version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md).
+
+This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
+
+## Choose your target .NET
+
+On version 3.x of the Functions runtime, your C# function app targets .NET Core 3.1. When you migrate your function app to version 4.x, you have the opportunity to choose the target version of .NET. You can upgrade your C# project to one of the following versions of .NET, all of which can run on Functions version 4.x:
+
+| .NET version | Process model<sup>*</sup> |
+| --- | --- |
+| .NET 7 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+| .NET 6 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+| .NET 6 | [In-process](./functions-dotnet-class-library.md) |
+
+<sup>*</sup> [In-process execution](./functions-dotnet-class-library.md) is only supported for Long Term Support (LTS) releases of .NET. Non-LTS releases and .NET Framework require you to run in an [isolated worker process](./dotnet-isolated-process-guide.md).
+
+Upgrading from .NET Core 3.1 to .NET 6 running in-process requires minimal updates to your project and virtually no updates to code. Switching to the isolated worker process model requires you to make changes to your code, but provides the flexibility of being able to easily run on any future version of .NET. For a feature and functionality comparison between the two process models, see [Differences between in-process and isolated worker process .NET Azure Functions](./dotnet-isolated-in-process-differences.md).
+
+## Prepare for migration
+
+Before you upgrade your app to version 4.x of the Functions runtime, you should do the following tasks:
+
+* Review the list of [breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x).
+* [Run the pre-upgrade validator](#run-the-pre-upgrade-validator).
+* When possible, [upgrade your local project environment to version 4.x](#upgrade-your-local-project). Fully test your app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md).
+* Upgrade your function app in Azure to the new version. If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your migrated app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#upgrade-using-slots).
+* Republish your migrated project to the upgraded function app. When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio upgrade the function app to version 4.x during deployment. This upgrade uses the same process defined in [Migrate without slots](#upgrade-without-slots).
+
+## Run the pre-upgrade validator
+
+Azure Functions provides a pre-upgrade validator to help you identify potential issues when migrating your function app to 4.x. To run the pre-upgrade validator:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your function app.
+
+1. Open the **Diagnose and solve problems** page.
+
+1. In **Function App Diagnostics**, start typing `Functions 4.x Pre-Upgrade Validator` and then choose it from the list.
+
+1. After validation completes, review the recommendations and address any issues in your app. If you need to make changes to your app, make sure to validate the changes against version 4.x of the Functions runtime, either [locally using Azure Functions Core Tools v4](#upgrade-your-local-project) or by [using a staging slot](#upgrade-using-slots).
+
+## Upgrade your local project
+
+Upgrading instructions are language dependent. If you don't see your language, choose it from the selector at the [top of the article](#top).
++
+Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).
+
+### .csproj file
+
+The following example is a .csproj project file that uses .NET Core 3.1 on version 3.x:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>netcoreapp3.1</TargetFramework>
+ <AzureFunctionsVersion>v3</AzureFunctionsVersion>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.13" />
+ </ItemGroup>
+ <ItemGroup>
+ <Reference Include="Microsoft.CSharp" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+</Project>
+```
+
+Use one of the following procedures to update this XML file to run in Functions version 4.x:
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 7](#tab/net7)
++++
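+The tab contents above come from include files not shown in this diff. As an illustrative sketch only (the SDK package version shown is an example), an updated .csproj for .NET 6 running in-process on Functions 4.x typically changes the `TargetFramework`, `AzureFunctionsVersion`, and `Microsoft.NET.Sdk.Functions` package version while leaving the rest of the file intact:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+  <PropertyGroup>
+    <TargetFramework>net6.0</TargetFramework>
+    <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+  </PropertyGroup>
+  <ItemGroup>
+    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" />
+  </ItemGroup>
+</Project>
+```
+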
+### program.cs file
+
+When migrating to run in an isolated worker process, you must add the following program.cs file to your project:
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+A program.cs file isn't required when running in-process.
+
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 7](#tab/net7)
++++
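+The tab contents above come from include files not shown in this diff. As a minimal sketch, a Program.cs for the isolated worker process model (generated templates may add more configuration) looks like the following:
+
+```csharp
+using Microsoft.Extensions.Hosting;
+
+// Build and run the Functions worker host.
+var host = new HostBuilder()
+    .ConfigureFunctionsWorkerDefaults()
+    .Build();
+
+host.Run();
+```
+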
+### local.settings.json file
+
+The local.settings.json file is only used when running locally. For information, see [Local settings file](functions-develop-local.md#local-settings-file). When migrating from running in-process to running in an isolated worker process, you need to change the `FUNCTIONS_WORKER_RUNTIME` value, as in the following example:
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 7](#tab/net7)
++++
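+For illustration, when moving to the isolated worker process model, the `FUNCTIONS_WORKER_RUNTIME` value changes from `dotnet` to `dotnet-isolated`:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
+  }
+}
+```
+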
+### Namespace changes
+
+C# functions that run in an isolated worker process use libraries in a different namespace than the libraries used when running in-process. In-process libraries are generally in the namespace `Microsoft.Azure.WebJobs.*`. Isolated worker process function apps use libraries in the namespace `Microsoft.Azure.Functions.Worker.*`. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) that follow.
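+For example, the `using` statements of a typical HTTP-triggered function change along these lines:
+
+```csharp
+// In-process model:
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+
+// Isolated worker process model:
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Http;
+```
+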
+
+### Class name changes
+
+Some key classes change names as a result of differences between in-process and isolated worker process APIs.
+
+The following table indicates key .NET classes used by Functions that could change when migrating from in-process:
+
+| .NET Core 3.1 | .NET 6 (in-process) | .NET 6 (isolated) | .NET 7 |
+| --- | --- | --- | --- |
+| `FunctionName` (attribute) | `FunctionName` (attribute) | `Function` (attribute) | `Function` (attribute) |
+| `HttpRequest` | `HttpRequest` | `HttpRequestData` | `HttpRequestData` |
+| `OkObjectResult` | `OkObjectResult` | `HttpResponseData` | `HttpResponseData` |
+
+There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
+
+### HTTP trigger template
+
+The differences between the in-process and isolated worker process models can be seen in HTTP-triggered functions. The HTTP trigger template for version 3.x (in-process) looks like the following example:
++
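+As an illustration only (the actual template content comes from an include file not shown in this diff), an in-process HTTP trigger for version 3.x typically looks like:
+
+```csharp
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Extensions.Logging;
+
+public static class HttpExample
+{
+    [FunctionName("HttpExample")]
+    public static IActionResult Run(
+        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req,
+        ILogger log)
+    {
+        log.LogInformation("C# HTTP trigger function processed a request.");
+        return new OkObjectResult("Welcome to Azure Functions!");
+    }
+}
+```
+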
+The HTTP trigger template for the migrated version looks like the following example:
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+Same as version 3.x (in-process).
+
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 7](#tab/net7)
++++
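+As an illustration only (the actual template content comes from include files not shown in this diff, and differs slightly per .NET version), an isolated worker process HTTP trigger typically looks like the following sketch, using the `Function` attribute and the `HttpRequestData`/`HttpResponseData` types:
+
+```csharp
+using System.Net;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Http;
+
+public class HttpExample
+{
+    [Function("HttpExample")]
+    public HttpResponseData Run(
+        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req)
+    {
+        // Create and return the HTTP response through the worker types.
+        var response = req.CreateResponse(HttpStatusCode.OK);
+        response.WriteString("Welcome to Azure Functions!");
+        return response;
+    }
+}
+```
+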
+To update your project to Azure Functions 4.x:
+
+1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.x.
+
+1. Update your app's [Azure Functions extensions bundle](functions-bindings-register.md#extension-bundles) to 2.x or above. For more information, see [breaking changes](#breaking-changes-between-3x-and-4x).
+
+3. If needed, move to one of the [Java versions supported on version 4.x](./functions-reference-java.md#supported-versions).
+4. Update the app's `pom.xml` file to modify the `FUNCTIONS_EXTENSION_VERSION` setting to `~4`, as in the following example:
+
+ ```xml
+ <configuration>
+ <resourceGroup>${functionResourceGroup}</resourceGroup>
+ <appName>${functionAppName}</appName>
+ <region>${functionAppRegion}</region>
+ <appSettings>
+ <property>
+ <name>WEBSITE_RUN_FROM_PACKAGE</name>
+ <value>1</value>
+ </property>
+ <property>
+ <name>FUNCTIONS_EXTENSION_VERSION</name>
+ <value>~4</value>
+ </property>
+ </appSettings>
+ </configuration>
+ ```
+3. If needed, move to one of the [Node.js versions supported on version 4.x](functions-reference-node.md#node-version).
+3. Take this opportunity to upgrade to PowerShell 7.2, which is recommended. For more information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
+3. If you're using Python 3.6, move to one of the [supported versions](functions-reference-python.md#python-version).
++
+## Breaking changes between 3.x and 4.x
+
+The following are key breaking changes to be aware of before upgrading a 3.x app to 4.x, including language-specific breaking changes. For a full list, see Azure Functions GitHub issues labeled [*Breaking Change: Approved*](https://github.com/Azure/azure-functions/issues?q=is%3Aissue+label%3A%22Breaking+Change%3A+Approved%22+is%3A%22closed+OR+open%22).
+
+If you don't see your programming language, select it from the [top of the page](#top).
+
+### Runtime
+
+- Azure Functions proxies is a legacy feature for versions 1.x through 3.x of the Azure Functions runtime. Support for Functions proxies is being returned in version 4.x so that you can successfully upgrade your function apps to the latest runtime version. As soon as possible, you should instead switch to integrating your function apps with Azure API Management. API Management lets you take advantage of a more complete set of features for defining, securing, managing, and monetizing your Functions-based APIs. For more information, see [API Management integration](functions-proxies.md#api-management-integration). For information about the pending return of proxies in version 4.x, monitor the [App Service announcements page](https://github.com/Azure/app-service-announcements/issues).
+
+- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You should instead use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
+
+- Azure Functions 4.x now enforces [minimum version requirements for extensions](functions-versions.md#minimum-extension-versions). Upgrade to the latest version of affected extensions. For non-.NET languages, [upgrade](./functions-bindings-register.md#extension-bundles) to extension bundle version 2.x or later. ([#1987](https://github.com/Azure/Azure-Functions/issues/1987))
+
+- Default and maximum timeouts are now enforced in 4.x for function apps running on Linux in a Consumption plan. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
+
+- Azure Functions 4.x uses `Azure.Identity` and `Azure.Security.KeyVault.Secrets` for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. For more information about how to configure function app settings, see the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories). ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
+
+- Function apps that share storage accounts now fail to start when their host IDs are the same. For more information, see [Host ID considerations](storage-considerations.md#host-id-considerations). ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))
++
+- Azure Functions 4.x supports .NET 6 in-process and isolated apps.
+
+- `InvalidHostServicesException` is now a fatal error. ([#2045](https://github.com/Azure/Azure-Functions/issues/2045))
+
+- `EnableEnhancedScopes` is enabled by default. ([#1954](https://github.com/Azure/Azure-Functions/issues/1954))
+
+- Remove `HttpClient` as a registered service. ([#1911](https://github.com/Azure/Azure-Functions/issues/1911))
+- Use single class loader in Java 11. ([#1997](https://github.com/Azure/Azure-Functions/issues/1997))
+
+- Stop loading worker jars in Java 8. ([#1991](https://github.com/Azure/Azure-Functions/issues/1991))
+
+- Node.js versions 10 and 12 aren't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
+
+- Output serialization in Node.js apps was updated to address previous inconsistencies. ([#2007](https://github.com/Azure/Azure-Functions/issues/2007))
+- Default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
+- Python 3.6 isn't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
+
+- Shared memory transfer is enabled by default. ([#1973](https://github.com/Azure/Azure-Functions/issues/1973))
+
+- Default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Functions versions](functions-versions.md)
azure-functions Openapi Apim Integrate Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/openapi-apim-integrate-visual-studio.md
In this tutorial, you learn how to:
The serverless function you create provides an API that lets you determine whether an emergency repair on a wind turbine is cost-effective. Because both the function app and API Management instance you create use consumption plans, your cost for completing this tutorial is minimal. > [!NOTE]
-> The OpenAPI and API Management integration featured in this article is currently in preview. This method for exposing a serverless API is only supported for [in-process](functions-dotnet-class-library.md) C# class library functions. [Isolated process](dotnet-isolated-process-guide.md) C# class library functions and all other language runtimes should instead [use Azure API Management integration from the portal](functions-openapi-definition.md).
+> The OpenAPI and API Management integration featured in this article is currently in preview. This method for exposing a serverless API is only supported for [in-process](functions-dotnet-class-library.md) C# class library functions. [Isolated worker process](dotnet-isolated-process-guide.md) C# class library functions and all other language runtimes should instead [use Azure API Management integration from the portal](functions-openapi-definition.md).
## Prerequisites
The Azure Functions project template in Visual Studio creates a project that you
| Setting | Value | Description | | | - |-- |
- | **Functions worker** | **.NET 6** | This value creates a function project that runs in-process on version 4.x of the Azure Functions runtime. OpenAPI file generation is only supported for versions 3.x and 4.x of the Functions runtime, and isolated process isn't supported. |
+ | **Functions worker** | **.NET 6** | This value creates a function project that runs in-process on version 4.x of the Azure Functions runtime. OpenAPI file generation is only supported for versions 3.x and 4.x of the Functions runtime, and isolated worker process isn't supported. |
| **Function template** | **HTTP trigger with OpenAPI** | This value creates a function triggered by an HTTP request, with the ability to generate an OpenAPI definition file. | | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | **Selected** | You can use the emulator for local development of HTTP trigger functions. Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. | | **Authorization level** | **Function** | When running in Azure, clients must provide a key when accessing the endpoint. For more information about keys and authorization, see [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). |
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/set-runtime-version.md
Title: How to target Azure Functions runtime versions description: Azure Functions supports multiple versions of the runtime. Learn how to specify the runtime version of a function app hosted in Azure. Previously updated : 10/04/2022 Last updated : 10/22/2022
The following table shows the `FUNCTIONS_EXTENSION_VERSION` values for each majo
| Major version | `FUNCTIONS_EXTENSION_VERSION` value | Additional configuration | | - | -- | - |
-| 4.x | `~4` | [On Windows, enable .NET 6](./functions-versions.md#migrating-from-3x-to-4x) |
+| 4.x | `~4` | [On Windows, enable .NET 6](./migrate-version-3-version-4.md#upgrade-your-function-app-in-azure) |
| 3.x | `~3` | | | 2.x | `~2` | | | 1.x | `~1` | |
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 09/29/2022 Last updated : 11/04/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: September 2022*
+*Last updated: November 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; |
+| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | | [Azure Cosmos DB](../../cosmos-db/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** | | [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | &#x2705; | &#x2705; | | [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; |
+| [VM Image Builder](../../virtual-machines/image-builder-overview.md) | &#x2705; | &#x2705; |
| [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; | | [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; | | [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | &#x2705; | &#x2705; |
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
This article describes the kinds of Azure Monitor alerts you can create, and hel
There are five types of alerts: - [Metric alerts](#metric-alerts)-- [Prometheus alerts](#prometheus-alerts-preview)-- [Log alerts](#log-alerts)+ - [Activity log alerts](#activity-log-alerts) - [Smart detection alerts](#smart-detection-alerts)-
+- [Prometheus alerts](#prometheus-alerts-preview) (preview)
## Choosing the right alert type This table can help you decide when to use what type of alert. For more detailed information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
Prometheus alerts are based on metric values stored in [Azure Monitor managed se
- Get an [overview of alerts](alerts-overview.md). - [Create an alert rule](alerts-log.md). - Learn more about [Smart Detection](proactive-failure-diagnostics.md).+
azure-monitor Prometheus Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/prometheus-alerts.md
Last updated 09/15/2022
-# Prometheus metric alerts in Azure Monitor
+# Prometheus alerts in Azure Monitor
Prometheus alert rules allow you to define alert conditions by using queries written in the Prometheus Query Language (PromQL), which are applied to Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for those metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it's fired and triggers the actions or notifications of your choice, as defined in the Azure action groups configured in your alert rule. > [!NOTE]
Prometheus alert rules allow you to define alert conditions, using queries which
## Create Prometheus alert rule Prometheus alert rules are created as part of a Prometheus rule group which is stored in [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details.
-## View Prometheus metric alerts
-View fired and resolved Prometheus alerts in the Azure portal with other alert types. Use the following steps to filter on only Prometheus metric alerts.
-
+## View Prometheus alerts
+View fired and resolved Prometheus alerts in the Azure portal with other alert types. Use the following steps to filter on only Prometheus alerts.
1. From the **Monitor** menu in the Azure portal, select **Alerts**. 2. If **Monitoring Service** isn't displayed as a filter option, then select **Add Filter** and add it. 3. Set the filter **Monitoring Service** to **Prometheus** to see Prometheus alerts. - 4. Click the alert name to view the details of a specific fired/resolved alert. - ## Next steps - [Create a Prometheus rule group](../essentials/prometheus-rule-groups.md).+
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
# Application Monitoring for Azure App Service and ASP.NET
-Enabling monitoring on your ASP.NET based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+Enabling monitoring on your ASP.NET based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
> [!NOTE] > Manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the auto-instrumentation instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
The following table shows the recommended [aggregation types](../essentials/metr
||| | Counter | Sum | | Asynchronous Counter | Sum |
-| Histogram | Average, Sum, Count (Max, Min for Python and Node.js only) |
+| Histogram | Min, Max, Average, Sum, and Count |
| Asynchronous Gauge | Average |
-| UpDownCounter (Python and Node.js only) | Sum |
-| Asynchronous UpDownCounter (Python and Node.js only) | Sum |
+| UpDownCounter | Sum |
+| Asynchronous UpDownCounter | Sum |
> [!CAUTION] > Aggregation types beyond what's shown in the table typically aren't meaningful.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
By default, all tables in your Log Analytics workspace are Analytics tables, and
| Table | Details| |:|:| | Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) |
-| [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. |
-| [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | Freeform Application Insights traces. |
-| [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | Logs generated by Azure Container Apps, within a Container Apps environment. |
| [ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary) | Communication Services recording summary logs. |
-| [ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | Communication Services rooms operations incoming requests logs. |
+| [ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | Communication Services Rooms incoming requests operations. |
+| [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | Application Insights Freeform traces. |
+| [AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests) | Azure Media Services HTTP request details for key or license acquisition. |
+| [AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth) | Azure Media Account Health Status. |
+| [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | Azure Container Apps logs, generated within a Container Apps environment. |
+| [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. |
> [!NOTE] > Tables created with the [Data Collector API](data-collector-api.md) don't support Basic Logs.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 11/03/2022 Last updated : 11/07/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* France Central * Germany West Central * Japan East
+* Japan West
* Korea Central * North Central US * North Europe
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 09/07/2022 Last updated : 11/07/2022 # Resource limits for Azure NetApp Files
Size: 4096 Blocks: 8 IO Block: 65536 directory
## `Maxfiles` limits <a name="maxfiles"></a>
-Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 20 million files per TiB of provisioned volume size.
+Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 21,251,126 files per TiB of provisioned volume size.
-The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 20 million. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
+The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 21,251,126. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
| Volume size (quota) | Automatic readjustment of the `maxfiles` limit | |-|-|
-| <= 1 TiB | 20 million |
-| > 1 TiB but <= 2 TiB | 40 million |
-| > 2 TiB but <= 3 TiB | 60 million |
-| > 3 TiB but <= 4 TiB | 80 million |
-| > 4 TiB | 100 million |
+| <= 1 TiB | 21,251,126 |
+| > 1 TiB but <= 2 TiB | 42,502,252 |
+| > 2 TiB but <= 3 TiB | 63,753,378 |
+| > 3 TiB but <= 4 TiB | 85,004,504 |
+| > 4 TiB | 106,255,630 |
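The table above can be sketched as a small calculation. The helper below is a hypothetical illustration (not part of the service or its tooling), assuming the limit steps at whole-TiB boundaries and caps at the > 4 TiB tier:

```shell
# Hypothetical helper (illustration only): estimate the default maxfiles limit
# for a volume from its provisioned quota in whole TiB, per the table above.
maxfiles_limit() {
  local quota_tib=$1
  local per_tib=21251126          # maxfiles added per TiB of provisioned size
  local tiers=$quota_tib
  [ "$tiers" -gt 5 ] && tiers=5   # > 4 TiB volumes cap at the 106,255,630 tier
  echo $(( tiers * per_tib ))
}

maxfiles_limit 1    # 21251126
maxfiles_limit 4    # 85004504
maxfiles_limit 10   # 106255630 (capped)
```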
>[!IMPORTANT] > If your volume has a quota of at least 4 TiB and you want to increase the quota, you must initiate [a support request](#request-limit-increase).
-For volumes with at least 4 TiB of quota, you can increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
+For volumes with at least 4 TiB of quota, you can increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
You can increase the `maxfiles` limit to 500 million if your volume quota is at least 20 TiB.
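As a rough sketch of the quota rule above, the hypothetical helper below (an illustration, not an official formula) computes the minimum volume quota for a requested `maxfiles` value, assuming 4 TiB per started block of 100 million files up to the 500 million ceiling:

```shell
# Hypothetical helper (illustration only): minimum volume quota in TiB needed to
# request a given maxfiles target, at 4 TiB per started 100 million files.
required_quota_tib() {
  local target_millions=$1   # desired maxfiles, in millions of files (max 500)
  echo $(( ( (target_millions + 99) / 100 ) * 4 ))
}

required_quota_tib 200   # 8  (matches the example above: 200 million needs 8 TiB)
required_quota_tib 150   # 8  ("any number in between" still needs 8 TiB)
required_quota_tib 500   # 20 (500 million requires at least 20 TiB)
```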
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
na Previously updated : 11/01/2022 Last updated : 11/07/2022 # Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
Ensure that you meet the following requirements about network topology and confi
* Ensure that a [supported network topology for Azure NetApp Files](azure-netapp-files-network-topologies.md) is used. * Ensure that AD DS domain controllers have network connectivity from the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes.
+ * Peered virtual network topologies with AD DS domain controllers must have peering configured correctly to support Azure NetApp Files to AD DS domain controller network connectivity.
* Network Security Groups (NSGs) and AD DS domain controller firewalls must have appropriately configured rules to support Azure NetApp Files connectivity to AD DS and DNS. * Ensure that the latency is less than 10ms RTT between Azure NetApp Files and AD DS domain controllers.
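The 10ms RTT guideline above can be spot-checked from a ping to the domain controller. The snippet below is a hypothetical sketch that parses the Linux iputils `ping` summary line (`rtt min/avg/max/mdev = …`) and compares the average against the guideline; the function name and output-format assumption are illustrative:

```shell
# Hypothetical check (illustration only): is the average RTT in a Linux
# iputils ping summary line under the 10 ms guideline?
rtt_ok() {
  local summary=$1 limit_ms=10
  local avg
  avg=$(printf '%s' "$summary" | sed -n 's|.*= [0-9.]*/\([0-9.]*\)/.*|\1|p')
  awk -v a="$avg" -v l="$limit_ms" 'BEGIN { exit !(a < l) }'
}

# e.g. feed it the last line of: ping -c 5 <domain-controller-ip>
rtt_ok "rtt min/avg/max/mdev = 1.2/2.3/4.5/0.6 ms" && echo "latency OK"
```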
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 11/03/2022 Last updated : 11/07/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
## November 2022
+* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now generally available (GA) with expanded regional coverage.
+ * [Encrypted SMB connections to Domain Controller](create-active-directory-connections.md#encrypted-smb-dc) (Preview). With this capability, you can specify whether encryption is used for communication between the SMB server and the domain controller in Active Directory connections. When enabled, only SMB3 is used for encrypted domain controller connections.
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following limits apply to [Azure role-based access control (Azure RBAC)](../
[!INCLUDE [signalr-service-limits](../../../includes/signalr-service-limits.md)]
+## Azure Spring Apps limits
+
+To learn more about the limits for Azure Spring Apps, see [Quotas and service plans for Azure Spring Apps](../../spring-apps/quotas.md).
+ ## Azure Virtual Desktop Service limits [!INCLUDE [azure-virtual-desktop-service-limits](../../../includes/azure-virtual-desktop-limits.md)]
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 10/18/2022 Last updated : 11/07/2022
-# Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)
+# Attach Azure NetApp Files datastores to Azure VMware Solution hosts
[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an enterprise-class, high-performance, metered file storage service. The service supports the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. For more information on Azure NetApp Files, see [Azure NetApp Files](../azure-netapp-files/index.yml) documentation. [Azure VMware Solution](./introduction.md) supports attaching Network File System (NFS) datastores as a persistent storage option. You can create NFS datastores with Azure NetApp Files volumes and attach them to clusters of your choice. You can also create virtual machines (VMs) for optimal cost and performance.
-> [!IMPORTANT]
-> Azure NetApp Files datastores for Azure VMware Solution hosts is currently in public preview. This version is provided without a service-level agreement and is not recommended for production workloads. Some features may not be supported or may have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- By using NFS datastores backed by Azure NetApp Files, you can expand your storage instead of scaling the clusters. You can also use Azure NetApp Files volumes to replicate data from on-premises or primary VMware environments for the secondary site. Create your Azure VMware Solution and create Azure NetApp Files NFS volumes in the virtual network connected to it using an ExpressRoute. Ensure there's connectivity from the private cloud to the NFS volumes created. Use those volumes to create NFS datastores and attach the datastores to clusters of your choice in a private cloud. As a native integration, no other permissions configured via vSphere are needed.
Before you begin the prerequisites, review the [Performance best practices](#per
1. [Deploy Azure VMware Solution](./deploy-azure-vmware-solution.md) private cloud and a dedicated virtual network connected via ExpressRoute gateway. The virtual network gateway should be configured with the Ultra performance SKU and have FastPath enabled. For more information, see [Configure networking for your VMware private cloud](tutorial-configure-networking.md) and [Network planning checklist](tutorial-network-checklist.md). 1. Create an [NFSv3 volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md) in the same virtual network created in the previous step. 1. Verify connectivity from the private cloud to Azure NetApp Files volume by pinging the attached target IP.
- 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
-
+ 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
+ `az feature register --name "ANFAvsDataStore" --namespace "Microsoft.NetApp"` `az feature show --name "ANFAvsDataStore" --namespace "Microsoft.NetApp" --query properties.state`
Before you begin the prerequisites, review the [Performance best practices](#per
1. Under the **Protocol** section, select **Azure VMware Solution Datastore** to indicate the volume is created to use as a datastore for Azure VMware Solution private cloud. 1. If you're using [export policies](../azure-netapp-files/azure-netapp-files-configure-export-policy.md) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced so if the IP isn't enabled, connectivity to datastore will be impacted.
+>[!NOTE]
+>Azure NetApp Files datastores for Azure VMware Solution are generally available. You must register Azure NetApp Files datastores for Azure VMware Solution before using them.
+ ## Supported regions Azure VMware Solution currently supports the following regions:
Azure VMware Solution currently supports the following regions:
**Brazil** : Brazil South.
-**Europe** : France Central, Germany West Central, North Europe, Switzerland West, UK South, UK West, West Europe
-
-**North America** : Canada Central, Canada East, Central US, East US, East US 2, North Central US, South Central US, West US.
+**Europe** : France Central, Germany West Central, North Europe, Sweden Central, Sweden North, Switzerland West, UK South, UK West, West Europe
-The list of supported regions will expand as the preview progresses.
+**North America** : Canada Central, Canada East, Central US, East US, East US 2, North Central US, South Central US, West US, West US 2.
## Performance best practices
To attach an Azure NetApp Files volume to your private cloud using Portal, follo
1. Under **Settings**, select **Preview features**. 1. Verify you're registered for both the `CloudSanExperience` and `AnfDatastoreExperience` features. 1. Navigate to your Azure VMware Solution.
-Under **Manage**, select **Storage (preview)**.
+Under **Manage**, select **Storage**.
1. Select **Connect Azure NetApp Files volume**. 1. In **Connect Azure NetApp Files volume**, select the **Subscription**, **NetApp account**, **Capacity pool**, and **Volume** to be attached as a datastore.
Under **Manage**, select **Storage (preview)**.
1. Verify the protocol is NFS. You'll need to verify the virtual network and subnet to ensure connectivity to the Azure VMware Solution private cloud. 1. Under **Associated cluster**, select the **Client cluster** to associate the NFS volume as a datastore 1. Under **Data store**, create a personalized name for your **Datastore name**.
- 1. When the datastore is created, you should see all of your datastores in the **Storage (preview)**.
+ 1. When the datastore is created, you should see all of your datastores on the **Storage** page.
2. You'll also notice that the NFS datastores are added in vCenter.
To attach an Azure NetApp Files volume to your private cloud using Azure CLI, fo
`az feature register --name "AnfDatastoreExperience" --namespace "Microsoft.AVS"` `az feature show --name "AnfDatastoreExperience" --namespace "Microsoft.AVS" --query properties.state` 1. Verify the VMware extension is installed. If the extension is already installed, verify you're using the latest version of the Azure CLI extension. If an older version is installed, update the extension. `az extension show --name vmware`
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **What are my options for backup and recovery?**
- Azure NetApp Files (ANF) supports [snapshots](../azure-netapp-files/azure-netapp-files-manage-snapshots.md) of datastores for quick checkpoints for near term recovery or quick clones. ANF backup lets you offload your ANF snapshots to Azure storage. This feature is available in public preview. Only for this technology are copies and stores-changed blocks relative to previously offloaded snapshots in an efficient format. This ability decreases Recovery Point Objective (RPO) and Recovery Time Objective (RTO) while lowering backup data transfer burden on the Azure VMware Solution service.
+ Azure NetApp Files supports [snapshots](../azure-netapp-files/azure-netapp-files-manage-snapshots.md) of datastores for quick checkpoints for near-term recovery or quick clones. Azure NetApp Files backup lets you offload your Azure NetApp Files snapshots to Azure storage. The backup copies and stores only the blocks changed relative to previously offloaded snapshots, in an efficient format. This ability decreases Recovery Point Objective (RPO) and Recovery Time Objective (RTO) while lowering the backup data transfer burden on the Azure VMware Solution service.
- **How do I monitor Storage Usage?**
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
The diagram shows the adjacency between private clouds and VNets in Azure, Azure
## AV36P and AV52 node sizes generally available in Azure VMware Solution
- The new node sizes in will increase memory and storage options to optimize your workloads. These gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of these new nodes allows large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
+ The new node sizes increase memory and storage options to optimize your workloads. These gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of these new nodes allows large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
**AV36P key highlights for Memory and Storage optimized Workloads:**
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
Title: Troubleshoot backup errors with Azure VMs
description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. Previously updated : 10/20/2022 Last updated : 11/07/2022
To resolve this issue:
**Resolution**: Use the same subscription for Restore of Trusted Launch Azure VMs.
+### UserErrorCrossSubscriptionRestoreInvalidTargetSubscription
+
+**Error code**: UserErrorCrossSubscriptionRestoreInvalidTargetSubscription
+
+**Error message**: Operation failed as the target subscription specified for restore is not registered to the Azure Recovery Services Resource Provider.
+
+**Recommended action**: Ensure that the target subscription is registered to the Azure Recovery Services Resource Provider before you attempt a cross-subscription restore. Creating a vault in the target subscription typically registers the subscription to the Recovery Services Resource Provider.
+ ## Backup or restore takes time If your backup takes more than 12 hours, or restore takes more than 6 hours, review [best practices](backup-azure-vms-introduction.md#best-practices), and
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md
Title: Configure Multi-user authorization using Resource Guard
description: This article explains how to configure Multi-user authorization using Resource Guard. zone_pivot_groups: backup-vaults-recovery-services-vault-backup-vault Previously updated : 09/15/2022 Last updated : 11/08/2022
Learn about various [MUA usage scenarios](./multi-user-authorization-concept.md?
The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** than the vault's. However, it should be in the **same region** as the vault. The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it.
-For the following example, create the Resource Guard in a tenant different from the vault tenant.
+**Choose a client**
+
+# [Azure portal](#tab/azure-portal)
+
+To create the Resource Guard in a tenant different from the vault tenant, follow these steps:
+ 1. In the Azure portal, go to the directory under which you want to create the Resource Guard. :::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings.":::
For the following example, create the Resource Guard in a tenant different from
Follow notifications for status and successful creation of the Resource Guard.
+# [PowerShell](#tab/powershell)
+
+Use the following command to create a resource guard:
+
+ ```azurepowershell-interactive
 + New-AzDataProtectionResourceGuard -Location "Location" -Name "ResourceGuardName" -ResourceGroupName "rgName"
+ ```
+++ ### Select operations to protect using Resource Guard
-Choose the operations you want to protect using the Resource Guard out of all supported critical operations. By default, all supported critical operations are enabled. However, you can exempt certain operations from falling under the purview of MUA using Resource Guard. The security admin can perform the following steps:
+Choose the operations you want to protect using the Resource Guard out of all supported critical operations. By default, all supported critical operations are enabled. However, you (as the security admin) can exempt certain operations from falling under the purview of MUA using Resource Guard.
+
+**Choose a client**
+
+# [Azure portal](#tab/azure-portal)
+
+To exempt operations, follow these steps:
1. In the Resource Guard created above, go to **Properties**. 2. Select **Disable** for operations that you want to exclude from being authorized using the Resource Guard.
Choose the operations you want to protect using the Resource Guard out of all su
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-properties.png" alt-text="Screenshot showing demo resource guard properties.":::
+# [PowerShell](#tab/powershell)
+
+Use the following commands to update the operations. These exclude operations from protection by the resource guard.
+
+ ```azurepowershell-interactive
+ $resourceGuard = Get-AzDataProtectionResourceGuard -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx" -ResourceGroupName "rgName" -Name "resGuardName"
+ $criticalOperations = $resourceGuard.ResourceGuardOperation.VaultCriticalOperation
+ $operationsToBeExcluded = $criticalOperations | Where-Object { $_ -match "backupSecurityPIN/action" -or $_ -match "backupInstances/delete" }
++
+ Update-AzDataProtectionResourceGuard -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx" -ResourceGroupName "rgName" -Name $resourceGuard.Name -CriticalOperationExclusionList $operationsToBeExcluded
+ ```
+
+- The first command fetches the resource guard that needs to be updated.
+- The second and third commands fetch the critical operations that you want to update.
+- The fourth command excludes some critical operations from the resource guard.
+++++ ## Assign permissions to the Backup admin on the Resource Guard to enable MUA To enable MUA on a vault, the admin of the vault must have **Reader** role on the Resource Guard or subscription containing the Resource Guard. To assign the **Reader** role on the Resource Guard:
To enable MUA on a vault, the admin of the vault must have **Reader** role on th
## Enable MUA on a Recovery Services vault
-Now that the Backup admin has the Reader role on the Resource Guard, they can easily enable multi-user authorization on vaults managed by them. The following steps are performed by the **Backup admin**.
+After the Reader role assignment on the Resource Guard is complete, you (as the **Backup admin**) can enable multi-user authorization on the vaults that you manage.
+
+**Choose a client**
+
+# [Azure portal](#tab/azure-portal)
+
+To enable MUA on the vaults, follow these steps.
1. Go to the Recovery Services vault. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
Now that the Backup admin has the Reader role on the Resource Guard, they can ea
:::image type="content" source="./media/multi-user-authorization/testvault1-enable-mua.png" alt-text="Screenshot showing how to enable Multi-user authentication.":::
+# [PowerShell](#tab/powershell)
+
+Use the following command to enable MUA on a Recovery Services vault:
+
+ ```azurepowershell-interactive
+ $token = (Get-AzAccessToken -TenantId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx").Token
 + Set-AzRecoveryServicesResourceGuardMapping -VaultId "VaultArmId" -ResourceGuardId "ResourceGuardArmId" -Token $token
+ ```
+
+- The first command fetches the access token for the resource guard tenant where the resource guard is present.
+- The second command creates a mapping between the Recovery Services vault and the resource guard.
+
+>[!NOTE]
+>The token parameter is optional and is only needed to authenticate cross tenant protected operations.
++++ ## Protected operations using MUA Once you have enabled MUA, the operations in scope will be restricted on the vault, if the Backup admin tries to perform them without having the required role (that is, Contributor role) on the Resource Guard.
The following screenshot shows an example of disabling soft delete for an MUA-en
## Disable MUA on a Recovery Services vault
-Disabling MUA is a protected operation, and hence, is protected using MUA. This means that the Backup admin must have the required Contributor role in the Resource Guard. Details on obtaining this role are described here. Following is a summary of steps to disable MUA on a vault.
+Disabling MUA is itself a protected operation, and is therefore also protected using MUA. If you (the Backup admin) want to disable MUA, you must have the required Contributor role on the Resource Guard.
+
+**Choose a client**
+
+# [Azure portal](#tab/azure-portal)
+
+To disable MUA on a vault, follow these steps:
 1. The Backup admin requests the Security admin for the **Contributor** role on the Resource Guard. They can request this by using methods approved by the organization, such as JIT procedures like [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md), or other internal tools and procedures. 1. The Security admin approves the request (if they find it worthy of being approved) and informs the Backup admin. Now the Backup admin has the 'Contributor' role on the Resource Guard. 1. The Backup admin goes to the vault > **Properties** > **Multi-user Authorization**.
Disabling MUA is a protected operation, and hence, is protected using MUA. This
:::image type="content" source="./media/multi-user-authorization/disable-mua.png" alt-text="Screenshot showing to disable multi-user authentication.":::
+# [PowerShell](#tab/powershell)
+
+Use the following command to disable MUA on a Recovery Services vault:
+
+ ```azurepowershell-interactive
+ $token = (Get-AzAccessToken -TenantId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx").Token
 + Remove-AzRecoveryServicesResourceGuardMapping -VaultId "VaultArmId" -Token $token
+ ```
+
+- The first command fetches the access token for the resource guard tenant, where the resource guard is present.
+- The second command deletes the mapping between the Recovery Services vault and the resource guard.
+
+>[!NOTE]
+>The token parameter is optional and is only needed to authenticate the cross tenant protected operations.
+++++++ ::: zone-end
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md
Title: 'About Azure Bastion configuration settings' description: Learn about the available configuration settings for Azure Bastion. + Previously updated : 08/03/2022-- Last updated : 08/15/2022 # About Bastion configuration settings
You can specify the port that you want to use to connect to your VMs. By default
Custom port values are supported for the Standard SKU only.
+## Shareable link (Preview)
+
+The Bastion **Shareable Link** feature lets users connect to a target resource using Azure Bastion without accessing the Azure portal.
+
+When a user without Azure credentials clicks a shareable link, a webpage opens that prompts the user to sign in to the target resource via RDP or SSH. Users authenticate using a username and password or a private key, depending on what you have configured in the Azure portal for that target resource. Users can connect to the same resources that you can currently connect to with Azure Bastion: VMs or virtual machine scale sets.
+
+| Method | Value | Links | Requires Standard SKU |
+| | | | |
+| Azure portal |Shareable Link | [Configure](shareable-link.md)| Yes |
+ ## Next steps For frequently asked questions, see the [Azure Bastion FAQ](bastion-faq.md).
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
+
+ Title: 'Create a shareable link for Azure Bastion'
+description: Learn how to create a shareable link to let a user connect to a target resource via Bastion without using the Azure portal.
+++ Last updated : 09/13/2022+++
+# Create a shareable link for Bastion - preview
+
+The Bastion **Shareable Link** feature lets users connect to a target resource (virtual machine or virtual machine scale set) using Azure Bastion without accessing the Azure portal. This article helps you use the Shareable Link feature to create a shareable link for an existing Azure Bastion deployment.
+
+When a user without Azure credentials clicks a shareable link, a webpage opens that prompts the user to sign in to the target resource via RDP or SSH. Users authenticate using a username and password or a private key, depending on what you have configured for the target resource. The shareable link doesn't contain any credentials; the admin must provide sign-in credentials to the user.
+
+By default, users in your org will have only read access to shared links. If a user has read access, they'll only be able to use and view shared links, but can't create or delete a shareable link. For more information, see the [Permissions](#permissions) section of this article.
+
+## Considerations
+
+* The Shareable Links feature isn't currently supported on peered VNets.
+* The Shareable Links feature isn't supported in national clouds during the preview.
+* The Standard SKU is required for this feature.
+
+## Prerequisites
+
+* Azure Bastion is deployed to your VNet. See [Tutorial - Deploy Bastion using manual settings](tutorial-create-host-portal.md) for steps.
+
+* Bastion must be configured to use the **Standard** SKU for this feature. You can update the SKU from Basic to Standard when you configure the shareable links feature.
+
+* The VNet contains the VM resource to which you want to create a shareable link.
+
+## Enable Shareable Link feature
+
+Before you can create a shareable link to a VM, you must first enable the feature.
+
+1. In the Azure portal, go to your bastion resource.
+
+1. On your **Bastion** page, in the left pane, click **Configuration**.
+
+ :::image type="content" source="./media/shareable-link/configuration-settings.png" alt-text="Screenshot of Configuration settings with shareable link selected." lightbox="./media/shareable-link/configuration-settings.png":::
+
+1. On the **Configuration** page, for **Tier**, select **Standard** if it isn't already selected. This feature requires the **Standard SKU**.
+
+1. Select **Shareable Link** from the listed features to enable the Shareable Link feature.
+
+1. Verify that you've selected the settings that you want, then click **Apply**.
+
+1. Bastion will immediately begin updating the settings for your bastion host. Updates will take about 10 minutes.
+
+## Create shareable links
+
+In this section, you specify each resource for which you want to create a shareable link.
+
+1. In the Azure portal, go to your bastion resource.
+
+1. On your bastion page, in the left pane, click **Shareable links**. Click **+ Add** to open the **Create shareable link** page.
+
+ :::image type="content" source="./media/shareable-link/add.png" alt-text="Screenshot shareable links page with + add." lightbox="./media/shareable-link/add.png":::
+
+1. On the **Create shareable link** page, select the resources for which you want to create a shareable link. You can select specific resources, or you can select all. A separate shareable link will be created for each selected resource. Click **Apply** to create links.
+
+ :::image type="content" source="./media/shareable-link/select-vm.png" alt-text="Screenshot of shareable links page to create a shareable link." lightbox="./media/shareable-link/select-vm.png":::
+
+1. Once the links are created, you can view them on the **Shareable links** page. The following example shows links for multiple resources. You can see that each resource has a separate link and the link status is **Active**. To share a link, copy it, then send it to the user. The link doesn't contain authentication credentials.
+
+ :::image type="content" source="./media/shareable-link/copy-link.png" alt-text="Screenshot of shareable links page to show all available resource links." lightbox="./media/shareable-link/copy-link.png":::
+
+## Connect to a VM
+
+1. After receiving the link, the user opens the link in their browser.
+
+1. In the left corner, the user can select whether to see text and images copied to the clipboard. The user inputs the required information, then clicks **Login** to connect. A shared link doesn't contain authentication credentials. The admin must provide sign-in credentials to the user. Custom port and protocols are supported.
+
+ :::image type="content" source="./media/shareable-link/login.png" alt-text="Screenshot of Sign-in to bastion using the shareable link in the browser." lightbox="./media/shareable-link/login.png":::
+
+> [!NOTE]
+> If a link no longer opens, someone in your organization has deleted the target resource. You'll still see the shared link in your list, but it will no longer connect to the target resource and will return a connection error. You can delete the shared link from your list, or keep it for auditing purposes.
+>
+
+## Delete a shareable link
+
+1. In the Azure portal, go to your **Bastion resource -> Shareable Links**.
+
+1. On the **Shareable Links** page, select the resource link that you want to delete, then click **Delete**.
+
+ :::image type="content" source="./media/shareable-link/delete.png" alt-text="Screenshot of selecting link to delete." lightbox="./media/shareable-link/delete.png":::
+
+## Permissions
+
+Permissions to the Shareable Link feature are configured using Access control (IAM). By default, users in your org will have only read access to shared links. If a user has read access, they'll only be able to use and view shared links, but can't create or delete a shared link.
+
+To give someone permissions to create or delete a shared link, use the following steps:
+
+1. In the Azure portal, go to the Bastion host.
+1. Go to the **Access control (IAM)** page.
+1. In the Microsoft.Network/bastionHosts section, configure the following permissions:
+
+ * Other: Creates shareable URLs for the VMs under a bastion and returns the URLs.
+ * Other: Deletes shareable URLs for the provided VMs under a bastion.
+ * Other: Deletes shareable URLs for the provided tokens under a bastion.
+
+ These correspond to the following resource provider operations:
+
+ * Microsoft.Network/bastionHosts/createShareableLinks/action
+ * Microsoft.Network/bastionHosts/deleteShareableLinks/action
+ * Microsoft.Network/bastionHosts/deleteShareableLinksByToken/action
+ * Microsoft.Network/bastionHosts/getShareableLinks/action - If this isn't enabled, the user won't be able to see a shareable link.
+
+## Next steps
+
+* For additional features, see [Bastion features and configuration settings](configuration-settings.md).
+* For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
cdn Cdn Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-billing.md
If you are using Azure Blob storage as the origin for your content, you also inc
> [!NOTE]
> Starting October 2019, if you are using Azure CDN from Microsoft, the cost of data transfer from origins hosted in Azure to CDN PoPs is free of charge. Azure CDN from Verizon and Azure CDN from Akamai are subject to the rates described below.
-For more information about Azure Storage billing, see [Understanding Azure Storage Billing ΓÇô Bandwidth, Transactions, and Capacity](https://blogs.msdn.microsoft.com/windowsazurestorage/2010/07/08/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity/).
+For more information about Azure Storage billing, see [Plan and manage costs for Azure Storage](../storage/common/storage-plan-manage-costs.md).
If you are using *hosted service delivery*, you will incur charges as follows:
If you use one of the following Azure services as your CDN origin, you will not
- Azure Cache for Redis

## How do I manage my costs most effectively?
-Set the longest TTL possible on your content.
+Set the longest TTL possible on your content.
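One common way to control TTL is the `Cache-Control` response header set by your origin; as a minimal sketch (the seven-day duration is an illustrative choice, not a product recommendation):

```python
# Sketch: compute a Cache-Control header value for a long TTL.
# The seven-day duration below is illustrative, not a recommended default.
from datetime import timedelta

def cache_control_for(ttl: timedelta) -> str:
    """Build a Cache-Control header value with max-age in seconds."""
    return f"public, max-age={int(ttl.total_seconds())}"

print(cache_control_for(timedelta(days=7)))  # public, max-age=604800
```

A longer `max-age` lets CDN PoPs serve cached content longer before returning to the origin, reducing billable origin egress.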
cognitive-services Streaming Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/streaming-inference.md
A sample request:
{ "variables": [ {
- "variableName": "Variable_1",
+ "variable": "Variable_1",
"timestamps": [ "2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z",
A sample request:
] }, {
- "variableName": "Variable_2",
+ "variable": "Variable_2",
"timestamps": [ "2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z",
A sample request:
] }, {
- "variableName": "Variable_3",
+ "variable": "Variable_3",
"timestamps": [ "2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z",
cognitive-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-detection.md
Try out the capabilities of face detection quickly and easily using Vision Studi
## Face ID
-The face ID is a unique identifier string for each detected face in an image. You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+The face ID is a unique identifier string for each detected face in an image. Note that Face ID requires limited access approval by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
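As a sketch, the query string for a Detect call that requests a face ID can be assembled like this; the endpoint is a placeholder, and no request is actually sent:

```python
# Sketch: assemble the URL for a Face - Detect call that requests a
# face ID. The endpoint below is a placeholder; the request isn't sent.
from urllib.parse import urlencode

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder

params = {
    "returnFaceId": "true",          # ask the service to return a face ID
    "detectionModel": "detection_03",
}
url = f"{ENDPOINT}/face/v1.0/detect?{urlencode(params)}"
print(url)
```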
## Face landmarks
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Learn what's new in the service. These items may be release notes, videos, blog
### Computer Vision Image Analysis 4.0 public preview
-Version 4.0 of Computer Vision has been released in public preview. The new API includes image captioning, image tagging, object detection people detection, and Read OCR functionality, available in the same Analyze Image operation. The OCR is optimized for general, non-document images in a performance-enhanced synchronous API that makes it easier to embed OCR-powered experiences in your workflows.
+Image Analysis 4.0 has been released in public preview. The new API includes image captioning, image tagging, object detection, smart crops, people detection, and Read OCR functionality, all available through one Analyze Image operation. The OCR is optimized for general, non-document images in a performance-enhanced synchronous API that makes it easier to embed OCR-powered experiences in your workflows.
## September 2022
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/quickstart.md
Publishing your model makes it available for use with the Translator API. A proj
1. Developers should use the `Category ID` when making translation requests with Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl). More information about the Translator Text API can be found on the [API Reference](../reference/v3-0-reference.md) webpage.
-1. Business users may want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslator/releases/tag/V2.9.4).
+1. Business users may want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
## Next steps
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
The following table lists the base URLs for Azure sovereign cloud endpoints:
|--|--| |Azure portal | <ul><li>[Azure Government Portal](https://portal.azure.us/)</li></ul>| | Available regions</br></br>The region-identifier is a required header when using Translator for the government cloud. | <ul><li>`usgovarizona` </li><li> `usgovvirginia`</li></ul>|
-|Available pricing tiers|<ul><li>Free (F0) and Standard (S0). See [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/)</li></ul>|
-|Supported Features | <ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li><li>[Document Translation](document-translation/overview.md)</li><li>[Custom Translator](custom-translator/overview.md)</li></ul>|
+|Available pricing tiers|<ul><li>Free (F0) and Standard (S1). See [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/)</li></ul>|
+|Supported Features | <ul><li>[Text Translation](reference/v3-0-reference.md)</li><li>[Document Translation](document-translation/overview.md)</li><li>[Custom Translator](custom-translator/overview.md)</li></ul>|
|Supported Languages| <ul><li>[Translator language support](language-support.md)</li></ul>| <!-- markdownlint-disable MD036 -->
curl -X POST "https://api.cognitive.microsofttranslator.us/translate?api-version
``` > [!div class="nextstepaction"]
-> [Azure Government: Translator text reference](reference/rest-api-guide.md)
+> [Azure Government: Translator text reference](/azure/azure-government/documentation-government-cognitiveservices#translator)
### [Azure China 21 Vianet](#tab/china)
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/use-containers.md
The following table describes the minimum and recommended specifications for the
| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS| |||-|--|--|
-| **1 document/request** | 4 core, 10GB memory | 6 core, 12GB memory |15 | 30|
+| **1 document/request** | 4 core, 12GB memory | 6 core, 12GB memory |15 | 30|
| **10 documents/request** | 6 core, 16GB memory | 8 core, 20GB memory |15 | 30|

CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
json
{ "taskName": "analyze 1", "kind": "Healthcare",
+ "parameters":
+ {
+ "modelVersion": "2022-08-15-preview"
+ }
} ] }
json
## Docker container
-The docker container supports English language, model version 03-01-2022.
+The docker container supports English language, model version 2022-03-01.
Additional languages are supported when using a docker container to deploy the API: Spanish, French, German, Italian, Portuguese, and Hebrew. This functionality is currently in preview, model version 2022-08-15-preview. Full details for deploying the service in a container can be found [here](../text-analytics-for-health/how-to/use-containers.md).
In order to download the new container images from the Microsoft public containe
For English, Spanish, Italian, French, German, and Portuguese:

```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/latin
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latin
```

For Hebrew:

```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/semitic
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:semitic
```
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
Previously updated : 6/30/2021 Last updated : 11/07/2022 recommendations: false keywords:
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
#### Example request ```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-06-01-preview\
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-06-01-preview\
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d "{
curl -X DELETE https://example_resource_name.openai.azure.com/openai/deployments
## Next steps
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/what-are-cognitive-services.md
You can create a Cognitive Services resource with hands-on quickstarts using any
* [Azure portal](cognitive-services-apis-create-account.md?tabs=multiservice%2Cwindows "Azure portal") * [Azure CLI](cognitive-services-apis-create-account-cli.md?tabs=windows "Azure CLI")
-* [Azure SDK client libraries](cognitive-services-apis-create-account-cli.md?tabs=windows "cognitive-services-apis-create-account-client-library?pivots=programming-language-csharp")
+* [Azure SDK client libraries](cognitive-services-apis-create-account-client-library.md?tabs=windows "cognitive-services-apis-create-account-client-library?pivots=programming-language-csharp")
* [Azure Resource Manager (ARM template)](./create-account-resource-manager-template.md?tabs=portal "Azure Resource Manager (ARM template)") ## Use Cognitive Services in different development environments
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Note: USA mixed rates to `+1-425` is $0.013. Refer to the following link for det
## Call Recording
-Azure Communication Services allows customers to record PSTN, WebRTC, Conference, SIP Interface calls. Currently Call Recording supports mixed audio+video MP4 and mixed audio-only MP3/WAV output formats. Call Recording SDKs are available for Java and C#. Refer to [this page to learn more](../quickstarts/voice-video-calling/call-recording-sample.md).
+Azure Communication Services allows developers to record PSTN, WebRTC, Conference, or SIP calls. Call Recording supports mixed video MP4, mixed audio MP3/WAV, and unmixed audio WAV output formats. Call Recording SDKs are available for Java and C#. To learn more, see the Call Recording [concepts](./voice-video-calling/call-recording.md) and [quickstart](../quickstarts/voice-video-calling/get-started-call-recording.md).
### Price
-You're charged $0.01/min for mixed audio+video format and $0.002/min for mixed audio-only.
+- Mixed video (audio+video): $0.01/min
+- Mixed audio: $0.002/min
+- Unmixed audio: $0.0012/participant/min
-### Pricing example: Record a call in a mixed audio+video format
+
+### Pricing example: Record a video call
Alice made a group call with her colleagues, Bob and Charlie. -- The call lasts a total of 60 minutes. And recording was active during 60 minutes.
+- The call lasts a total of 60 minutes and recording was active during 60 minutes.
- Bob stayed in a call for 30 minutes and Alice and Charlie for 60 minutes. **Cost calculations**-- You'll be charged the length of the meeting. (Length of the meeting is the timeline between user starts a recording and either explicitly stops or when there's no one left in a meeting).
+- You'll be charged for the length of the meeting. (Length of the meeting is the timeline between user starts a recording and either explicitly stops or when there's no one left in a meeting).
- 60 minutes x $0.01 per recording per minute = $0.6
-### Pricing example: Record a call in a mixed audio+only format
+### Pricing example: Record an audio call in a mixed format
Alice starts a call with Jane. - The call lasts a total of 60 minutes. The recording lasted for 45 minutes. **Cost calculations**-- You'll be charged the length of the recording.
+- You'll be charged for the length of the recording.
- 45 minutes x $0.002 per recording per minute = $0.09
+### Pricing example: Record an audio call in an unmixed format
+
+Bob starts a call with his financial advisor, Charlie.
+
+- The call lasts a total of 60 minutes. The recording lasted for 50 minutes.
+
+**Cost calculations**
+- You'll be charged for the length of the recording per participant.
+- 50 minutes x $0.0012 x 2 per recording per participant per minute = $0.12
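The worked examples above can be reproduced with a small helper, using the rates quoted in this section:

```python
# Reproduce the worked pricing examples with the rates quoted above.
MIXED_VIDEO = 0.01      # $/min (audio+video)
MIXED_AUDIO = 0.002     # $/min
UNMIXED_AUDIO = 0.0012  # $/participant/min

def recording_cost(minutes, rate, participants=1):
    """Cost of a recording; participants only matters for unmixed audio."""
    return round(minutes * rate * participants, 4)

print(recording_cost(60, MIXED_VIDEO))       # 0.6  (video call example)
print(recording_cost(45, MIXED_AUDIO))       # 0.09 (mixed audio example)
print(recording_cost(50, UNMIXED_AUDIO, 2))  # 0.12 (unmixed audio example)
```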
+
## Chat

With Communication Services you can enhance your application with the ability to send and receive chat messages between two or more users. Chat SDKs are available for JavaScript, .NET, Python, and Java. Refer to [this page to learn about SDKs](./sdk-options.md)
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. > Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows for voice and PSTN channels. The SDKs, available for .NET and Java, uses an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, etc.) to steer and control calls based on your business logic.
+Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows and call recording for voice and PSTN channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, etc.) to steer and control calls based on your business logic.
> [!NOTE]
> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user, or adding them to a call using Call Automation, aren't supported.
Some of the common use cases that can be built using Call Automation include:
- Integrate your communication applications with Contact Centers and your private telephony networks using Direct Routing. - Protect your customer's identity by building number masking services to connect buyers to sellers or users to partner vendors on your platform. - Increase engagement by building automated customer outreach programs for marketing and customer service.
+- Analyze your unmixed audio recordings in a post-call process for quality assurance purposes.
ACS Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture below. You can answer inbound calls or make outbound calls. Execute actions like playing a welcome message, connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
The following list presents the set of features that are currently available in
| Query scenarios | Get the call state | ✔️ | ✔️ | | | Get a participant in a call | ✔️ | ✔️ | | | List all participants in a call | ✔️ | ✔️ |
+| Call Recording | Start/pause/resume/stop recording | ✔️ | ✔️ |
*Transfer of VoIP call to a phone number is currently not supported.
These actions can be performed on the calls that are answered or placed using Ca
**Transfer** – When your application answers a call or places an outbound call to an endpoint, that endpoint can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application's ability to control the call using the Call Automation SDKs.
+**Record** - You decide when to start/pause/resume/stop recording based on your application business logic, or you can grant control to the end user to trigger those actions. To learn more, view our [concepts](./call-recording.md) and [quickstart](../../quickstarts/voice-video-calling/get-started-call-recording.md).
+ **Hang-up** – When your application has answered a one-to-one call, the hang-up action will remove the call leg and terminate the call with the other endpoint. If there are more than two participants in the call (group call), performing a 'hang-up' action will remove your application's endpoint from the group call.

**Terminate** – Whether your application has answered a one-to-one or group call, or placed an outbound call with one or more participants, this action will remove all participants and end the call. This operation is triggered by setting the `forEveryOne` property to true in the Hang-Up call action.
The Call Automation events are sent to the web hook callback URI specified when
## Next Steps > [!div class="nextstepaction"]
-> [Get started with Call Automation](./../../quickstarts/voice-video-calling/Callflows-for-customer-interactions.md)
+> [Get started with Call Automation](./../../quickstarts/voice-video-calling/Callflows-for-customer-interactions.md)
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
"documentId": string, // Document id for retrieving from storage "index": int, // Index providing ordering for this chunk in the entire recording "endReason": string, // Reason for chunk ending: "SessionEnded",ΓÇ»"ChunkMaximumSizeExceededΓÇ¥, etc.
- "metadataLocation": <string>, // url of the metadata for this chunk
- "contentLocation": <string> // url of the mp4, mp3, or wav for this chunk
+ "metadataLocation": <string>, // url of the metadata for this chunk
+ "contentLocation": <string>, // url of the mp4, mp3, or wav for this chunk
+ "deleteLocation": <string> // url of the mp4, mp3, or wav to delete this chunk
} ] },
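A sketch of consuming this notification, assuming the chunk list arrives under `recordingStorageInfo.recordingChunks` as in the schema fragment above; the event payload below is a minimal illustrative stand-in, not real data:

```python
# Sketch: pull the content and delete URLs out of a
# RecordingFileStatusUpdated event payload. The payload below is a
# minimal illustrative stand-in with placeholder URLs.
def chunk_locations(event_data):
    chunks = event_data["recordingStorageInfo"]["recordingChunks"]
    return [(c["contentLocation"], c.get("deleteLocation")) for c in chunks]

event_data = {  # illustrative payload only
    "recordingStorageInfo": {
        "recordingChunks": [
            {"documentId": "doc-1", "index": 0, "endReason": "SessionEnded",
             "metadataLocation": "https://example.invalid/meta",
             "contentLocation": "https://example.invalid/content",
             "deleteLocation": "https://example.invalid/delete"}
        ]
    }
}
print(chunk_locations(event_data))
```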
confidential-computing Guest Attestation Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/guest-attestation-example.md
The [*guest attestation*](guest-attestation-confidential-vms.md) feature helps you to confirm that a confidential VM runs on a hardware-based trusted execution environment (TEE) with security features enabled for isolation and integrity.
-Sample applications for use with the guest attestation APIs are [available on GitHub](https://github.com/Azure/confidential-computing-cvm-guest-attestation) for [Linux](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-linux-app) and [Windows](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-windows-app).
+Sample applications for use with the guest attestation APIs are [available on GitHub](https://github.com/Azure/confidential-computing-cvm-guest-attestation).
Depending on your [type of scenario](guest-attestation-confidential-vms.md#scenarios), you can reuse the sample code in your client program or workload code.
To use a sample application in C++ for use with the guest attestation APIs, foll
1. Sign in to your VM.
-1. Clone the [sample Linux application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-linux-app).
+1. Clone the sample Linux application.
1. Install the `build-essential` package. This package installs everything required for compiling the sample application.
To use a sample application in C++ for use with the guest attestation APIs, foll
#### [Windows](#tab/windows) 1. Install Visual Studio with the [**Desktop development with C++** workload](/cpp/build/vscpp-step-0-installation).
-1. Clone the [sample Windows application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-windows-app).
+1. Clone the sample Windows application.
1. Build your project. From the **Build** menu, select **Build Solution**. 1. After the build succeeds, go to the `Release` build folder. 1. Run the application by running the `AttestationClientApp.exe`.
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
These features include:
|[Azure Monitor alerts](alerts.md) | Create and manage alerts to notify you of events and conditions based on metric and log data.| >[!NOTE]
-> While not a built-in feature, [Azure Monitor's Application Insights](../azure-monitor/app/app-insights-overview.md) is a powerful tool to monitor your web and background applications. Although Container Apps doesn't support the Application Insights auto-instrumentation agent, you can instrument your application code using Application Insights SDKs.
+> While not a built-in feature, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) is a powerful tool to monitor your web and background applications. Although Container Apps doesn't support the Application Insights auto-instrumentation agent, you can instrument your application code using Application Insights SDKs.
## Application lifecycle observability
cosmos-db Access Secrets From Keyvault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-secrets-from-keyvault.md
Title: Use Key Vault to store and access Azure Cosmos DB keys
-description: Use Azure Key Vault to store and access Azure Cosmos DB connection string, keys, endpoints.
+ Title: |
+ Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault
+description: |
+ Use Azure Key Vault to store and access Azure Cosmos DB connection string, keys, and endpoints.
+ ms.devlang: csharp Previously updated : 06/01/2022- Last updated : 11/07/2022
-# Secure Azure Cosmos DB credentials using Azure Key Vault
+# Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault
+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
->[!IMPORTANT]
-> The recommended solution to access Azure Cosmos DB is to use a [system-assigned managed identity](managed-identity-based-authentication.md). If your service cannot take advantage of managed identities then use the [cert based solution](certificate-based-authentication.md). If both the managed identity solution and cert based solution do not meet your needs, please use the key vault solution below.
+> [!IMPORTANT]
+> It's recommended that you access Azure Cosmos DB by using a [system-assigned managed identity](managed-identity-based-authentication.md). If your service can't take advantage of managed identities, then use [certificate-based authentication](certificate-based-authentication.md). If neither the managed identity solution nor the certificate-based solution meets your needs, use the Azure Key Vault solution in this article.
+
+If you're using Azure Cosmos DB as your database, you connect to databases, containers, and items by using an SDK, the API endpoint, and either the primary or secondary key.
+
+It's not a good practice to store the endpoint URI and sensitive read-write keys directly within application code or configuration file. Ideally, this data is read from environment variables within the host. In Azure App Service, [app settings](/azure/app-service/configure-common#configure-app-settings) allow you to inject runtime credentials for your Azure Cosmos DB account without the need for developers to store these credentials in an insecure clear text manner.
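As a sketch of the environment-variable approach described above (the names `COSMOS_ENDPOINT` and `COSMOS_KEY` are illustrative choices, not names fixed by Azure Cosmos DB):

```python
# Sketch: read Azure Cosmos DB credentials from environment variables
# instead of hard-coding them. The variable names are illustrative.
import os

def read_cosmos_credentials():
    endpoint = os.environ.get("COSMOS_ENDPOINT")
    key = os.environ.get("COSMOS_KEY")
    if not endpoint or not key:
        raise RuntimeError("COSMOS_ENDPOINT and COSMOS_KEY must be set")
    return {"endpoint": endpoint, "key": key}
```

In Azure App Service, these variables would be populated from app settings at runtime rather than stored in source control.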
+
+Azure Key Vault takes this best practice further by allowing you to store these credentials securely while giving services like Azure App Service managed access to the credentials. Azure App Service securely reads your credentials from Azure Key Vault and injects them into your running application.
+
+With this best practice, developers can store the credentials for tools like the [Azure Cosmos DB emulator](local-emulator.md) or [Try Azure Cosmos DB free](try-free.md) during development. Then, the operations team can ensure that the correct production settings are injected at runtime.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Create an Azure Key Vault instance
+> - Add Azure Cosmos DB credentials as secrets to the key vault
+> - Create and register an Azure App Service resource and grant "read key" permissions
+> - Inject key vault secrets into the App Service resource
+>
+
+> [!NOTE]
+> This tutorial and the sample application use an Azure Cosmos DB for NoSQL account. You can perform many of the same steps using other APIs.
+
+## Prerequisites
+
+- An existing Azure Cosmos DB for NoSQL account.
+ - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
+- GitHub account.
+
+## Before you begin: Get Azure Cosmos DB credentials
+
+Before you start, you'll get the credentials for your existing account.
+
+1. Navigate to the [Azure portal](https://portal.azure.com/) page for the existing Azure Cosmos DB for NoSQL account.
+
+1. From the Azure Cosmos DB for NoSQL account page, select the **Keys** navigation menu option.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/cosmos-keys-option.png" lightbox="media/access-secrets-from-keyvault/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos DB SQL API account page. The Keys option is highlighted in the navigation menu.":::
+
+1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values later in this tutorial.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/cosmos-endpoint-key-credentials.png" lightbox="media/access-secrets-from-keyvault/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos DB SQL API account.":::
+
+## Create an Azure Key Vault resource
+
+First, create a new key vault to store your API for NoSQL credentials.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Select **Create a resource > Security > Key Vault**.
+
+1. On the **Create key vault** page, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Subscription** | Select the Azure subscription that you wish to use for this key vault. |
+ | **Resource group** | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
+ | **Key vault name** | Enter a globally unique name for your key vault. |
+ | **Region** | Select a geographic location to host your key vault. Use the location that is closest to your users to give them the fastest access. |
+ | **Pricing tier** | Select *Standard*. |
+
+1. Leave the remaining settings to their default values.
+
+1. Select **Review + create**.
+
+1. Review the settings you provided, and then select **Create**. It takes a few minutes to create the key vault. Wait for the portal page to display **Your deployment is complete** before moving on.
+
+## Add Azure Cosmos DB access keys to the Key Vault
+
+Now, store your Azure Cosmos DB credentials as secrets in the key vault.
+
+1. Select **Go to resource** to go to the Azure Key Vault resource page.
+
+1. From the Azure Key Vault resource page, select the **Secrets** navigation menu option.
+
+1. Select **Generate/Import** from the menu.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/create-new-secret.png" alt-text="Screenshot of the Generate/Import option in a key vault menu.":::
+
+1. On the **Create a secret** page, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Upload options** | *Manual* |
+ | **Name** | *cosmos-endpoint* |
+ | **Secret value** | Enter the **URI** you copied earlier in this tutorial. |
+
+ :::image type="content" source="media/access-secrets-from-keyvault/create-endpoint-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal with details for an URI secret.":::
+
+1. Select **Create** to create the new **cosmos-endpoint** secret.
+
+1. Select **Generate/Import** from the menu again. On the **Create a secret** page, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Upload options** | *Manual* |
+ | **Name** | *cosmos-readwrite-key* |
+ | **Secret value** | Enter the **PRIMARY KEY** you copied earlier in this tutorial. |
+
+ :::image type="content" source="media/access-secrets-from-keyvault/create-key-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal with details for a PRIMARY KEY secret.":::
+
+1. Select **Create** to create the new **cosmos-readwrite-key** secret.
+
+1. After the secrets are created, view them in the list of secrets within the **Secrets** page.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/view-secrets-list.png" alt-text="Screenshot of the list of secrets for a key vault.":::
+
+1. Select each secret, select the latest version, and then copy the **Secret Identifier**. You'll use the identifier for the **cosmos-endpoint** and **cosmos-readwrite-key** secrets later in this tutorial.
+
+ > [!TIP]
+ > The secret identifier will be in this format `https://<key-vault-name>.vault.azure.net/secrets/<secret-name>/<version-id>`. For example, if the name of the key vault is **msdocs-key-vault**, the name of the secret is **cosmos-readwrite-key**, and the version is **83b995e363d947999ac6cf487ae0e12e**, then the secret identifier would be `https://msdocs-key-vault.vault.azure.net/secrets/cosmos-readwrite-key/83b995e363d947999ac6cf487ae0e12e`.
+ >
+ > :::image type="content" source="media/access-secrets-from-keyvault/view-secret-identifier.png" alt-text="Screenshot of a secret identifier for a key vault secret named cosmos-readwrite-key.":::
+ >
+
+## Create and register an Azure Web App with Azure Key Vault
+
+In this section, you create a new Azure Web App, deploy a sample application, and then register the web app's managed identity with Azure Key Vault.
+
+1. Create a new GitHub repository using the [cosmos-db-nosql-dotnet-sample-web-environment-variables template](https://github.com/azure-samples/cosmos-db-nosql-dotnet-sample-web-environment-variables/generate).
+
+1. In the Azure portal, select **Create a resource > Web > Web App**.
+
+1. On the **Basics** tab of the **Create Web App** page, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Subscription** | Select the Azure subscription that you wish to use for this web app. |
+ | **Resource group** | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
+ | **Name** | Enter a globally unique name for your web app. |
+ | **Publish** | Select *Code*. |
+ | **Runtime stack** | Select *.NET 6 (LTS)*. |
+ | **Operating System** | Select *Windows*. |
+ | **Region** | Select a geographic location to host your web app. Use the location that is closest to your users to give them the fastest access to the data. |
+
+1. Leave the remaining settings at their default values.
+
+1. Select **Next: Deployment**.
+
+1. On the **Deployment** tab, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Continuous deployment** | Select *Enable*. |
+ | **GitHub account** | Select *Authorize*. Follow the GitHub account authorization prompts to grant Azure permission to read your newly created GitHub repository. |
+ | **Organization** | Select the organization for your new GitHub repository. |
+ | **Repository** | Select the name of your new GitHub repository. |
+ | **Branch** | Select *main*. |
+
+1. Select **Review + create**.
+
+1. Review the settings you provided, and then select **Create**. It takes a few minutes to create the web app. Wait for the portal page to display **Your deployment is complete** before moving on.
+
+1. You may need to wait a few extra minutes for the web application to be initially deployed to the web app. From the Azure Web App resource page, select **Browse** to see the default state of the app.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/sample-web-app-empty.png" lightbox="media/access-secrets-from-keyvault/sample-web-app-empty.png" alt-text="Screenshot of the web application in its default state without credentials.":::
+
+1. Select the **Identity** navigation menu option.
-When using Azure Cosmos DB, you can access the database, collections, documents by using the endpoint and the key within the app's configuration file. However, it's not safe to put keys and URL directly in the application code because they're available in clear text format to all the users. You want to make sure that the endpoint and keys are available but through a secured mechanism. This scenario is where Azure Key Vault can help you to securely store and manage application secrets.
+1. On the **Identity** page, select **On** for **System-assigned** managed identity, and then select **Save**.
-The following steps are required to store and read Azure Cosmos DB access keys from Key Vault:
+ :::image type="content" source="media/access-secrets-from-keyvault/enable-managed-identity.png" alt-text="Screenshot of system-assigned managed identity being enabled from the Identity page.":::
-* Create a Key Vault
-* Add Azure Cosmos DB access keys to the Key Vault
-* Create an Azure web application
-* Register the application & grant permissions to read the Key Vault
+## Inject Azure Key Vault secrets as Azure Web App app settings
+Finally, inject the secrets stored in your key vault as app settings within the web app. The app settings will, in turn, inject the credentials into the application at runtime without storing the credentials in clear text.
-## Create a Key Vault
+1. Return to the key vault page in the Azure portal. Select **Access policies** from the navigation menu.
-1. Sign in to [Azure portal](https://portal.azure.com/).
-2. Select **Create a resource > Security > Key Vault**.
-3. On the **Create key vault** section provide the following information:
- * **Name:** Provide a unique name for your Key Vault.
- * **Subscription:** Choose the subscription that you'll use.
- * Within **Resource Group**, choose **Create new** and enter a resource group name.
- * In the Location pull-down menu, choose a location.
- * Leave other options to their defaults.
-4. After providing the information above, select **Create**.
+1. On the **Access policies** page, select **Create** from the menu.
-## Add Azure Cosmos DB access keys to the Key Vault.
-1. Navigate to the Key Vault you created in the previous step, open the **Secrets** tab.
-2. Select **+Generate/Import**,
+ :::image type="content" source="media/access-secrets-from-keyvault/create-access-policy.png" alt-text="Screenshot of the Create option in the Access policies menu.":::
- * Select **Manual** for **Upload options**.
- * Provide a **Name** for your secret
- * Provide the connection string of your Azure Cosmos DB account into the **Value** field. And then select **Create**.
+1. On the **Permissions** tab of the **Create an access policy** page, select the **Get** option in the **Secret permissions** section. Select **Next**.
- :::image type="content" source="./media/access-secrets-from-keyvault/create-a-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal.":::
+ :::image type="content" source="media/access-secrets-from-keyvault/get-secrets-permission.png" alt-text="Screenshot of the Get permission enabled for Secret permissions.":::
-4. After the secret is created, open it and copy the **Secret Identifier that is in the following format. You'll use this identifier in the next section.
+1. On the **Principal** tab, select the name of the web app you created earlier in this tutorial. Select **Next**.
- `https://<Key_Vault_Name>.vault.azure.net/secrets/<Secret _Name>/<ID>`
+ :::image type="content" source="media/access-secrets-from-keyvault/assign-principal.png" alt-text="Screenshot of a web app managed identity assigned to a permission.":::
-## Create an Azure web application
+ > [!NOTE]
+ > In this example screenshot, the web app is named **msdocs-dotnet-web**.
-1. Create an Azure web application or you can download the app from the [GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/Demo/keyvaultdemo). It's a simple MVC application.
+1. Select **Next** again to skip the **Application** tab. On the **Review + create** tab, review the settings you provided, and then select **Create**.
-2. Unzip the downloaded application and open the **HomeController.cs** file. Update the secret ID in the following line:
+1. Return to the web app page in the Azure portal. Select **Configuration** from the navigation menu.
- `var secret = await keyVaultClient.GetSecretAsync("<Your Key VaultΓÇÖs secret identifier>")`
+1. On the **Configuration** page, select **New application setting**. In the **Add/Edit application setting** dialog, enter the following information:
-3. **Save** the file, **Build** the solution.
-4. Next deploy the application to Azure. Open the context menu for the project and choose **publish**. Create a new app service profile (you can name the app WebAppKeyVault1) and select **Publish**.
+ | Setting | Description |
+ | | |
+ | **Name** | `CREDENTIALS__ENDPOINT` |
+ | **Value** | Get the **secret identifier** for the **cosmos-endpoint** secret in your key vault that you created earlier in this tutorial. Enter the identifier in the following format: `@Microsoft.KeyVault(SecretUri=<secret-identifier>)`. |
-5. Once the application is deployed from the Azure portal, navigate to web app that you deployed, and turn on the **Managed service identity** of this application.
+ > [!TIP]
+ > Ensure that the environment variable name uses a double underscore (`__`) instead of a single underscore. The double underscore is a key delimiter supported by .NET on all platforms. For more information, see [environment variables configuration](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).
- :::image type="content" source="./media/access-secrets-from-keyvault/turn-on-managed-service-identity.png" alt-text="Screenshot of the Managed service identity page in the Azure portal.":::
+ > [!NOTE]
+ > For example, if the secret identifier is `https://msdocs-key-vault.vault.azure.net/secrets/cosmos-endpoint/69621c59ef5b4b7294b5def118921b07`, then the reference would be `@Microsoft.KeyVault(SecretUri=https://msdocs-key-vault.vault.azure.net/secrets/cosmos-endpoint/69621c59ef5b4b7294b5def118921b07)`.
+ >
+ > :::image type="content" source="media/access-secrets-from-keyvault/create-app-setting.png" alt-text="Screenshot of the Add/Edit application setting dialog with a new app setting referencing a key vault secret.":::
+ >
-If you run the application now, you'll see the following error, as you have not given any permission to this application in Key Vault.
+1. Select **OK** to persist the new app setting.
+1. Select **New application setting** again. In the **Add/Edit application setting** dialog, enter the following information and then select **OK**:
-## Register the application & grant permissions to read the Key Vault
+ | Setting | Description |
+ | | |
+ | **Name** | `CREDENTIALS__KEY` |
+ | **Value** | Get the **secret identifier** for the **cosmos-readwrite-key** secret in your key vault that you created earlier in this tutorial. Enter the identifier in the following format: `@Microsoft.KeyVault(SecretUri=<secret-identifier>)`. |
-In this section, you register the application with Azure Active Directory and give permissions for the application to read the Key Vault.
+1. Back on the **Configuration** page, select **Save** to update the app settings for the web app.
-1. Navigate to the Azure portal, open the **Key Vault** you created in the previous section.
+ :::image type="content" source="media/access-secrets-from-keyvault/save-app-settings.png" alt-text="Screenshot of the Save option in the Configuration page's menu.":::
-2. Open **Access policies**, select **+Add New** find the web app you deployed, select permissions and select **OK**.
+1. Wait a few minutes for the web app to restart with the new app settings. At this point, the new app settings should indicate that they're a **Key vault Reference**.
- :::image type="content" source="./media/access-secrets-from-keyvault/add-access-policy.png" alt-text="Add access policy":::
+ :::image type="content" source="media/access-secrets-from-keyvault/app-settings-reference.png" lightbox="media/access-secrets-from-keyvault/app-settings-reference.png" alt-text="Screenshot of the Key vault Reference designation on two app settings in a web app.":::
-Now, if you run the application, you can read the secret from Key Vault.
+1. Select **Overview** from the navigation menu. Select **Browse** to see the app with populated credentials.
-
-Similarly, you can add a user to access the key Vault. You need to add yourself to the Key Vault by selecting **Access Policies** and then grant all the permissions you need to run the application from Visual studio. When this application is running from your desktop, it takes your identity.
+ :::image type="content" source="media/access-secrets-from-keyvault/sample-web-app-populated.png" lightbox="media/access-secrets-from-keyvault/sample-web-app-populated.png" alt-text="Screenshot of the web application with valid Azure Cosmos DB for NoSQL account credentials.":::
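At runtime, the sample application can read these app settings through .NET's configuration system, where the double underscore in an environment variable name maps to the `:` section delimiter. The following is a minimal sketch, assuming the `Credentials` section name implied by the `CREDENTIALS__` prefix; the sample app's actual binding code may differ:

```csharp
using Microsoft.Extensions.Configuration;

// Build a configuration rooted in environment variables. App Service
// surfaces each app setting to the process as an environment variable.
IConfiguration configuration = new ConfigurationBuilder()
    .AddEnvironmentVariables()
    .Build();

// CREDENTIALS__ENDPOINT and CREDENTIALS__KEY surface as a "Credentials"
// section with "Endpoint" and "Key" values.
string endpoint = configuration["Credentials:Endpoint"];
string key = configuration["Credentials:Key"];
```

Because the app settings are Key Vault references, the values the process sees are the resolved secrets, not the `@Microsoft.KeyVault(...)` strings.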
## Next steps
-* To configure a firewall for Azure Cosmos DB, see [firewall support](how-to-configure-firewall.md) article.
-* To configure virtual network service endpoint, see [secure access by using VNet service endpoint](how-to-configure-vnet-service-endpoint.md) article.
+- To configure a firewall for Azure Cosmos DB, see the [firewall support](how-to-configure-firewall.md) article.
+- To configure a virtual network service endpoint, see the [secure access by using VNet service endpoint](how-to-configure-vnet-service-endpoint.md) article.
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/change-feed.md
Change feed functionality is surfaced as change stream in API for MongoDB and Qu
Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB for Apache Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB for Apache Cassandra](cassandr).
+## Measuring change feed request unit consumption
+
+Use Azure Monitor to measure the request unit (RU) consumption of the change feed. For more information, see [monitor throughput or request unit usage in Azure Cosmos DB](monitor-request-unit-usage.md).
+ ## Next steps

You can now proceed to learn more about change feed in the following articles:
cosmos-db How To Setup Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-managed-identity.md
az cosmosdb identity remove \
## Next steps
+> [!div class="nextstepaction"]
+> [Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault](access-secrets-from-keyvault.md)
+ - Learn more about [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) - Learn more about [customer-managed keys on Azure Cosmos DB](how-to-setup-cmk.md)
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Title: Introduction to Azure Cosmos DB
-description: Learn about Azure Cosmos DB. This globally distributed multi-model database is built for low latency, elastic scalability, high availability, and offers native support for NoSQL data.
+description: Learn about Azure Cosmos DB. This globally distributed multi-model database is built for low latency, elastic scalability, high availability, and offers native support for NoSQL and relational data.
adobe-target: true
Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds.
-Azure Cosmos DB is a fully managed NoSQL database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
+Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
App development is faster and more productive thanks to:
You can [Try Azure Cosmos DB for Free](https://azure.microsoft.com/try/cosmosdb/
> [!TIP] > To learn more about Azure Cosmos DB, join us every Thursday at 1PM Pacific on Azure Cosmos DB Live TV. See the [Upcoming session schedule and past episodes](https://gotcosmos.com/tv). ## Key Benefits
Gain unparalleled [SLA-backed](https://azure.microsoft.com/support/legal/sla/cos
Build fast with open source APIs, multiple SDKs, schemaless data and no-ETL analytics over operational data. - Deeply integrated with key Azure services used in modern (cloud-native) app development including Azure Functions, IoT Hub, AKS (Azure Kubernetes Service), App Service, and more.-- Choose from multiple database APIs including the native API for NoSQL, API for MongoDB, Apache Cassandra, Apache Gremlin, and Table.
+- Choose from multiple database APIs including the native API for NoSQL, MongoDB, PostgreSQL, Apache Cassandra, Apache Gremlin, and Table.
- Build apps on API for NoSQL using the languages of your choice with SDKs for .NET, Java, Node.js and Python. Or your choice of drivers for any of the other database APIs. - Change feed makes it easy to track and manage changes to database containers and create triggered events with Azure Functions. - Azure Cosmos DB's schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing fast queries.
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-processor.md
The change feed processor is resilient to user code errors. That means that if y
> [!NOTE] > There is only one scenario where a batch of changes will not be retried. If the failure happens on the first-ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
-To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might be another Azure Cosmos DB container. The exact data store does not matter, simply that the unprocessed changes are persisted.
+To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter; what matters is that the unprocessed changes are persisted.
In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed or use the [life cycle notifications](#life-cycle-notifications) to detect underlying failures.
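The errored-message pattern above can be sketched inside the .NET change feed processor delegate. Here, `ProcessItemAsync` and `erroredMessageContainer` are hypothetical stand-ins for your business logic and for whatever container you choose to persist unprocessed changes in:

```csharp
// Delegate wired up via Container.GetChangeFeedProcessorBuilder<dynamic>(...).
async Task HandleChangesAsync(
    IReadOnlyCollection<dynamic> changes,
    CancellationToken cancellationToken)
{
    foreach (dynamic item in changes)
    {
        try
        {
            // Hypothetical business logic for a single change.
            await ProcessItemAsync(item);
        }
        catch (Exception)
        {
            // Persist the unprocessed change so the processor can advance
            // past this batch instead of retrying it indefinitely.
            await erroredMessageContainer.CreateItemAsync(
                item, cancellationToken: cancellationToken);
        }
    }
}
```

Because the delegate completes without throwing, the processor checkpoints the lease and moves on, while the failed documents remain queryable in the errored-message container.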
The change feed processor is resilient to user code errors. That means that if y
> [!NOTE] > There is only one scenario where a batch of changes will not be retried. If the failure happens on the first-ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
-To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might be another Azure Cosmos DB container. The exact data store does not matter, simply that the unprocessed changes are persisted.
+To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter; what matters is that the unprocessed changes are persisted.
In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed.
cosmos-db Estimate Ru With Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/estimate-ru-with-capacity-planner.md
After you sign in, you can see more fields compared to the fields in basic mode.
|Indexing policy|By default, Azure Cosmos DB [indexes all properties](../index-policy.md) in all items for flexible and efficient queries (maps to the **Automatic** indexing policy). <br/><br/> If you choose **off**, none of the properties are indexed. This results in the lowest RU charge for writes. Select **off** policy if you expect to only do [point reads](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) (key value lookups) and/or writes, and no queries. <br/><br/> If you choose **Automatic**, Azure Cosmos DB automatically indexes all the items as they are written. <br/><br/> **Custom** indexing policy allows you to include or exclude specific properties from the index for lower write throughput and storage. To learn more, see [indexing policy](../index-overview.md) and [sample indexing policies](how-to-manage-indexing-policy.md#indexing-policy-examples) articles.| |Total data stored in transactional store |Total estimated data stored(GB) in the transactional store in a single region.| |Use analytical store| Choose **On** if you want to use analytical store. Enter the **Total data stored in analytical store**, it represents the estimated data stored(GB) in the analytical store in a single region. |
-|Workload mode|Select **Steady** option if your workload volume is constant. <br/><br/> Select **Variable** option if your workload volume changes over time. For example, during a specific day or a month. The following setting is available if you choose the variable workload option:<ul><li>Percentage of time at peak: Percentage of time in a month where your workload requires peak (highest) throughput. </li></ul> <br/><br/> For example, if you have a workload that has high activity during 9am ΓÇô 6pm weekday business hours, then the percentage of time at peak is: 45 hours at peak / 730 hours / month = ~6%.<br/><br/>With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](../set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly.|
+|Workload mode|Select the **Steady** option if your workload volume is constant. <br/><br/> Select the **Variable** option if your workload volume changes over time, for example during a specific day or month. The following setting is available if you choose the variable workload option:<ul><li>Percentage of time at peak: Percentage of time in a month where your workload requires peak (highest) throughput. </li></ul> <br/><br/> For example, if you have a workload that has high activity during 9am to 6pm weekday business hours, then the percentage of time at peak is: `(9 hours per weekday at peak * 5 weekdays) / (24 hours per day * 7 days per week) = 45 / 168 = ~27%`.<br/><br/>With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](../set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly.|
|Item size|The size of the data item (for example, document), ranging from 1 KB to 2 MB. You can add estimates for multiple sample items. <br/><br/>You can also **Upload sample (JSON)** document for a more accurate estimate.<br/><br/>If your workload has multiple types of items (with different JSON content) in the same container, you can upload multiple JSON documents and get the estimate. Use the **Add new item** button to add multiple sample JSON documents.| | Number of properties | The average number of properties per an item. | |Point reads/sec |Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. Point read operations are different from query read operations. To learn more about point reads, see the [options to read data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries) article. If your workload mode is **Variable**, you can provide the expected number of point read operations at peak and off peak. |
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
ms.devlang: csharp Previously updated : 11/03/2022 Last updated : 11/07/2022
Get started with the Azure Cosmos DB client library for .NET to create databases
## Prerequisites - An Azure account with an active subscription.
- - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit.
+ - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
- [.NET 6.0 or later](https://dotnet.microsoft.com/download) - [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
This section walks you through creating an Azure Cosmos DB account and setting u
### <a id="create-account"></a>Create an Azure Cosmos DB account > [!TIP]
-> Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit. If you create an account using the free trial, you can safely skip this section.
+> No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. If you create an account using the free trial, you can safely skip ahead to the [Create a new .NET app](#create-a-new-net-app) section.
[!INCLUDE [Create resource tabbed conceptual - ARM, Azure CLI, PowerShell, Portal](./includes/create-resources.md)]
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-dotnet.md
> * [.NET](samples-dotnet.md) >
-The [cosmos-db-sql-api-dotnet-samples](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for NoSQL resources.
+The [cosmos-db-nosql-dotnet-samples](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for NoSQL resources.
## Prerequisites
The sample projects are all self-contained and are designed to be run individually.
| Task | API reference | | : | : |
-| [Create a client with endpoint and key](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/101-client-endpoint-key/Program.cs#L11-L14) |[``CosmosClient(string, string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with connection string](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/102-client-connection-string/Program.cs#L11-L13) |[``CosmosClient(string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with ``DefaultAzureCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/103-client-default-credential/Program.cs#L20-L23) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with custom ``TokenCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/104-client-secret-credential/Program.cs#L25-L28) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with endpoint and key](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/101-client-endpoint-key/Program.cs#L11-L14) |[``CosmosClient(string, string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with connection string](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/102-client-connection-string/Program.cs#L11-L13) |[``CosmosClient(string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with ``DefaultAzureCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/103-client-default-credential/Program.cs#L20-L23) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with custom ``TokenCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/104-client-secret-credential/Program.cs#L25-L28) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
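For example, creating a client with ``DefaultAzureCredential`` looks roughly like the following sketch; the endpoint is a placeholder, and the linked sample shows the full program:

```csharp
using Azure.Identity;
using Microsoft.Azure.Cosmos;

// Authenticates with whichever credential DefaultAzureCredential resolves
// (Azure CLI login, managed identity, environment variables, and so on).
CosmosClient client = new CosmosClient(
    "https://<account-name>.documents.azure.com:443/",
    new DefaultAzureCredential());
```

The credential-based constructors avoid embedding account keys in code or configuration, which pairs naturally with the Key Vault guidance elsewhere in these articles.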
### Databases | Task | API reference | | : | : |
-| [Create a database](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/200-create-database/Program.cs#L19-L21) |[``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
+| [Create a database](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/200-create-database/Program.cs#L19-L21) |[``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
### Containers

| Task | API reference |
| :-- | :-- |
-| [Create a container](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/225-create-container/Program.cs#L26-L30) |[``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
+| [Create a container](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/225-create-container/Program.cs#L26-L30) |[``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
### Items

| Task | API reference |
| :-- | :-- |
-| [Create an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/250-create-item/Program.cs#L35-L46) |[``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
-| [Point read an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/275-read-item/Program.cs#L51-L54) |[``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
-| [Query multiple items](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/300-query-items/Program.cs#L64-L80) |[``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
+| [Create an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/250-create-item/Program.cs#L35-L46) |[``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
+| [Point read an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/275-read-item/Program.cs#L51-L54) |[``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
+| [Query multiple items](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/300-query-items/Program.cs#L64-L80) |[``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
## Next steps
cosmos-db Tutorial Dotnet Console App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-dotnet-console-app.md
In this tutorial, you learn how to:
## Prerequisites - An existing Azure Cosmos DB for NoSQL account.
- - If you have an Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
- - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit.
+ - If you have an existing Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
+ - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
- [Visual Studio Code](https://code.visualstudio.com) - [.NET 6 (LTS) or later](https://dotnet.microsoft.com/download/dotnet/6.0) - Experience writing C# applications.
cosmos-db Tutorial Dotnet Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-dotnet-web-app.md
In this tutorial, you learn how to:
## Prerequisites - An existing Azure Cosmos DB for NoSQL account.
- - If you have an Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
- - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit.
+ - If you have an existing Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
+ - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
- [Visual Studio Code](https://code.visualstudio.com) - [.NET 6 (LTS) or later](https://dotnet.microsoft.com/download/dotnet/6.0) - Experience writing C# applications.
cosmos-db Partial Document Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update.md
A JSON Patch document:
  { "op": "add", "path": "/color", "value": "silver" },
  { "op": "remove", "path": "/used" },
  { "op": "set", "path": "/price", "value": 355.45 },
- { "op": "increment", "path": "/inventory/quantity", "value": 10 }
+ { "op": "incr", "path": "/inventory/quantity", "value": 10 }
] ```
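The semantics of these operations can be illustrated with a small sketch in plain JavaScript. This is not the Cosmos DB SDK; `applyPatch` is a hypothetical helper, shown only to make the add/remove/set/incr behavior concrete:

```javascript
// Hypothetical helper illustrating partial-document-update semantics.
// Walks simple paths like "/color" or "/inventory/quantity".
function applyPatch(doc, ops) {
  const result = JSON.parse(JSON.stringify(doc)); // work on a copy
  for (const { op, path, value } of ops) {
    const keys = path.slice(1).split("/");
    const last = keys.pop();
    // walk to the parent object of the target property
    const parent = keys.reduce((obj, k) => obj[k], result);
    switch (op) {
      case "add":
      case "set":
        parent[last] = value; // create or overwrite the property
        break;
      case "remove":
        delete parent[last]; // drop the property
        break;
      case "incr":
        parent[last] += value; // numeric increment
        break;
    }
  }
  return result;
}

const car = { color: "red", used: true, price: 300, inventory: { quantity: 5 } };
const patched = applyPatch(car, [
  { op: "add", path: "/color", value: "silver" },
  { op: "remove", path: "/used" },
  { op: "set", path: "/price", value: 355.45 },
  { op: "incr", path: "/inventory/quantity", value: 10 },
]);
// patched: { color: "silver", price: 355.45, inventory: { quantity: 15 } }
```

With the service, the same operations are submitted in a single patch request and applied atomically on the server side.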
cosmos-db Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/reserved-capacity.md
You can cancel, exchange, or refund reservations with certain limitations. For m
## Exceeding reserved capacity
-When you reserve capacity for your Azure Cosmos DB resources, you are reserving [provisioned thorughput](set-throughput.md). If the provisioned throughput is exceeded, requests beyond that provisioning will be rate-limited. For more information, see [provisioned throughput types](how-to-choose-offer.md#overview-of-provisioned-throughput-types).
+When you reserve capacity for your Azure Cosmos DB resources, you are reserving [provisioned throughput](set-throughput.md). If the provisioned throughput is exceeded, requests beyond that provisioning will be billed using pay-as-you-go rates. For more information on reservations, see the [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) article. For more information on provisioned throughput, see [provisioned throughput types](how-to-choose-offer.md#overview-of-provisioned-throughput-types).
## Next steps
cosmos-db How To Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-nodejs.md
communicate with the Storage REST services.
Add the following code to the top of the **server.js** file in your application: ```javascript
-const { TableServiceClient } = require("@azure/data-tables");
+const { TableServiceClient, odata } = require("@azure/data-tables");
``` ## Connect to Azure Table service
For successful batch operations, `result` contains information for each operatio
To return a specific entity based on the **PartitionKey** and **RowKey**, use the **getEntity** method. ```javascript
-let result = await tableClient.getEntity("hometasks", "1");
- // result contains the entity
+let result = await tableClient.getEntity("hometasks", "1")
+ .catch((error) => {
+ // handle any errors
+ });
+ // result contains the entity
``` After this operation is complete, `result` contains the entity.
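One subtlety with this pattern: when the call rejects, `await ...catch(handler)` resolves to whatever the handler returns (here, `undefined`), so check `result` before using it. A minimal sketch with a plain Promise, where `getEntityStandIn` is a hypothetical stand-in for a failing client call, not the Tables client:

```javascript
// A stand-in for a call such as tableClient.getEntity(...) that fails.
function getEntityStandIn() {
  return Promise.reject(new Error("entity not found"));
}

async function demo() {
  let result = await getEntityStandIn().catch((error) => {
    // handle any errors; whatever the handler returns becomes `result`
  });
  return result; // undefined on failure, the entity on success
}
```

If you need the failure to stop execution instead, use a `try`/`catch` block around the `await` rather than chaining `.catch()`.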
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
Title: Try Azure Cosmos DB free
description: Try Azure Cosmos DB free of charge. No sign-up or credit card required. It's easy to test your apps, deploy, and run small workloads free for 30 days. Upgrade your account at any time during your trial. -+
cost-management-billing Assign Access Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/assign-access-acm-data.md
To view cost data for Azure EA subscriptions, a user must have at least read acc
| **Scope** | **Defined at** | **Required access to view data** | **Prerequisite EA setting** | **Consolidates data to** |
| --- | --- | --- | --- | --- |
-| Billing account<sup>1</sup> | [https://ea.azure.com](https://ea.azure.com/) | Enterprise Admin | None | All subscriptions from the enterprise agreement |
+| Billing account¹ | [https://ea.azure.com](https://ea.azure.com/) | Enterprise Admin | None | All subscriptions from the enterprise agreement |
| Department | [https://ea.azure.com](https://ea.azure.com/) | Department Admin | **DA view charges** enabled | All subscriptions belonging to an enrollment account that is linked to the department |
-| Enrollment account<sup>2</sup> | [https://ea.azure.com](https://ea.azure.com/) | Account Owner | **AO view charges** enabled | All subscriptions from the enrollment account |
+| Enrollment account² | [https://ea.azure.com](https://ea.azure.com/) | Account Owner | **AO view charges** enabled | All subscriptions from the enrollment account |
| Management group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All subscriptions below the management group |
| Subscription | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All resources/resource groups in the subscription |
| Resource group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All resources in the resource group |
-<sup>1</sup> The billing account is also referred to as the Enterprise Agreement or Enrollment.
+¹ The billing account is also referred to as the Enterprise Agreement or Enrollment.
-<sup>2</sup> The enrollment account is also referred to as the account owner.
+² The enrollment account is also referred to as the account owner.
Direct enterprise administrators can assign the billing account, department, and enrollment account scopes in the [Azure portal](https://portal.azure.com/). For more information, see [Azure portal administration for direct Enterprise Agreements](../manage/direct-ea-administration.md).
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 08/23/2022 Last updated : 11/07/2022
If you have a Microsoft Customer Agreement, Microsoft Partner Agreement, or Ente
If you don't have a Microsoft Customer Agreement, Microsoft Partner Agreement, or Enterprise Agreement, then you won't see the **File Partitioning** option.
+Partitioning isn't currently supported for resource groups or management group scopes.
+ #### Update existing exports to use file partitioning If you have existing exports and you want to set up file partitioning, create a new export. File partitioning is only available with the latest Exports version. There may be minor changes to some of the fields in the usage files that get created.
cost-management-billing Create Multiple Subscriptions Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-multiple-subscriptions-error.md
+
+ Title: Error when you create multiple subscriptions
+
+description: Provides the solution for a problem where you get an error message when you try to create multiple subscriptions.
++
+tags: billing
+++ Last updated : 11/07/2022+++
+# Error when you create multiple subscriptions
+
+When you try to create multiple Azure subscriptions in a short period of time, you might receive an error stating:
+
+`Subscription not created. Please try again later.`
+
+The error is normal and expected.
+
+The error can occur for customers with the following Azure subscription agreement type:
+
+- Microsoft Customer Agreement purchased directly through Azure.com
+
+## Solution
+
+Expect a delay before you can create another subscription.
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- Learn more about [Programmatically creating Azure subscriptions for a Microsoft Customer Agreement with the latest APIs](programmatically-create-subscription-microsoft-customer-agreement.md).
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
After a department is created, the EA admin can add department administrators an
- Add accounts - Remove accounts - Download usage details-- View the monthly usage and charges <sup>1</sup>
+- View the monthly usage and charges ¹
- <sup>1</sup> An EA admin must grant the permissions.
+ ¹ An EA admin must grant the permissions.
### To add a department administrator
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md
After a department is created, the enterprise administrator can add department a
- Add accounts - Remove accounts - Download usage details-- View the monthly usage and charges <sup>1</sup>
+- View the monthly usage and charges ¹
-> <sup>1</sup> An enterprise administrator must grant these permissions. If you were given permission to view department monthly usage and charges, but can't see them, contact your partner.
+> ¹ An enterprise administrator must grant these permissions. If you were given permission to view department monthly usage and charges, but can't see them, contact your partner.
### To add a department administrator
cost-management-billing Manage Tax Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-tax-information.md
Customers in the following countries or regions can add their Tax IDs.
|Ghana | Greece |
|Guatemala | Hungary |
|Iceland | Italy |
-| India <sup>1</sup> | Indonesia |
+| India ¹ | Indonesia |
|Ireland | Isle of Man |
|Kenya | Korea |
| Latvia | Liechtenstein |
Customers in the following countries or regions can add their Tax IDs.
> [!NOTE] > If you don't see the Tax IDs section, Tax IDs are not yet collected for your region. Or, updating Tax IDs in the Azure portal isn't supported for your account.
-<sup>1</sup> Follow the instructions in the next section to add your Goods and Services Taxpayer Identification Number (GSTIN).
+¹ Follow the instructions in the next section to add your Goods and Services Taxpayer Identification Number (GSTIN).
## Add your GSTIN for billing accounts in India
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
As the user that approved the transfer:
You can request billing ownership of products for the subscription types listed below. -- [Action pack](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>-- [Azure in Open Licensing](https://azure.microsoft.com/offers/ms-azr-0111p/)<sup>1</sup>-- [Azure Pass Sponsorship](https://azure.microsoft.com/offers/azure-pass/)<sup>1</sup>
+- [Action pack](https://azure.microsoft.com/offers/ms-azr-0025p/)¹
+- [Azure in Open Licensing](https://azure.microsoft.com/offers/ms-azr-0111p/)¹
+- [Azure Pass Sponsorship](https://azure.microsoft.com/offers/azure-pass/)¹
- [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)-- [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/)<sup>1</sup>
+- [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/)¹
- [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0003p/) - [Pay-As-You-Go Dev/Test](https://azure.microsoft.com/offers/ms-azr-0023p/)-- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)<sup>2</sup>-- [Microsoft Azure Sponsored Offer](https://azure.microsoft.com/offers/ms-azr-0036p/)<sup>1</sup>
+- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)²
+- [Microsoft Azure Sponsored Offer](https://azure.microsoft.com/offers/ms-azr-0036p/)¹
- [Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) - Subscription and reservation transfer are supported for direct EA customers. A direct enterprise agreement is one that's signed between Microsoft and an enterprise agreement customer. - Only subscription transfers are supported for indirect EA customers. Reservation transfers aren't supported. An indirect EA agreement is one where a customer signs an agreement with a Microsoft partner. - [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/)-- [Microsoft Cloud Partner Program](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>-- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)<sup>1</sup>-- [Visual Studio Enterprise (BizSpark) subscribers](https://azure.microsoft.com/offers/ms-azr-0064p/)<sup>1</sup>-- [Visual Studio Enterprise (Cloud Partner Program) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)<sup>1</sup>-- [Visual Studio Enterprise subscribers](https://azure.microsoft.com/offers/ms-azr-0063p/)<sup>1</sup>-- [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)<sup>1</sup>-- [Visual Studio Test Professional subscribers](https://azure.microsoft.com/offers/ms-azr-0060p/)<sup>1</sup>
+- [Microsoft Cloud Partner Program](https://azure.microsoft.com/offers/ms-azr-0025p/)¹
+- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)¹
+- [Visual Studio Enterprise (BizSpark) subscribers](https://azure.microsoft.com/offers/ms-azr-0064p/)¹
+- [Visual Studio Enterprise (Cloud Partner Program) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)¹
+- [Visual Studio Enterprise subscribers](https://azure.microsoft.com/offers/ms-azr-0063p/)¹
+- [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)¹
+- [Visual Studio Test Professional subscribers](https://azure.microsoft.com/offers/ms-azr-0060p/)¹
-<sup>1</sup> Any credit available on the subscription won't be available in the new account after the transfer.
+¹ Any credit available on the subscription won't be available in the new account after the transfer.
-<sup>2</sup> Only supported for products in accounts that are created during sign-up on the Azure website.
+² Only supported for products in accounts that are created during sign-up on the Azure website.
## Check for access [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mpa-request-ownership.md
On the Review request tab, the following status messages might be displayed.
You can request billing ownership of the following subscription types.
-* [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)<sup>1</sup>
+* [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)¹
* [Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/)
-* Azure Plan<sup>1</sup> [(Microsoft Customer Agreement in Enterprise Motion)](https://www.microsoft.com/Licensing/how-to-buy/microsoft-customer-agreement)
+* Azure Plan¹ [(Microsoft Customer Agreement in Enterprise Motion)](https://www.microsoft.com/Licensing/how-to-buy/microsoft-customer-agreement)
-<sup>1</sup> You must convert an EA Dev/Test subscription to an EA Enterprise offer using a support ticket and respectively, an Azure Plan Dev/Test offer to Azure plan. A Dev/Test subscription will be billed at a pay-as-you-go rate after conversion. There's no discount currently available through the Dev/Test offer to CSP partners.
+¹ You must convert an EA Dev/Test subscription to an EA Enterprise offer using a support ticket and, respectively, an Azure Plan Dev/Test offer to an Azure plan. A Dev/Test subscription will be billed at a pay-as-you-go rate after conversion. There's no discount currently available through the Dev/Test offer to CSP partners.
## Additional information
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
tags: billing
Previously updated : 07/18/2022 Last updated : 11/04/2022
If you're not automatically approved, you can submit a request to Azure support
- If existing, current payment method: - Order ID (requesting for invoice option): - Account Admins Live ID (or Org ID) (should be company domain):
- - Commerce Account ID:
+ - Commerce Account ID¹:
- Company Name (as registered under VAT or Government Website): - Company Address (as registered under VAT or Government Website): - Company Website:
If you're not automatically approved, you can submit a request to Azure support
- Add your billing contact information in the Azure portal before the credit limit can be approved. The contact details should be related to the company's Accounts Payable or Finance department. 1. Verify your contact information and preferred contact method, and then select **Create**.
+¹ If you don't know your Commerce Account ID, it's the GUID shown on the Properties page for your billing account. To find it in the Azure portal, navigate to **Cost Management**, select a billing scope, and then select **Properties** in the left menu. The GUID shown on the billing scope Properties page is your Commerce Account ID.
+ If we need to run a credit check because of the amount of credit that you need, we'll send you a credit check application. We might ask you to provide your company's audited financial statements. If no financial information is provided or if the information isn't strong enough to support the amount of credit limit required, we might ask for a security deposit or a standby letter of credit to approve your credit check request. ## Switch to pay by check or wire transfer after approval
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
To help manage your organization's usage and spend, Azure customers with an Enterprise Agreement can assign six distinct administrative roles: - Enterprise Administrator-- Enterprise Administrator (read only)<sup>1</sup>
+- Enterprise Administrator (read only)¹
- EA purchaser - Department Administrator - Department Administrator (read only)-- Account Owner<sup>2</sup>
+- Account Owner²
-<sup>1</sup> The Bill-To contact of the EA contract will be under this role.
+¹ The Bill-To contact of the EA contract will be under this role.
-<sup>2</sup> The Bill-To contact cannot be added or changed in the Azure EA Portal and will be added to the EA enrollment based on the user who is set up as the Bill-To contact on agreement level. To change the Bill-To contact, a request needs to be made through a partner/software advisor to the Regional Operations Center (ROC).
+² The Bill-To contact cannot be added or changed in the Azure EA Portal and will be added to the EA enrollment based on the user who is set up as the Bill-To contact on agreement level. To change the Bill-To contact, a request needs to be made through a partner/software advisor to the Regional Operations Center (ROC).
The first enrollment administrator that is set up during the enrollment provisioning determines the authentication type of the Bill-to contact account. When the bill-to contact gets added to the EA Portal as a read-only administrator, they are given Microsoft account authentication.
The following sections describe the limitations and capabilities of each role.
| EA purchaser assigned to an SPN | Unlimited |
|Department Administrator|Unlimited|
|Department Administrator (read only)|Unlimited|
-|Account Owner|1 per account<sup>3</sup>|
+|Account Owner|1 per account³|
-<sup>3</sup> Each account requires a unique Microsoft account, or work or school account.
+³ Each account requires a unique Microsoft account, or work or school account.
## Organization structure and permissions by role
The following sections describe the limitations and capabilities of each role.
| --- | --- | --- | --- | --- | --- | --- | --- |
|View Enterprise Administrators|✔|✔|✔|✘|✘|✘|✔|
|Add or remove Enterprise Administrators|✔|✘|✘|✘|✘|✘|✘|
-|View Notification Contacts<sup>4</sup> |✔|✔|✔|✘|✘|✘|✔|
-|Add or remove Notification Contacts<sup>4</sup> |✔|✘|✘|✘|✘|✘|✘|
+|View Notification Contacts⁴ |✔|✔|✔|✘|✘|✘|✔|
+|Add or remove Notification Contacts⁴ |✔|✘|✘|✘|✘|✘|✘|
|Create and manage Departments |✔|✘|✘|✘|✘|✘|✘|
|View Department Administrators|✔|✔|✔|✔|✔|✘|✔|
|Add or remove Department Administrators|✔|✘|✘|✔|✘|✘|✘|
-|View Accounts in the enrollment |✔|✔|✔|✔<sup>5</sup>|✔<sup>5</sup>|✘|✔|
-|Add Accounts to the enrollment and change Account Owner|✔|✘|✘|✔<sup>5</sup>|✘|✘|✘|
+|View Accounts in the enrollment |✔|✔|✔|✔⁵|✔⁵|✘|✔|
+|Add Accounts to the enrollment and change Account Owner|✔|✘|✘|✔⁵|✘|✘|✘|
|Purchase reservations|✔|✘|✔|✘|✘|✘|✘|
|Create and manage subscriptions and subscription permissions|✘|✘|✘|✘|✘|✔|✘|
-- <sup>4</sup> Notification contacts are sent email communications about the Azure Enterprise Agreement.-- <sup>5</sup> Task is limited to accounts in your department.
+- ⁴ Notification contacts are sent email communications about the Azure Enterprise Agreement.
+- ⁵ Task is limited to accounts in your department.
## Add a new enterprise administrator
Direct EA admins can add department admins in the Azure portal. For more informa
|View department spending quotas|✔|✔|✔|✘|✘|✘|✔|
|Set department spending quotas|✔|✘|✘|✘|✘|✘|✘|
|View organization's EA price sheet|✔|✔|✔|✘|✘|✘|✔|
-|View usage and cost details|✔|✔|✔|✔<sup>6</sup>|✔<sup>6</sup>|✔<sup>7</sup>|✔|
+|View usage and cost details|✔|✔|✔|✔⁶|✔⁶|✔⁷|✔|
|Manage resources in Azure portal|✘|✘|✘|✘|✘|✔|✘|
-- <sup>6</sup> Requires that the Enterprise Administrator enable **DA view charges** policy in the Enterprise portal. The Department Administrator can then see cost details for the department.-- <sup>7</sup> Requires that the Enterprise Administrator enable **AO view charges** policy in the Enterprise portal. The Account Owner can then see cost details for the account.
+- ⁶ Requires that the Enterprise Administrator enable **DA view charges** policy in the Enterprise portal. The Department Administrator can then see cost details for the department.
+- ⁷ Requires that the Enterprise Administrator enable **AO view charges** policy in the Enterprise portal. The Account Owner can then see cost details for the account.
## See pricing for different user roles
cost-management-billing Microsoft Customer Agreement Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md
When you move from a pay-as-you-go or an enterprise agreement to a Microsoft Cus
| MCA purchase method | Previous payment method - Credit card | Previous payment method - Invoice | New payment method under MCA - Credit card | New payment method under MCA - Invoice |
| --- | --- | --- | --- | --- |
-| Through a Microsoft representative | | ✔ | ✔ <sup>4</sup> | ✔ <sup>2</sup> |
-| Azure website | ✔ | ✔ <sup>1</sup> | ✔ | ✔ <sup>3</sup> |
+| Through a Microsoft representative | | ✔ | ✔ ⁴ | ✔ ² |
+| Azure website | ✔ | ✔ ¹ | ✔ | ✔ ³ |
-<sup>1</sup> By request.
+¹ By request.
-<sup>2</sup> You continue to pay by invoice/wire transfer under the MCA but will need to send your payments to a different bank account. For information about where to send your payment, see [Pay your bill](../understand/pay-bill.md#wire-bank-details) after you select your country in the list.
+² You continue to pay by invoice/wire transfer under the MCA but will need to send your payments to a different bank account. For information about where to send your payment, see [Pay your bill](../understand/pay-bill.md#wire-bank-details) after you select your country in the list.
-<sup>3</sup> For more information, see [Pay for your Azure subscription by invoice](../manage/pay-by-invoice.md).
+³ For more information, see [Pay for your Azure subscription by invoice](../manage/pay-by-invoice.md).
-<sup>4</sup> For more information, see [Pay your bill for Microsoft Azure](../understand/pay-bill.md#pay-now-in-the-azure-portal).
+⁴ For more information, see [Pay your bill for Microsoft Azure](../understand/pay-bill.md#pay-now-in-the-azure-portal).
## Complete outstanding payments
cost-management-billing Overview Azure Hybrid Benefit Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md
The following table summarizes how many NCLs you need to fully discount the SQL
| --- | --- | --- |
| SQL Managed Instance or Instance pool | Business Critical | 4 per vCore |
| SQL Managed Instance or Instance pool | General Purpose | 1 per vCore |
-| SQL Database or Elastic pool<sup>1</sup> | Business Critical | 4 per vCore |
-| SQL Database or Elastic pool<sup>1</sup> | General Purpose | 1 per vCore |
-| SQL Database or Elastic pool<sup>1</sup> | Hyperscale | 1 per vCore |
+| SQL Database or Elastic pool¹ | Business Critical | 4 per vCore |
+| SQL Database or Elastic pool¹ | General Purpose | 1 per vCore |
+| SQL Database or Elastic pool¹ | Hyperscale | 1 per vCore |
| Azure Data Factory SQL Server Integration Services | Enterprise | 4 per vCore |
| Azure Data Factory SQL Server Integration Services | Standard | 1 per vCore |
-| SQL Server Virtual Machines<sup>2</sup> | Enterprise | 4 per vCPU |
-| SQL Server Virtual Machines<sup>2</sup> | Standard | 1 per vCPU |
+| SQL Server Virtual Machines² | Enterprise | 4 per vCPU |
+| SQL Server Virtual Machines² | Standard | 1 per vCPU |
-<sup>1</sup> *Azure Hybrid Benefit isn't available in the serverless compute tier of Azure SQL Database.*
+¹ *Azure Hybrid Benefit isn't available in the serverless compute tier of Azure SQL Database.*
-<sup>2</sup> *Subject to a minimum of four vCore licenses per Virtual Machine.*
+² *Subject to a minimum of four vCore licenses per Virtual Machine.*
## Ongoing scope-level management
data-factory Concepts Pipeline Execution Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-pipeline-execution-triggers.md
For a complete sample, see [Quickstart: Create a data factory by using the REST
The following sample command shows you how to manually run your pipeline by using Azure PowerShell: ```powershell
-Invoke-AzDataFactoryV2Pipeline -DataFactory $df -PipelineName "Adfv2QuickStartPipeline" -ParameterFile .\PipelineParameters.json
+Invoke-AzDataFactoryV2Pipeline -DataFactory $df -PipelineName "Adfv2QuickStartPipeline" -ParameterFile .\PipelineParameters.json -ResourceGroupName "myResourceGroup"
``` You pass parameters in the body of the request payload. In the .NET SDK, Azure PowerShell, and the Python SDK, you pass values in a dictionary that's passed as an argument to the call:
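As a sketch, the `PipelineParameters.json` file passed to `-ParameterFile` earlier is a JSON object of name/value pairs. The parameter names below are hypothetical; use the names your pipeline actually defines:

```json
{
    "sourceBlobContainer": "MySourceFolder",
    "sinkBlobContainer": "MySinkFolder"
}
```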
defender-for-cloud Enable Pull Request Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-pull-request-annotations.md
Title: Enable pull request annotations in GitHub or in Azure DevOps
description: Add pull request annotations in GitHub or in Azure DevOps. By adding pull request annotations, your SecOps and developer teams can stay on the same page when it comes to mitigating issues. Previously updated : 10/30/2022 Last updated : 11/07/2022 # Enable pull request annotations in GitHub and Azure DevOps
Before you can enable pull request annotations, your main branch must have enabl
1. Locate the Build Validation section.
-1. Ensure the CI Build is toggled to **On**.
+1. Ensure the build validation for your repository is toggled to **On**.
- :::image type="content" source="media/tutorial-enable-pr-annotations/build-validation.png" alt-text="Screenshot that shows where the CI Build toggle is located.":::
+ :::image type="content" source="media/tutorial-enable-pr-annotations/build-validation.png" alt-text="Screenshot that shows where the CI Build toggle is located." lightbox="media/tutorial-enable-pr-annotations/build-validation.png":::
1. Select **Save**.
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Title: OT monitoring alert types and descriptions description: Learn more about the alerts that are triggered for traffic on OT networks. Previously updated : 11/01/2022 Last updated : 11/03/2022
This article describes the alert types, descriptions, and severities that the Defender for IoT engines may generate. Use this information to help map alerts into playbooks, define Forwarding rules, Exclusion rules, and custom alerts, and configure the appropriate rules within a SIEM. Alerts appear in the **Alerts** window, which allows you to manage the alert event.
-### Alert news
+## Alerts disabled by default
-New alerts may be added and existing alerts may be updated or disabled. Certain disabled alerts can be re-enabled from the **Support** page of the sensor console. Alerts that can be re-enabled are marked with an asterisk (*) in the tables below.
+Several alerts are disabled by default, as indicated by asterisks (*) in the tables below. Sensor administrator users can enable or disable alerts from the **Support** page on a specific sensor.
-You may have configured newly disabled alerts in your Forwarding rules. If so, you may need to update related Defender for IoT Exclusion rules, or update SIEM rules and playbooks where relevant.
+If you disable alerts that are referenced in other places, such as alert forwarding rules, make sure to update those references as needed.
See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-microsoft-defender-for-iot) for detailed information about changes made to alerts.
Each alert has one of the following categories:
Policy engine alerts describe detected deviations from learned baseline behavior.
-| Title | Description | Severity | Category |
-|--|--|--|--|
-| **Beckhoff Software Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Database Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 sign-in failures in 5 minutes | Major | Authentication |
-| **Emerson ROC Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **External address within the network communicated with Internet** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
-| **Field Device Discovered Unexpectedly** | A new source device was detected on the network but hasn't been authorized. | Major | Discovery |
-| **Firmware Change Detected** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Foxboro I/A Unauthorized Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **FTP Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication |
-| **Function Code Raised Unauthorized Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures |
-| **GOOSE Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
-| **Honeywell Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| * **Illegal HTTP Communication** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| **Internet Access Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access |
-| **Mitsubishi Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Modbus Address Range Violation** | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior |
-| **Modbus Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **New Activity Detected - CIP Class** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - CIP Class Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - CIP PCCC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - CIP Symbol** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - EtherNet/IP I/O Connection** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - EtherNet/IP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - GSM Message Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - LonTalk Command Codes** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Port Discovery** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery |
-| **New Activity Detected - LonTalk Network Variable** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Ovation Data Request** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Read/Write Command (AMS Index Group)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
-| **New Activity Detected - Read/Write Command (AMS Index Offset)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
-| **New Activity Detected - Unauthorized DeltaV Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Unauthorized DeltaV ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Unauthorized RPC Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using AMS Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using Siemens SICAM Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using Suitelink Protocol command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using Suitelink Protocol sessions** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using Yokogawa VNetIP Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Asset Detected** | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major | Discovery |
-| **New LLDP Device Configuration** | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes |
-| **Omron FINS Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **S7 Plus PLC Firmware Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Sampled Values Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
-| **Suspicion of Illegal Integrity Scan** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan |
-| **Toshiba Computer Link Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior |
-| **Unauthorized ABB Totalflow File Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized ABB Totalflow Register Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Access to Siemens S7 Data Block** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior |
-| **Unauthorized Access to Siemens S7 Plus Object** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Access to Wonderware Tag** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior |
-| **Unauthorized BACNet Object Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized BACNet Route** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Database Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
-| **Unauthorized Database Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| **Unauthorized Emerson ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized GE SRTP File Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized GE SRTP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized GE SRTP System Memory Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized HTTP Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| * **Unauthorized HTTP SOAP Action** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| * **Unauthorized HTTP User Agent** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior |
-| **Unauthorized Internet Connectivity Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
-| **Unauthorized Mitsubishi MELSEC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized MMS Program Access** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming |
-| **Unauthorized MMS Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Multicast/Broadcast Connection** | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior |
-| **Unauthorized Name Query** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| **Unauthorized OPC UA Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized OPC UA Request/Response** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Operation was detected by a User Defined Rule** | Traffic was detected between two devices. This activity is unauthorized, based on a Custom Alert Rule defined by a user. | Major | Custom Alerts |
-| **Unauthorized PLC Configuration Read** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes |
-| **Unauthorized PLC Configuration Write** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes |
-| **Unauthorized PLC Program Upload** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming |
-| **Unauthorized PLC Programming** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming |
-| **Unauthorized Profinet Frame Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized SAIA S-Bus Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Siemens S7 Execution of Control Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Siemens S7 Execution of User Defined Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Siemens S7 Plus Block Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Siemens S7 Plus Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized SMB Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
-| **Unauthorized SNMP Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| **Unauthorized SSH Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access |
-| **Unauthorized Windows Process** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
-| **Unauthorized Windows Service** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
-| **Unauthorized Operation was detected by a User Defined Rule** | New traffic parameters were detected. This parameter combination violates a user defined rule. | Major | Custom Alerts |
-| **Unpermitted Modbus Schneider Electric Extension** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unpermitted Usage of ASDU Types** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unpermitted Usage of DNP3 Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unpermitted Usage of Internal Indication (IIN)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands |
-| **Unpermitted Usage of Modbus Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **Beckhoff Software Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Database Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 sign-in failures in 5 minutes | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Collection <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0811: Data from Information Repositories|
+| **Emerson ROC Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **External address within the network communicated with Internet** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
+| **Field Device Discovered Unexpectedly** | A new source device was detected on the network but hasn't been authorized. | Major | Discovery | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Firmware Change Detected** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Foxboro I/A Unauthorized Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **FTP Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0869: Standard Application Layer Protocol |
+| **Function Code Raised Unauthorized Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0835: Manipulate I/O Image |
+| **GOOSE Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Honeywell Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Illegal HTTP Communication [*](#alerts-disabled-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0846: Remote System Discovery |
+| **Internet Access Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
+| **Mitsubishi Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Modbus Address Range Violation** | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Modbus Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **New Activity Detected - CIP Class** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0888: Remote System Information Discovery |
+| **New Activity Detected - CIP Class Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - CIP PCCC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - CIP Symbol** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - EtherNet/IP I/O Connection** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0846: Remote System Discovery <br> - T0835: Manipulate I/O Image |
+| **New Activity Detected - EtherNet/IP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - GSM Message Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **New Activity Detected - LonTalk Command Codes** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Impair Process Control <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0855: Unauthorized Command Message |
+| **New Port Discovery** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **New Activity Detected - LonTalk Network Variable** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **New Activity Detected - Ovation Data Request** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Discovery <br><br> **Techniques:** <br> - T0801: Monitor Process State <br> - T0888: Remote System Information Discovery |
+| **New Activity Detected - Read/Write Command (AMS Index Group)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Read/Write Command (AMS Index Offset)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Unauthorized DeltaV Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **New Activity Detected - Unauthorized DeltaV ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **New Activity Detected - Unauthorized RPC Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **New Activity Detected - Using AMS Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter <br> - T0821: Modify Controller Tasking |
+| **New Activity Detected - Using Siemens SICAM Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Using Suitelink Protocol command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Using Suitelink Protocol sessions** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - Using Yokogawa VNetIP Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **New Asset Detected** | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major | Discovery | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **New LLDP Device Configuration** | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Omron FINS Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **S7 Plus PLC Firmware Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Sampled Values Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Suspicion of Illegal Integrity Scan** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Toshiba Computer Link Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized ABB Totalflow File Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized ABB Totalflow Register Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Access to Siemens S7 Data Block** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Initial Access <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0811: Data from Information Repositories |
+| **Unauthorized Access to Siemens S7 Plus Object** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking <br> - T0809: Data Destruction |
+| **Unauthorized Access to Wonderware Tag** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Impair Process Control <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0855: Unauthorized Command Message |
+| **Unauthorized BACNet Object Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized BACNet Route** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Database Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0859: Valid Accounts <br> - T0811: Data from Information Repositories |
+| **Unauthorized Database Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Initial Access <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0811: Data from Information Repositories |
+| **Unauthorized Emerson ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized GE SRTP File Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0801: Monitor Process State <br> - T0859: Valid Accounts |
+| **Unauthorized GE SRTP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized GE SRTP System Memory Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0846: Remote System Discovery <br> - T0855: Unauthorized Command Message |
+| **Unauthorized HTTP Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0822: External Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized HTTP SOAP Action [*](#alerts-disabled-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br> - Execution <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0871: Execution through API |
+| **Unauthorized HTTP User Agent [*](#alerts-disabled-by-default)** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized Internet Connectivity Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
+| **Unauthorized Mitsubishi MELSEC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized MMS Program Access** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized MMS Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Multicast/Broadcast Connection** | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unauthorized Name Query** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized OPC UA Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized OPC UA Request/Response** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized Operation was detected by a User Defined Rule** | Traffic was detected between two devices. This activity is unauthorized, based on a Custom Alert Rule defined by a user. | Major | Custom Alerts | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unauthorized PLC Configuration Read** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0801: Monitor Process State |
+| **Unauthorized PLC Configuration Write** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0831: Manipulation of Control <br> - T0889: Modify Program |
+| **Unauthorized PLC Program Upload** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0845: Program Upload |
+| **Unauthorized PLC Programming** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Lateral Movement <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0889: Modify Program <br> - T0843: Program Download |
+| **Unauthorized Profinet Frame Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized SAIA S-Bus Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Unauthorized Siemens S7 Execution of Control Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0809: Data Destruction |
+| **Unauthorized Siemens S7 Execution of User Defined Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0836: Modify Parameter <br> - T0863: User Execution |
+| **Unauthorized Siemens S7 Plus Block Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br> - Execution <br><br> **Techniques:** <br> - T0803: Block Command Message <br> - T0889: Modify Program <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Siemens S7 Plus Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0863: User Execution |
+| **Unauthorized SMB Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0886: Remote Services <br> - T0859: Valid Accounts |
+| **Unauthorized SNMP Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Command And Control <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0885: Commonly Used Port |
+| **Unauthorized SSH Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0886: Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized Windows Process** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Execution <br> - Privilege Escalation <br> - Command And Control <br><br> **Techniques:** <br> - T0841: Hooking <br> - T0885: Commonly Used Port |
+| **Unauthorized Windows Service** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Unauthorized Operation was detected by a User Defined Rule** | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Major | | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unpermitted Modbus Schneider Electric Extension** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Unpermitted Usage of ASDU Types** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |**Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Unpermitted Usage of DNP3 Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unpermitted Usage of Internal Indication (IIN)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unpermitted Usage of Modbus Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
## Anomaly engine alerts

Anomaly engine alerts describe detected anomalies in network activity.
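Many anomaly alerts in the table below state a trigger threshold such as "20 exceptions in 1 hour" or "50 sessions in 1 minute". Conceptually, these are sliding-window rules: an alert fires when the count of matching events inside the window reaches the limit. The sketch below is illustrative only (the `ThresholdRule` class and its parameters are hypothetical, not the sensor's actual implementation):

```python
from collections import deque

class ThresholdRule:
    """Illustrative sliding-window rule, e.g. '20 exceptions in 1 hour'."""

    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window = window_seconds
        self.events = deque()  # timestamps of matching events

    def record(self, ts: float) -> bool:
        """Record an event at time ts; return True if the threshold is crossed."""
        self.events.append(ts)
        # Drop events that have aged out of the window.
        while self.events and ts - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.limit

# 'Abnormal Exception Pattern in Slave': 20 exceptions in 1 hour (3600 s).
rule = ThresholdRule(limit=20, window_seconds=3600)
alerts = [rule.record(t) for t in range(0, 2000, 100)]  # one event every 100 s
print(alerts[19])  # the 20th event arrives well inside the hour -> True
```

Events that fall outside the window no longer count toward the limit, which is why a slow trickle of errors doesn't trigger these alerts while a burst does.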
-| Title | Description | Severity | Category |
-|--|--|--|--|
-| **Abnormal Exception Pattern in Slave** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior |
-| * **Abnormal HTTP Header Length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
-| * **Abnormal Number of Parameters in HTTP Header** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
-| **Abnormal Periodic Behavior In Communication Channel** | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior |
-| **Abnormal Termination of Applications** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior |
-| **Abnormal Traffic Bandwidth** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
-| **Abnormal Traffic Bandwidth Between Devices** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
-| **Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan |
-| **ARP Address Scan Detected** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan |
-| **ARP Spoofing** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
-| **Excessive Login Attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 sign-in attempts in 1 minute | Critical | Authentication |
-| **Excessive Number of Sessions** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior |
-| **Excessive Restart Rate of an Outstation** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/ Stop Commands |
-| **Excessive SMB login attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 sign-in attempts in 10 minutes | Critical | Authentication |
-| **ICMP Flooding** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
-|* **Illegal HTTP Header Content** | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior |
-| **Inactive Communication Channel** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive |
-| **Long Duration Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan |
-| **Password Guessing Attempt Detected** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication |
-| **PLC Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan |
-| **Port Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan |
-| **Unexpected message length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | Critical | Abnormal Communication Behavior |
-| **Unexpected Traffic for Standard Port** | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior |
+| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **Abnormal Exception Pattern in Slave** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
+| **Abnormal HTTP Header Length [*](#alerts-disabled-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Abnormal Number of Parameters in HTTP Header [*](#alerts-disabled-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Abnormal Periodic Behavior In Communication Channel** | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Abnormal Termination of Applications** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior | **Tactics:** <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0889: Modify Program <br> - T0831: Manipulation of Control |
+| **Abnormal Traffic Bandwidth** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Abnormal Traffic Bandwidth Between Devices** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **ARP Address Scan Detected** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device's address hasn't been authorized as a valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
+| **ARP Spoofing** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0830: Man in the Middle |
+| **Excessive Login Attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 sign-in attempts in 1 minute | Critical | Authentication | **Tactics:** <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
+| **Excessive Number of Sessions** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior | **Tactics:** <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
+| **Excessive Restart Rate of an Outstation** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O |
+| **Excessive SMB login attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 sign-in attempts in 10 minutes | Critical | Authentication | **Tactics:** <br> - Persistence <br> - Execution <br> - Lateral Movement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0853: Scripting <br> - T0859: Valid Accounts |
+| **ICMP Flooding** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
+| **Illegal HTTP Header Content [*](#alerts-disabled-by-default)** | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Inactive Communication Channel** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **Long Duration Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Password Guessing Attempt Detected** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
+| **PLC Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Port Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unexpected message length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | Critical | Abnormal Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Unexpected Traffic for Standard Port** | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Command And Control <br> - Discovery <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0842: Network Sniffing |
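Most of the volumetric alerts above fire when an event counter crosses the documented threshold within a fixed time window (for example, 20 sign-in attempts in 1 minute for **Excessive Login Attempts**). The detection engine's internals aren't public; the following is only a minimal sketch of that kind of sliding-window threshold check, with hypothetical names (`ThresholdDetector`, `observe`) that aren't taken from the product:

```python
from collections import deque

class ThresholdDetector:
    """Illustrative sliding-window counter (not the product's implementation):
    signal an alert when `threshold` or more events from one source occur
    within `window_seconds`."""

    def __init__(self, threshold: int, window_seconds: float):
        self.threshold = threshold
        self.window = window_seconds
        self.events = {}  # source -> deque of event timestamps

    def observe(self, source: str, timestamp: float) -> bool:
        q = self.events.setdefault(source, deque())
        q.append(timestamp)
        # Drop timestamps that have fallen out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold  # True -> alert condition met

# 20 sign-in attempts in 1 minute, as documented for "Excessive Login Attempts".
detector = ThresholdDetector(threshold=20, window_seconds=60)
fired = any(detector.observe("10.0.0.5", t) for t in range(20))  # 20 events in 20 s
```

The per-source deque keeps only in-window timestamps, so memory stays proportional to the threshold rather than to total traffic.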
## Protocol violation engine alerts

Protocol violation engine alerts describe detected deviations in the packet structure, or field values, compared to protocol specifications.
-| Title | Description | Severity | Category |
-|--|--|--|--|
-| **Excessive Malformed Packets In a Single Session** | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands |
-| **Firmware Update** | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change |
-| **Function Code Not Supported by Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Illegal BACNet message** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Illegal Connection Attempt on Port 0** | A source device attempted to connect to the destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands |
-| **Illegal DNP3 Operation** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Illegal MODBUS Operation (Exception Raised by Master)** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Illegal MODBUS Operation (Function Code Zero)** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Illegal Protocol Version** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Incorrect Parameter Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Initiation of an Obsolete Function Code (Initialize Data)** | The source device initiated an invalid request. | Minor | Illegal Commands |
-| **Initiation of an Obsolete Function Code (Save Config)** | The source device initiated an invalid request. | Minor | Illegal Commands |
-| **Master Requested an Application Layer Confirmation** | The source device initiated an invalid request. | Warning | Illegal Commands |
-| **Modbus Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands |
-| **Slave Device Received Illegal ASDU Type** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Command Cause of Transmission** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Common Address** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Data Address Parameter** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Data Value Parameter** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Function Code** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Information Object Address** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Unknown Object Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Usage of a Reserved Function Code** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Usage of Improper Formatting by Outstation** | The source device initiated an invalid request. | Warning | Illegal Commands |
-| **Usage of Reserved Status Flags (IIN)** | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands |
+| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **Excessive Malformed Packets In a Single Session** | An abnormal number of malformed packets was sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
+| **Firmware Update** | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Function Code Not Supported by Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Illegal BACNet message** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Illegal Connection Attempt on Port 0** | A source device attempted to connect to the destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Illegal DNP3 Operation** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal MODBUS Operation (Exception Raised by Master)** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal MODBUS Operation (Function Code Zero)** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal Protocol Version** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0820: Remote Services <br> - T0836: Modify Parameter |
+| **Incorrect Parameter Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Initiation of an Obsolete Function Code (Initialize Data)** | The source device initiated an invalid request. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Initiation of an Obsolete Function Code (Save Config)** | The source device initiated an invalid request. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Master Requested an Application Layer Confirmation** | The source device initiated an invalid request. | Warning | Illegal Commands | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Modbus Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Slave Device Received Illegal ASDU Type** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Command Cause of Transmission** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Common Address** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Data Address Parameter** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Data Value Parameter** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Function Code** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Information Object Address** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Unknown Object Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Usage of a Reserved Function Code** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Usage of Improper Formatting by Outstation** | The source device initiated an invalid request. | Warning | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Usage of Reserved Status Flags (IIN)** | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
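Alerts such as **Illegal MODBUS Operation (Function Code Zero)** and **Usage of a Reserved Function Code** come from comparing packet field values against the protocol specification. As an illustrative sketch only (the function name and return labels are hypothetical, not the engine's logic), a Modbus request function code can be classified against the ranges defined in the Modbus application protocol specification:

```python
# Illustrative only: classify a Modbus request function code against the ranges
# in the Modbus application protocol specification. Codes 65-72 and 100-110 are
# user-defined; 128-255 is reserved for exception responses, not requests.
USER_DEFINED = set(range(65, 73)) | set(range(100, 111))

def classify_function_code(code: int) -> str:
    if code == 0:
        return "illegal"        # cf. "Illegal MODBUS Operation (Function Code Zero)"
    if code < 0 or code > 127:
        return "illegal"        # not a valid request code (128-255 = exception responses)
    if code in USER_DEFINED:
        return "user-defined"
    return "public"             # e.g. 3 = Read Holding Registers
```

A real engine would additionally check the code against the destination device's supported function list, which is what distinguishes **Function Code Not Supported by Outstation** from a code that is illegal for every device.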
## Malware engine alerts

Malware engine alerts describe detected malicious network activity.
-| Title | Description| Severity | Category |
-|--|--|--|--|
-| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity |
-| **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
-| **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | Critical | Suspicion of Malicious Activity |
-| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity |
-| **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Flame)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity |
-| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (WannaCry)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
-| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Suspicion of Remote Windows Service Management** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Suspicious Traffic Detected** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team | Critical | Suspicion of Malicious Activity |
-| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup |
+| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
+| **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
+| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that antivirus software is installed correctly, to demonstrate what happens when a virus is found, and to check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Impact <br><br> **Techniques:** <br> - T0826: Loss of Availability <br> - T0828: Loss of Productivity and Revenue <br> - T0847: Replication Through Removable Media |
+| **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a denial-of-service (DoS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (Flame)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information <br> - T0811: Data from Information Repositories |
+| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0846: Remote System Discovery <br> - T0814: Denial of Service |
+| **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Evasion <br><br> **Techniques:** <br> - T0849: Masquerading |
+| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0884: Connection Proxy |
+| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0818: Engineering Workstation Compromise <br> - T0866: Exploitation of Remote Services <br> - T0831: Manipulation of Control |
+| **Suspicion of Malicious Activity (WannaCry)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br> - Initial Access <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of Remote Windows Service Management** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: External Remote Services |
+| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Evasion <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0851: Rootkit |
+| **Suspicious Traffic Detected** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
## Operational engine alerts

Operational engine alerts describe detected operational incidents or malfunctioning entities.
-| Title | Description | Severity | Category |
-|--|--|--|--|
-| **An S7 Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| **BACNet Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Bad MMS Device State** | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues |
-| **Change of Device Configuration** | A configuration change was detected on a source device. | Minor | Configuration Changes |
-| **Continuous Event Buffer Overflow at Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow |
-| **Controller Reset** | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands |
-| **Controller Stop** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| **Device Failed to Receive a Dynamic IP Address** | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident | Major | Command Failures |
-| **Device is Suspected to be Disconnected (Unresponsive)** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive |
-| **EtherNet/IP CIP Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **EtherNet/IP Encapsulation Protocol Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Event Buffer Overflow in Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
-| **Expected Backup Operation Did Not Occur** | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Major | Backup |
-| **GE SRTP Command Failure** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **GE SRTP Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| **GOOSE Control Block Requires Further Configuration** | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes |
-| **GOOSE Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
-| **Honeywell Controller Unexpected Status** | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues |
-|* **HTTP Client Error** | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior |
-| **Illegal IP Address** | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior |
-| **Master-Slave Authentication Error** | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication |
-| **MMS Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **No Traffic Detected on Sensor Interface** | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic |
-| **OPC UA Server Raised an Event That Requires User's Attention** | An OPC UA server sent an event notification to a client. This type of event requires user attention | Major | Operational Issues |
-| **OPC UA Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Outstation Restarted** | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands |
-| **Outstation Restarts Frequently** | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands |
-| **Outstation's Configuration Changed** | A configuration change was detected on a source device. | Major | Configuration Changes |
-| **Outstation's Corrupted Configuration Detected** | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes |
-| **Profinet DCP Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Profinet Device Factory Reset** | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands |
-| * **RPC Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Sampled Values Message Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
-| **Slave Device Unrecoverable Failure** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures |
-| **Suspicion of Hardware Problems in Outstation** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues |
-| **Suspicion of Unresponsive MODBUS Device** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive |
-| **Traffic Detected on Sensor Interface** | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic |
-
-\* The alert is disabled by default, but can be enabled again. To enable the alert, navigate to the Support page, find the alert and select **Enable**. You need administrative level permissions to access the Support page.
+| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **An S7 Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **BACNet Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Bad MMS Device State** | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may be configured incorrectly, partially operational, or not operational at all. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Change of Device Configuration** | A configuration change was detected on a source device. | Minor | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Continuous Event Buffer Overflow at Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O <br> - T0839: Module Firmware |
+| **Controller Reset** | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **Controller Stop** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **Device Failed to Receive a Dynamic IP Address** | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident. | Major | Command Failures | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Device is Suspected to be Disconnected (Unresponsive)** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **EtherNet/IP CIP Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **EtherNet/IP Encapsulation Protocol Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0801: Monitor Process State |
+| **Event Buffer Overflow in Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0839: Module Firmware |
+| **Expected Backup Operation Did Not Occur** | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Major | Backup | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0809: Data Destruction |
+| **GE SRTP Command Failure** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **GE SRTP Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **GOOSE Control Block Requires Further Configuration** | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0803: Block Command Message <br> - T0821: Modify Controller Tasking |
+| **GOOSE Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Honeywell Controller Unexpected Status** | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues | **Tactics:** <br> - Evasion <br> - Execution <br><br> **Techniques:** <br> - T0858: Change Operating Mode |
+| **HTTP Client Error [*](#alerts-disabled-by-default)** | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Illegal IP Address** | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0836: Modify Parameter |
+| **Master-Slave Authentication Error** | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0859: Valid Accounts |
+| **MMS Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **No Traffic Detected on Sensor Interface** | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **OPC UA Server Raised an Event That Requires User's Attention** | An OPC UA server sent an event notification to a client. This type of event requires user attention. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0838: Modify Alarm Settings |
+| **OPC UA Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Outstation Restarted** | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0816: Device Restart/Shutdown |
+| **Outstation Restarts Frequently** | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0816: Device Restart/Shutdown |
+| **Outstation's Configuration Changed** | A configuration change was detected on a source device. | Major | Configuration Changes | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Outstation's Corrupted Configuration Detected** | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0809: Data Destruction |
+| **Profinet DCP Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Profinet Device Factory Reset** | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **RPC Operation Failed [*](#alerts-disabled-by-default)** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Sampled Values Message Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Slave Device Unrecoverable Failure** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Suspicion of Hardware Problems in Outstation** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0881: Service Stop |
+| **Suspicion of Unresponsive MODBUS Device** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **Traffic Detected on Sensor Interface** | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
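The MITRE ATT&CK for ICS mappings added in the tables above lend themselves to simple aggregation when triaging exported alerts. The sketch below assumes a hypothetical JSON export with `title`, `severity`, and `tactics` fields (illustrative only, not the product's actual export schema) and counts alerts per tactic with `jq`:

```shell
# Hypothetical alert export; field names are illustrative, not Defender for IoT's schema
cat > alerts.json <<'EOF'
[
  {"title": "Controller Stop", "severity": "Warning",
   "tactics": ["Lateral Movement", "Inhibit Response Function"]},
  {"title": "Outstation Restarted", "severity": "Warning",
   "tactics": ["Inhibit Response Function"]}
]
EOF

# Flatten all tactic entries, group identical values, and count each group
jq -r '[.[].tactics[]] | group_by(.) | map("\(.[0]): \(length)") | .[]' alerts.json
# prints:
# Inhibit Response Function: 2
# Lateral Movement: 1
```

The same grouping approach works for the `techniques` values if your export includes them.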
## Next steps
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
Title: Manage individual sensors description: Learn how to manage individual sensors, including managing activation files, certificates, performing backups, and updating a standalone sensor. Previously updated : 06/02/2022 Last updated : 11/07/2022
This article describes how to manage individual sensors, such as managing activation files and certificates, performing backups, and updating a standalone sensor.
You can also perform some management tasks for multiple sensors simultaneously from the Azure portal or an on-premises management console. For more information, see [Next steps](#next-steps).
+## View overall sensor status
+
+When you sign in to your sensor, the first page shown is the **Overview** page.
+
+For example:
+The **Overview** page shows the following widgets:
+
+| Name | Description |
+|--|--|
+| **General Settings** | Displays a list of the sensor's basic configuration settings |
+| **Traffic Monitoring** | Displays a graph detailing traffic in the sensor. The graph shows traffic as units of Mbps per hour on the day of viewing. |
+| **Top 5 OT Protocols** | Displays a bar graph that details the top five most used OT protocols. The bar graph also provides the number of devices that are using each of those protocols. |
+| **Traffic By Port** | Displays a pie chart showing the types of ports in your network, with the amount of traffic detected in each type of port. |
+| **Top open alerts** | Displays a table listing any currently open alerts with high severity levels, including critical details about each alert. |
+
+Select the link in each widget to drill down for more information in your sensor.
+ ## Manage sensor activation files

Your sensor was onboarded with Microsoft Defender for IoT from the Azure portal. Each sensor was onboarded as either a locally connected sensor or a cloud-connected sensor.
You'll receive an error message if the activation file couldn't be uploaded. The
Following sensor installation, a local self-signed certificate is generated and used to access the sensor web application. When logging in to the sensor for the first time, Administrator users are prompted to provide an SSL/TLS certificate.
-Sensor Administrators may be required to update certificates that were uploaded after initial login. This may happen for example if a certificate expired.
+Sensor Administrators may be required to update certificates that were uploaded after initial login. This may happen, for example, if a certificate expired.
**To update a certificate:**
If the upload fails, contact your security or IT administrator, or review the in
**To change the certificate validation setting:**
-1. Enable or disable the **Enable Certificate Validation** toggle. If the option is enabled and validation fails, communication between relevant components is halted and a validation error is presented in the console. If disabled, certificate validation is not carried out. See [About certificate validation](how-to-deploy-certificates.md#about-certificate-validation) for more information.
+1. Enable or disable the **Enable Certificate Validation** toggle. If the option is enabled and validation fails, communication between relevant components is halted, and a validation error is presented in the console. If disabled, certificate validation is not carried out. See [About certificate validation](how-to-deploy-certificates.md#about-certificate-validation) for more information.
1. Select **Save**.
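Before uploading a replacement SSL/TLS certificate, it can help to sanity-check it locally so validation doesn't fail in the console. This is a minimal sketch using a throwaway self-signed certificate as a stand-in for your real one; the subject CN is a hypothetical example:

```shell
# Generate a throwaway self-signed certificate (stand-in for the real sensor certificate)
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 90 -nodes -subj "/CN=sensor.example.com" 2>/dev/null

# Exit code 0 means the certificate is still valid for at least the next 24 hours
openssl x509 -in cert.pem -checkend 86400 >/dev/null && echo "certificate not expired"

# Review the subject and expiry date before uploading via the sensor console
openssl x509 -in cert.pem -noout -subject -enddate
```

For a CA-signed certificate, you would also verify the chain (for example, `openssl verify -CAfile chain.pem cert.pem`) before enabling certificate validation on the sensor.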
-For more information about first-time certificate upload see,
+For more information about first-time certificate upload, see
[First-time sign-in and activation checklist](how-to-activate-and-set-up-your-sensor.md#first-time-sign-in-and-activation-checklist)

## Connect a sensor to the management console
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 08/08/2022 Last updated : 11/03/2022 # What's new in Microsoft Defender for IoT?
For more information, see the [Microsoft Security Development Lifecycle practice
Our alert reference article now includes the following details for each alert: -- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity or configure SIEM rules to generate incidents based on specific activities
+- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity or configure SIEM rules to generate incidents based on specific activities.
+
+- **MITRE ATT&CK for ICS tactics and techniques**, which describe the actions an adversary may take while operating within the network. Use the tactics and techniques listed for each alert to learn about the network areas that might be at risk and collaborate more efficiently across your security and OT teams as you secure those assets.
- **Alert threshold**, for relevant alerts. Thresholds indicate the specific point at which an alert is triggered. Modify alert thresholds as needed from the sensor's **Support** page.
Defender for IoT now provides vulnerability data in the Azure portal for detecte
Access vulnerability data in the Azure portal from the following locations: -- On a device details page select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.
+- On a device details page, select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.
For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
Use the following table to understand the mapping between legacy hardware profil
|Legacy name |New name | Description | ||||
-|**Corporate** | **C5600** | A *Corporate* environment, with: <br>16 Cores<br>32 GB RAM<br>5.6 TB disk storage |
-|**Enterprise** | **E1800** | An *Enterprise* environment, with: <br>8 Cores<br>32 GB RAM<br>1.8 TB disk storage |
-|**SMB** | **L500** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>500 GB disk storage |
-|**Office** | **L100** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>100 GB disk storage |
-|**Rugged** | **L64** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>64 GB disk storage |
+|**Corporate** | **C5600** | A *Corporate* environment, with: <br>16 Cores<br>32-GB RAM<br>5.6-TB disk storage |
+|**Enterprise** | **E1800** | An *Enterprise* environment, with: <br>8 Cores<br>32-GB RAM<br>1.8-TB disk storage |
+|**SMB** | **L500** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>500-GB disk storage |
+|**Office** | **L100** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>100-GB disk storage |
+|**Rugged** | **L64** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>64-GB disk storage |
-We also now support new enterprise hardware profiles, for sensors supporting both 500 GB and 1 TB disk sizes.
+We also now support new enterprise hardware profiles for sensors that support both 500-GB and 1-TB disk sizes.
For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
defender-for-iot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/workbooks.md
Title: Use Azure Monitor workbooks in Microsoft Defender for IoT
+ Title: Visualize Microsoft Defender for IoT data with Azure Monitor workbooks
description: Learn how to view and create Azure Monitor workbooks for Defender for IoT data. Last updated 09/04/2022
-# Use Azure Monitor workbooks in Microsoft Defender for IoT
-
-> [!IMPORTANT]
-> The **Azure Monitor workbooks** are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Visualize Microsoft Defender for IoT data with Azure Monitor workbooks
Azure Monitor workbooks provide graphs, charts, and dashboards that visually reflect data stored in your Azure Resource Graph subscriptions and are available directly in Microsoft Defender for IoT.
deployment-environments Configure Catalog Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-catalog-item.md
Title: Configure a Catalog Item in Azure Deployment Environments
-description: This article helps you configure a Catalog Item in GitHub repo or Azure DevOps repo.
+ Title: Add and configure a catalog item
+
+description: Learn how to add and configure a catalog item in your repository to use in your Azure Deployment Environments Preview dev center projects.
+ Last updated 10/12/2022 -+
-# Configure a Catalog Item in GitHub repo or Azure DevOps repo
-In Azure Deployment Environments Preview service, you can use a [Catalog](concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of predefined [*infrastructure as code (IaC)*](/devops/deliver/what-is-infrastructure-as-code) templates called [Catalog Items](concept-environments-key-concepts.md#catalog-items). A catalog item is a combination of an *infrastructure as code (IaC)* template (for example, [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md)) and a manifest (*manifest.yml*) file.
+# Add and configure a catalog item
+
+In Azure Deployment Environments Preview, you can use a [catalog](concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of predefined [infrastructure as code (IaC)](/devops/deliver/what-is-infrastructure-as-code) templates called [*catalog items*](concept-environments-key-concepts.md#catalog-items).
+
+A catalog item consists of at least two files:
+
+- An [Azure Resource Manager template (ARM template)](../azure-resource-manager/templates/overview.md) in JSON file format. For example, *azuredeploy.json*.
+- A manifest YAML file (*manifest.yml*).
>[!NOTE]
-> Azure Deployment Environments Preview currently only supports Azure Resource Manager (ARM) templates.
+> Azure Deployment Environments Preview currently supports only ARM templates.
-The IaC template will contain the environment definition and the manifest file will be used to provide metadata about the template. The catalog items that you provide in the catalog will be used by your development teams to deploy environments in Azure.
+The IaC template contains the environment definition (template), and the manifest file provides metadata about the template. Your development teams use the catalog items that you provide in the catalog to deploy environments in Azure.
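As an illustration, a minimal ARM template for a catalog item might deploy a single resource. The following sketch is not from the sample catalog; the resource type, names, and API version shown are assumptions for illustration only:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-09-01",
      "name": "[parameters('storageName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

The manifest file's `templatePath` field would then point at this JSON file by its relative path in the repository.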
-We offer an example [Sample Catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can attach as-is, or you can fork and customize the catalog items. You can attach your private repo to use your own catalog items.
+We offer a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the catalog items in the sample catalog.
-After you [attach a catalog](how-to-configure-catalog.md) to your dev center, the service will scan through the specified folder path to identify folders containing an ARM template and the associated manifest file. The specified folder path should be a folder that contains sub-folders with the catalog item files.
+After you [add a catalog](how-to-configure-catalog.md) to your dev center, the service scans the specified folder path to identify folders that contain an ARM template and an associated manifest file. The specified folder path should be a folder that contains subfolders that hold the catalog item files.
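For example, a repository layout like the following satisfies this requirement (the folder and file names here are illustrative; the configured folder path points at the parent folder, not at an individual catalog item folder):

```text
Environments/                <- folder path configured on the catalog
├── WebApp/
│   ├── azuredeploy.json
│   └── manifest.yaml
└── FunctionApp/
    ├── azuredeploy.json
    └── manifest.yaml
```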
-In this article, you'll learn how to:
+In this article, you learn how to:
-* Add a new catalog item
-* Update a catalog item
-* Delete a catalog item
+> [!div class="checklist"]
+>
+> - Add a catalog item
+> - Update a catalog item
+> - Delete a catalog item
> [!IMPORTANT]
-> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise are not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+<a name="add-a-new-catalog-item"></a>
+
+## Add a catalog item
-## Add a new catalog item
+To add a catalog item:
-Provide a new catalog item to your development team as follows:
+1. In your repository, create a subfolder in the repository folder path.
-1. Create a subfolder in the specified folder path, and then add a *ARM_template.json* and the associated *manifest.yaml* file.
- :::image type="content" source="../deployment-environments/media/configure-catalog-item/create-subfolder-in-path.png" alt-text="Screenshot of subfolder in folder path containing ARM template and manifest file.":::
+1. Add two files to the new repository subfolder:
- 1. **Add ARM template**
-
- To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates).
-
- [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md) help you define the infrastructure and configuration of your Azure solution and repeatedly deploy it in a consistent state.
-
- To learn about how to get started with ARM templates, see the following:
-
- - [Understand the structure and syntax of Azure Resource Manager Templates](../azure-resource-manager/templates/syntax.md) describes the structure of an Azure Resource Manager template and the properties that are available in the different sections of a template.
- - [Use linked templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell#use-relative-path-for-linked-templates) describes how to use linked templates with the new ARM `relativePath` property to easily modularize your templates and share core components between catalog items.
+ - An ARM template as a JSON file.
- 1. **Add manifest file**
-
- The *manifest.yaml* file contains metadata related to the ARM template.
-
- The following is a sample *manifest.yaml* file.
-
- ```
- name: WebApp
- version: 1.0.0
- summary: Azure Web App Environment
- description: Deploys an Azure Web App without a data store
- runner: ARM
- templatePath: azuredeploy.json
- ```
-
- >[!NOTE]
- > `version` is an optional field, and will later be used to support multiple versions of catalog items.
+ To implement IaC for your Azure solutions, use ARM templates. [ARM templates](../azure-resource-manager/templates/overview.md) help you define the infrastructure and configuration of your Azure solution and repeatedly deploy it in a consistent state.
-1. On the **Catalogs** page of the dev center, select the specific repo, and then select **Sync**.
+ To learn how to get started with ARM templates, see the following articles:
- :::image type="content" source="../deployment-environments/media/configure-catalog-item/sync-catalog-items.png" alt-text="Screenshot showing how to sync the catalog." :::
+ - [Understand the structure and syntax of ARM templates](../azure-resource-manager/templates/syntax.md): Describes the structure of an ARM template and the properties that are available in the different sections of a template.
+ - [Use linked templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell#use-relative-path-for-linked-templates): Describes how to use linked templates with the new ARM template `relativePath` property to easily modularize your templates and share core components between catalog items.
-1. The service scans through the repository to discover any new catalog items and makes them available to all the projects.
+ - A manifest as a YAML file.
-## Update an existing catalog item
+ The *manifest.yaml* file contains metadata related to the ARM template.
-To modify the configuration of Azure resources in an existing catalog item, directly update the associated *ARM_Template.json* file in the repository. The change is immediately reflected when you create a new environment using the specific catalog item, and when you redeploy an environment associated with that catalog item.
+ The following script is an example of the contents of a *manifest.yaml* file:
-To update any metadata related to the ARM template, modify the *manifest.yaml* and [update the catalog](how-to-configure-catalog.md).
+ ```yaml
+ name: WebApp
+ version: 1.0.0
+ summary: Azure Web App Environment
+ description: Deploys a web app in Azure without a datastore
+ runner: ARM
+ templatePath: azuredeploy.json
+ ```
+
+ > [!NOTE]
+ > The `version` field is optional. Later, the field will be used to support multiple versions of catalog items.
+
+ :::image type="content" source="../deployment-environments/media/configure-catalog-item/create-subfolder-in-path.png" alt-text="Screenshot that shows a folder path with a subfolder that contains an ARM template and a manifest file.":::
+
+1. In your dev center, go to **Catalogs**, select the repository, and then select **Sync**.
+
+ :::image type="content" source="../deployment-environments/media/configure-catalog-item/sync-catalog-items.png" alt-text="Screenshot that shows how to sync the catalog." :::
+
+The service scans the repository to find new catalog items. After you sync the repository, new catalog items are available to all projects in the dev center.
+
+## Update a catalog item
+
+To modify the configuration of Azure resources in an existing catalog item, update the associated ARM template JSON file in the repository. The change is immediately reflected when you create a new environment by using the specific catalog item. The update also is applied when you redeploy an environment that's associated with that catalog item.
+
+To update any metadata related to the ARM template, modify *manifest.yaml*, and then [update the catalog](how-to-configure-catalog.md#update-a-catalog).
## Delete a catalog item
-To delete an existing Catalog Item, delete the subfolder containing the ARM template and the associated manifest, and then [update the catalog](how-to-configure-catalog.md).
-Once you delete a catalog item, development teams will no longer be able to use the specific catalog item to deploy a new environment. You'll need to update the catalog item reference for any existing environments created using the deleted catalog item. Redeploying the environment without updating the reference will result in a deployment failure.
+To delete an existing catalog item, in the repository, delete the subfolder that contains the ARM template JSON file and the associated manifest YAML file. Then, [update the catalog](how-to-configure-catalog.md#update-a-catalog).
+
+After you delete a catalog item, development teams can no longer use the specific catalog item to deploy a new environment. Update the catalog item reference for any existing environments that were created by using the deleted catalog item. If the reference isn't updated and the environment is redeployed, the deployment fails.
## Next steps
-* [Create and configure projects](./quickstart-create-and-configure-projects.md)
-* [Create and configure environment types](quickstart-create-access-environments.md).
+- Learn how to [create and configure a project](./quickstart-create-and-configure-projects.md).
+- Learn how to [create and configure an environment type](quickstart-create-access-environments.md).
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
Title: Configure a catalog
+ Title: Add and configure a catalog
-description: Learn how to configure a catalog in your dev center to provide curated infra-as-code templates to your development teams to deploy self-serve environments.
+description: Learn how to add and configure a catalog in your Azure Deployment Environments Preview dev center to provide deployment templates for your development teams.
- + Last updated 10/12/2022
-# Configure a catalog to provide curated infra-as-code templates
+# Add and configure a catalog
+
+Learn how to add and configure a [catalog](./concept-environments-key-concepts.md#catalogs) in your Azure Deployment Environments Preview dev center. You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [*catalog items*](./concept-environments-key-concepts.md#catalog-items).
-Learn how to configure a dev center [catalog](./concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of 'infra-as-code' templates called [catalog items](./concept-environments-key-concepts.md#catalog-items). To learn about configuring catalog items, see [How to configure a catalog item](./configure-catalog-item.md).
+For more information about catalog items, see [Add and configure a catalog item](./configure-catalog-item.md).
-The catalog could be a repository hosted in [GitHub](https://github.com) or in [Azure DevOps Services](https://dev.azure.com/).
+A catalog is a repository that's hosted in [GitHub](https://github.com) or [Azure DevOps](https://dev.azure.com/).
-* To learn how to host a repository in GitHub, see [Get started with GitHub](https://docs.github.com/get-started).
-* To learn how to host a Git repository in an Azure DevOps Services project, see [Azure Repos](https://azure.microsoft.com/services/devops/repos/).
+- To learn how to host a repository in GitHub, see [Get started with GitHub](https://docs.github.com/get-started).
+- To learn how to host a Git repository in an Azure DevOps project, see [Azure Repos](https://azure.microsoft.com/services/devops/repos/).
-We offer an example [Sample Catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can attach as-is, or you can fork and customize the catalog items. You can attach your private repo to use your own catalog items.
+We offer a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the catalog items in the sample catalog.
-In this article, you'll learn how to:
+In this article, you learn how to:
-* [Add a new catalog](#add-a-new-catalog)
-* [Update a catalog](#update-a-catalog)
-* [Delete a catalog](#delete-a-catalog)
+> [!div class="checklist"]
+>
+> - Add a catalog
+> - Update a catalog
+> - Delete a catalog
-## Add a new catalog
+> [!IMPORTANT]
+> Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise are not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-To add a new catalog, you'll need to:
+## Add a catalog
+To add a catalog, you complete these tasks:
+
+- Get the clone URL for your repository.
- Create a personal access token.
+- Store the personal access token as a key vault secret in Azure Key Vault.
+- Add your repository as a catalog.
### Get the clone URL for your repository
-**Get the clone URL of your GitHub repo**
+You can choose from two types of repositories:
+
+- A GitHub repository
+- An Azure DevOps repository
+
+#### Get the clone URL of a GitHub repository
1. Go to the home page of the GitHub repository that contains the template definitions. 1. [Get the clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-a-github-repo). 1. Copy and save the URL. You'll use it later.
-**Get the clone URL of your Azure DevOps Services Git repo**
+#### Get the clone URL of an Azure DevOps repository
1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project. 1. [Get the clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-an-azure-repos-git-repo). 1. Copy and save the URL. You'll use it later.
-### Create a personal access token and store it as a Key Vault secret
+### Create a personal access token
+
+Next, create a personal access token. Depending on the type of repository you use, create a personal access token either in GitHub or in Azure DevOps.
#### Create a personal access token in GitHub 1. Go to the home page of the GitHub repository that contains the template definitions. 1. In the upper-right corner of GitHub, select the profile image, and then select **Settings**.
-1. In the left sidebar, select **<> Developer settings**.
+1. In the left sidebar, select **Developer settings**.
1. In the left sidebar, select **Personal access tokens**. 1. Select **Generate new token**.
-1. On the **New personal access token** page, add a description for your token in the **Note** field.
-1. Select an expiration for your token from the **Expiration** dropdown.
-1. For a private repository, select the **repo** scope under **Select scopes**.
-1. Select **Generate Token**.
+1. In **New personal access token**, in **Note**, enter a description for your token.
+1. In the **Expiration** dropdown, select an expiration for your token.
+1. For a private repository, under **Select scopes**, select the **repo** scope.
+1. Select **Generate token**.
1. Save the generated token. You'll use the token later.
-#### Create a personal access token in Azure DevOps Services
+#### Create a personal access token in Azure DevOps
1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
-1. [Create a Personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
+1. Create a [personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
1. Save the generated token. You'll use the token later.
-#### Store the personal access token as a Key Vault secret
+### Store the personal access token as a key vault secret
+
+To store the personal access token you generated as a [key vault secret](../key-vault/secrets/about-secrets.md) and copy the secret identifier:
-To store the personal access token(PAT) that you generated as a [Key Vault secret](../key-vault/secrets/about-secrets.md) and copy the secret identifier:
-1. [Create a vault](../key-vault/general/quick-create-portal.md#create-a-vault)
-1. [Add](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault) the personal access token (PAT) as a secret to the Key Vault.
-1. [Open](../key-vault/secrets/quick-create-portal.md#retrieve-a-secret-from-key-vault) the secret and copy the secret identifier.
+1. Create a [key vault](../key-vault/general/quick-create-portal.md#create-a-vault).
+1. Add the personal access token as a [secret to the key vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault).
+1. Open the secret and [copy the secret identifier](../key-vault/secrets/quick-create-portal.md#retrieve-a-secret-from-key-vault).
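If you prefer scripting these steps, the Azure CLI equivalents look roughly like the following sketch. The vault name, resource group, location, and secret name are placeholders, and `$PAT` is assumed to hold the token you generated:

```shell
# Create a key vault (names and location are placeholders)
az keyvault create --name contoso-catalog-kv --resource-group contoso-rg --location eastus

# Store the personal access token as a secret
az keyvault secret set --vault-name contoso-catalog-kv --name repo-pat --value "$PAT"

# Retrieve the secret identifier to use when you add the catalog
az keyvault secret show --vault-name contoso-catalog-kv --name repo-pat --query id --output tsv
```

The final command prints the secret identifier that you paste into the **Secret identifier** field when you add the catalog.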
-### Connect your repository as a catalog
+### Add your repository as a catalog
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Go to your dev center.
-1. Ensure that the [identity](./how-to-configure-managed-identity.md) attached to the dev center has [access to the Key Vault's secret](./how-to-configure-managed-identity.md#assign-the-managed-identity-access-to-the-key-vault-secret) where the PAT is stored.
-1. Select **Catalogs** from the left pane.
-1. Select **+ Add** from the command bar.
-1. On the **Add catalog** form, enter the following details, and then select **Add**.
+1. In the [Azure portal](https://portal.azure.com/), go to your dev center.
+1. Ensure that the [identity](./how-to-configure-managed-identity.md) that's attached to the dev center has [access to the key vault secret](./how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) where your personal access token is stored.
+1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.
+1. In **Add catalog**, enter the following information, and then select **Add**:
| Field | Value | | -- | -- | | **Name** | Enter a name for the catalog. |
- | **Git clone URI** | Enter the [Git HTTPS clone URL](#get-the-clone-url-for-your-repository) for GitHub or Azure DevOps Services repo, that you copied earlier.|
- | **Branch** | Enter the repository branch you'd like to connect to.|
- | **Folder Path** | Enter the folder path relative to the clone URI that contains sub-folders with your catalog items. This folder path should be the path to the folder containing the sub-folders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.|
- | **Secret Identifier**| Enter the [secret identifier](#create-a-personal-access-token-and-store-it-as-a-key-vault-secret) which contains your Personal Access Token(PAT) for the repository.|
+ | **Git clone URI** | Enter or paste the [clone URL](#get-the-clone-url-for-your-repository) for either your GitHub repository or your Azure DevOps repository.|
+ | **Branch** | Enter the repository branch to connect to.|
+ | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.|
+ | **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.|
+
+ :::image type="content" source="media/how-to-configure-catalog/catalog-item-add.png" alt-text="Screenshot that shows how to add a catalog to a dev center.":::
-1. Verify that your catalog is listed on the **Catalogs** page. If the connection is successful, the **Status** will show as **Connected**.
+1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**.
## Update a catalog If you update the ARM template contents or definition in the attached repository, you can provide the latest set of catalog items to your development teams by syncing the catalog.
-To sync to the updated catalog:
+To sync an updated catalog:
-1. Select **Catalogs** from the left pane.
-1. Select the specific catalog and select **Sync**. The service scans through the repository and makes the latest list of catalog items available to all the associated projects in the dev center.
+1. In the left menu for your dev center, under **Environment configuration**, select **Catalogs**.
+1. Select the specific catalog, and then select **Sync**. The service scans through the repository and makes the latest list of catalog items available to all the associated projects in the dev center.
## Delete a catalog
-You can delete a catalog to remove it from the dev center. Any templates contained in a deleted catalog will not be available when deploying new environments. You'll need to update the catalog item reference for any existing environments created using the catalog items in the deleted catalog. If the reference is not updated and the environment is redeployed, it'll result in deployment failure.
+You can delete a catalog to remove it from the dev center. Any templates in a deleted catalog won't be available to development teams when they deploy new environments. Update the catalog item reference for any existing environments that were created by using the catalog items in the deleted catalog. If the reference isn't updated and the environment is redeployed, the deployment fails.
To delete a catalog:
-1. Select **Catalogs** from the left pane.
-1. Select the specific catalog and select **Delete**.
-1. Confirm to delete the catalog.
+1. In the left menu for your dev center, under **Environment configuration**, select **Catalogs**.
+1. Select the specific catalog, and then select **Delete**.
+1. In the **Delete catalog** dialog, select **Continue** to delete the catalog.
## Catalog sync errors
-When adding or syncing a catalog, you may encounter a sync error. This indicates that some or all of the catalog items were found to have errors. You can use CLI or REST API to *GET* the catalog, the response to which will show you the list of invalid catalog items which failed due to schema, reference, or validation errors and ignored catalog items which were detected to be duplicates.
+When you add or sync a catalog, you might encounter a sync error. A sync error indicates that some or all the catalog items have errors. Use the Azure CLI or the REST API to GET the catalog. The GET response shows you the type of errors:
+
+- Ignored catalog items that were detected to be duplicates
+- Invalid catalog items that failed due to schema, reference, or validation errors
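As a sketch, a REST request to retrieve a catalog looks roughly like the following. The provider path and `api-version` shown here are assumptions, so check the current Deployment Environments REST API reference before using them:

```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevCenter/devcenters/{devCenterName}/catalogs/{catalogName}?api-version=2022-11-11-preview
Authorization: Bearer {token}
```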
+
+### Resolve ignored catalog item errors
-### Handling ignored catalog items
+An ignored catalog item error occurs if you add two or more catalog items that have the same name. You can resolve this issue by renaming catalog items so that each catalog item has a unique name within the catalog.
-Ignored catalog items are caused by adding two or more catalog items with the same name. You can resolve this issue by renaming catalog items so that each item has a unique name within the catalog.
+### Resolve invalid catalog item errors
-### Handling invalid catalog items
+An invalid catalog item error might occur for a variety of reasons:
-Invalid catalog items can be caused due to a variety of reasons. Potential issues are:
+- **Manifest schema errors**. Ensure that your catalog item manifest matches the [required schema](./configure-catalog-item.md#add-a-catalog-item).
- - **Manifest schema errors**
- - Ensure that your catalog item manifest matches the required schema as described [here](./configure-catalog-item.md#add-a-new-catalog-item).
+- **Validation errors**. Check the following items to resolve validation errors:
- - **Validation errors**
- - Ensure that the manifest's engine type is correctly configured as "ARM".
- - Ensure that the catalog item name is between 3 and 63 characters.
- - Ensure that the catalog item name includes only URL-valid characters. This includes alphanumeric characters as well as these symbols: *~!,.';:=-\_+)(\*&$@*
+ - Ensure that the manifest's engine type is correctly configured as `ARM`.
+ - Ensure that the catalog item name is between 3 and 63 characters.
+ - Ensure that the catalog item name includes only characters that are valid for a URL: alphanumeric characters and the symbols `~` `!` `,` `.` `'` `;` `:` `=` `-` `_` `+` `)` `(` `*` `&` `$` `@`.
- - **Reference errors**
- - Ensure that the template path referenced by the manifest is a valid relative path to a file within the repository.
+- **Reference errors**. Ensure that the template path that the manifest references is a valid relative path to a file in the repository.
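To make the validation rules above concrete, here's a small, hypothetical Python check you could run locally before syncing. It is not part of the service; the character set and length limits are taken directly from the list above:

```python
import re

# Allowed: alphanumeric plus the URL-valid symbols listed above, 3-63 characters total
VALID_NAME = re.compile(r"^[A-Za-z0-9~!,.';:=\-_+)(*&$@]{3,63}$")

def validate_manifest(manifest: dict) -> list:
    """Return a list of error strings for a catalog item manifest (empty if valid)."""
    errors = []
    if not VALID_NAME.match(manifest.get("name", "")):
        errors.append("name must be 3-63 URL-valid characters")
    if manifest.get("runner") != "ARM":
        errors.append("engine type (runner) must be 'ARM'")
    if not manifest.get("templatePath"):
        errors.append("templatePath must point to a file in the repository")
    return errors

print(validate_manifest({"name": "WebApp", "runner": "ARM", "templatePath": "azuredeploy.json"}))
```

A valid manifest produces an empty error list; fixing every reported error before you sync avoids the invalid catalog item state.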
## Next steps
-* [Create and Configure Projects](./quickstart-create-and-configure-projects.md).
+- Learn how to [create and configure a project](./quickstart-create-and-configure-projects.md).
+- Learn how to [create and configure a project environment type](how-to-configure-project-environment-types.md).
deployment-environments How To Configure Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md
Title: Configure a managed identity
-description: Learn how to configure a managed identity that'll be used to deploy environments.
+description: Learn how to configure a managed identity that will be used to deploy environments in your Azure Deployment Environments Preview dev center.
- + Last updated 10/12/2022 # Configure a managed identity
- A [Managed Identity](../active-directory/managed-identities-azure-resources/overview.md) is used to provide elevation-of-privilege capabilities and securely authenticate to any service that supports Azure Active Directory (Azure AD) authentication. Azure Deployment Environments Preview service uses identities to provide self-serve capabilities to your development teams without granting them access to the target subscriptions in which the Azure resources are created.
+A [managed identity](../active-directory/managed-identities-azure-resources/overview.md) provides elevated-privilege capabilities and secure authentication to any service that supports Azure Active Directory (Azure AD) authentication. Azure Deployment Environments Preview uses identities to give development teams self-serve deployment capabilities without giving them access to the subscriptions in which Azure resources are created.
-The managed identity attached to the dev center should be [granted 'Owner' access to the deployment subscriptions](how-to-configure-managed-identity.md) configured per environment type. When an environment deployment is requested, the service grants appropriate permissions to the deployment identities configured per environment type to perform deployments on behalf of the user.
-The managed identity attached to a dev center will also be used to connect to a [catalog](how-to-configure-catalog.md) and access the [catalog items](configure-catalog-item.md) made available through the catalog.
+The managed identity that's attached to a dev center should be [assigned the Owner role in the deployment subscriptions](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) for each environment type. When an environment deployment is requested, the service grants appropriate permissions to the deployment identities that are set up for the environment type to deploy on behalf of the user.
+The managed identity that's attached to a dev center is also used to connect to a [catalog](how-to-configure-catalog.md) and access the [catalog items](configure-catalog-item.md) made available through the catalog.
-In this article, you'll learn about:
+In this article, you learn how to:
-* Types of managed identities
-* Assigning a subscription role assignment to the managed identity
-* Assigning the identity access to the Key Vault secret
+> [!div class="checklist"]
+>
+> - Add a managed identity to your dev center
+> - Assign a subscription role assignment to a managed identity
+> - Grant access to a key vault secret for a managed identity
> [!IMPORTANT]
-> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise are not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Types of managed identities
+## Add a managed identity
-In Azure Deployment Environments, you can use two types of managed identities:
+In Azure Deployment Environments, you can choose between two types of managed identities:
-* A **system-assigned identity** is tied to either your dev center or the project environment type and is deleted when the attached resource is deleted. A dev center or a project environment type can have only one system-assigned identity.
-* A **user-assigned identity** is a standalone Azure resource that can be assigned to your dev center or to a project environment type. For Azure Deployment Environments Preview, a dev center or a project environment type can have only one user-assigned identity.
+- **System-assigned identity**: A system-assigned identity is tied either to your dev center or to the project environment type. A system-assigned identity is deleted when the attached resource is deleted. A dev center or a project environment type can have only one system-assigned identity.
+- **User-assigned identity**: A user-assigned identity is a standalone Azure resource that you can assign to your dev center or to a project environment type. For Azure Deployment Environments Preview, a dev center or a project environment type can have only one user-assigned identity.
> [!NOTE]
-> If you add both a system-assigned identity and a user-assigned identity, only the user-assigned identity will be used by the service.
+> In Azure Deployment Environments Preview, if you add both a system-assigned identity and a user-assigned identity, only the user-assigned identity is used.
-### Configure a system-assigned managed identity for a dev center
+### Add a system-assigned managed identity to a dev center
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Access Azure Deployment Environments.
-1. Select your dev center from the list.
-1. Select **Identity** from the left pane.
-1. On the **System assigned** tab, set the **Status** to **On**, select **Save** and then confirm enabling a System assigned managed identity.
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to Azure Deployment Environments.
+1. In **Dev centers**, select your dev center.
+1. In the left menu under **Settings**, select **Identity**.
+1. Under **System assigned**, set **Status** to **On**.
+1. Select **Save**.
+1. In the **Enable system assigned managed identity** dialog, select **Yes**.
+### Add a user-assigned managed identity to a dev center
-### Configure a user-assigned managed identity for a dev center
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to Azure Deployment Environments.
+1. In **Dev centers**, select your dev center.
+1. In the left menu under **Settings**, select **Identity**.
+1. Under **User assigned**, select **Add** to attach an existing identity.
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Access Azure Deployment Environments.
-1. Select your dev center from the list.
-1. Select **Identity** from the left pane.
-1. Switch to the **User assigned** tab and select **+ Add** to attach an existing identity.
+ :::image type="content" source="./media/configure-managed-identity/configure-user-assigned-managed-identity.png" alt-text="Screenshot that shows the user-assigned managed identity.":::
+1. In **Add user assigned managed identity**, enter or select the following information:
-1. On the **Add user assigned managed identity** page, add the following details:
- 1. Select the **Subscription** in which the identity exists.
- 1. Select an existing **User assigned managed identities** from the dropdown.
+ 1. In **Subscription**, select the subscription in which the identity exists.
+ 1. In **User assigned managed identities**, select an existing identity.
1. Select **Add**. ## Assign a subscription role assignment to the managed identity
-The identity attached to the dev center should be granted 'Owner' access to all the deployment subscriptions, as well as 'Reader' access to all subscriptions that a project lives in. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity attached to a project environment type and use it to perform deployment on behalf of the user. This will allow you to empower developers to create environments without granting them access to the subscription and abstract Azure governance related constructs from them.
+The identity that's attached to the dev center should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to a project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
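The role assignments described above can also be scripted. The following is a hedged Azure PowerShell sketch, not part of the original walkthrough; `<object-id>` and `<subscription-id>` are placeholders for the managed identity's principal ID and your deployment subscription ID:

```azurepowershell
# Sketch: assign the dev center's managed identity the Owner role on a
# deployment subscription. Replace the placeholder values with your own.
New-AzRoleAssignment `
    -ObjectId "<object-id>" `
    -RoleDefinitionName "Owner" `
    -Scope "/subscriptions/<subscription-id>"
```

Repeat the command with `-RoleDefinitionName "Reader"` for each subscription that contains a relevant project.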
+
+### Add a role assignment to a system-assigned managed identity
+
+1. In the Azure portal, go to your dev center.
+1. In the left menu under **Settings**, select **Identity**.
+1. Under **System assigned** > **Permissions**, select **Azure role assignments**.
+
+ :::image type="content" source="./media/configure-managed-identity/system-assigned-azure-role-assignment.png" alt-text="Screenshot that shows the Azure role assignment for system-assigned identity.":::
+
+1. In **Azure role assignments**, select **Add role assignment (Preview)**, and then enter or select the following information:
+
+ 1. In **Scope**, select **Subscription**.
+ 1. In **Subscription**, select the subscription in which to use the managed identity.
+ 1. In **Role**, select **Owner**.
+ 1. Select **Save**.
-1. To add a role assignment to the managed identity:
- 1. For a system-assigned identity, select **Azure role assignments**.
-
- :::image type="content" source="./media/configure-managed-identity/system-assigned-azure-role-assignment.png" alt-text="Screenshot showing the Azure role assignment for system assigned identity.":::
+### Add a role assignment to a user-assigned managed identity
- 1. For the user-assigned identity, select the specific identity, and then select the **Azure role assignments** from the left pane.
+1. In the Azure portal, go to your dev center.
+1. In the left menu under **Settings**, select **Identity**.
+1. Under **User assigned**, select the identity.
+1. In the left menu, select **Azure role assignments**.
+1. In **Azure role assignments**, select **Add role assignment (Preview)**, and then enter or select the following information:
-1. On the **Azure role assignments** page, select **Add role assignment (Preview)** and provide the following details:
- 1. For **Scope**, select **SubScription** from the dropdown.
- 1. For **Subscription**, select the target subscription to use from the dropdown.
- 1. For **Role**, select **Owner** from the dropdown.
+ 1. In **Scope**, select **Subscription**.
+ 1. In **Subscription**, select the subscription in which to use the managed identity.
+ 1. In **Role**, select **Owner**.
1. Select **Save**.
-## Assign the managed identity access to the Key Vault secret
+## Grant the managed identity access to the key vault secret
->[!NOTE]
-> Providing the identity with access to the Key Vault secret, which contains the repo's personal access token (PAT), is a prerequisite to adding the repo as a catalog.
+You can set up your key vault to use either a [key vault access policy](../key-vault/general/assign-access-policy.md) or [Azure role-based access control](../key-vault/general/rbac-guide.md).
-To grant the identity access to the secret:
+> [!NOTE]
+> Before you can add a repository as a catalog, you must grant the managed identity access to the key vault secret that contains the repository's personal access token.
+
+### Key vault access policy
+
+If the key vault is configured to use a key vault access policy:
+
+1. In the Azure portal, go to the key vault that contains the secret with the personal access token.
+1. In the left menu, select **Access policies**, and then select **Create**.
+1. In **Create an access policy**, enter or select the following information:
-A Key Vault can be configured to use either the [Vault access policy'](../key-vault/general/assign-access-policy.md) or the [Azure role-based access control](../key-vault/general/rbac-guide.md) permission model.
+ 1. On the **Permissions** tab, under **Secret permissions**, select the **Get** checkbox, and then select **Next**.
+ 1. On the **Principal** tab, select the identity that's attached to the dev center.
+ 1. Select **Review + create**, and then select **Create**.
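The portal steps above can also be scripted. The following is a minimal Azure PowerShell sketch using the `Set-AzKeyVaultAccessPolicy` cmdlet, where `<vault-name>` and `<object-id>` are placeholders for your key vault name and the dev center identity's principal ID:

```azurepowershell
# Sketch: grant the identity Get permission on secrets through a
# key vault access policy. Replace the placeholder values with your own.
Set-AzKeyVaultAccessPolicy `
    -VaultName "<vault-name>" `
    -ObjectId "<object-id>" `
    -PermissionsToSecrets get
```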
-1. If the Key Vault is configured to use the **Vault access policy** permission model,
- 1. Access the [Azure portal](https://portal.azure.com/) and search for the specific Key Vault that contains the PAT secret.
- 1. Select **Access policies** from the left pane.
- 1. Select **+ Create**.
- 1. On the **Create an access policy** page, provide the following details:
- 1. Enable **Get** for **Secret permissions** on the **Permissions** page.
- 1. Select the identity that is attached to the dev center as **Principal**.
- 1. Select **Create** on the **Review + create** page.
+### Azure role-based access control
-1. If the Key Vault is configured to use **Azure role-based access control** permission model,
- 1. Select the specific identity and select the **Azure role assignments** from the left pane.
- 1. Select **Add Role Assignment** and provide the following details:
- 1. Select Key Vault from the **Scope** dropdown.
- 1. Select the **Subscription** in which the Key Vault exists.
- 1. Select the specific Key Vault for **Resource**.
- 1. Select **Key Vault Secrets User** from the dropdown for **Role**.
- 1. Select **Save**.
+If the key vault is configured to use Azure role-based access control:
+
+1. In the Azure portal, go to the key vault that contains the secret with the personal access token.
+1. In the left menu, select **Access control (IAM)**.
+1. Select the identity, and in the left menu, select **Azure role assignments**.
+1. Select **Add role assignment**, and then enter or select the following information:
+
+ 1. In **Scope**, select the key vault.
+ 1. In **Subscription**, select the subscription that contains the key vault.
+ 1. In **Resource**, select the key vault.
+ 1. In **Role**, select **Key Vault Secrets User**.
+ 1. Select **Save**.
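For the Azure role-based access control model, the same assignment can be sketched with Azure PowerShell. All values below are placeholders, and the scope is the key vault's resource ID:

```azurepowershell
# Sketch: assign the Key Vault Secrets User role to the identity,
# scoped to the key vault. Replace the placeholder values with your own.
New-AzRoleAssignment `
    -ObjectId "<object-id>" `
    -RoleDefinitionName "Key Vault Secrets User" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
```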
## Next steps
-* [Configure a Catalog](how-to-configure-catalog.md)
-* [Configure a project environment type](how-to-configure-project-environment-types.md)
+- Learn how to [add and configure a catalog](how-to-configure-catalog.md).
+- Learn how to [create and configure a project environment type](how-to-configure-project-environment-types.md).
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
To create and configure a Dev center in Azure Deployment Environments by using t
## Attach an identity to the dev center
-After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. Learn about the two [types of identities](how-to-configure-managed-identity.md#types-of-managed-identities) you can attach:
+After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity) you can attach:
- System-assigned managed identity - User-assigned managed identity
For more information, see [Configure a managed identity](how-to-configure-manage
To attach a system-assigned managed identity to your dev center:
-1. Complete the steps to create a [system-assigned managed identity](how-to-configure-managed-identity.md#configure-a-system-assigned-managed-identity-for-a-dev-center).
+1. Complete the steps to create a [system-assigned managed identity](how-to-configure-managed-identity.md#add-a-system-assigned-managed-identity-to-a-dev-center).
:::image type="content" source="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity."::: 1. After you create a system-assigned managed identity, assign the Owner role to give the [identity access](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) on the subscriptions that will be used to configure [project environment types](concept-environments-key-concepts.md#project-environment-types).
- Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#assign-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access your repository.
+ Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access your repository.
### Attach an existing user-assigned managed identity To attach a user-assigned managed identity to your dev center:
-1. Complete the steps to attach a [user-assigned managed identity](how-to-configure-managed-identity.md#configure-a-user-assigned-managed-identity-for-a-dev-center).
+1. Complete the steps to attach a [user-assigned managed identity](how-to-configure-managed-identity.md#add-a-user-assigned-managed-identity-to-a-dev-center).
:::image type="content" source="media/quickstart-create-and-configure-devcenter/user-assigned-managed-identity.png" alt-text="Screenshot that shows a user-assigned managed identity."::: 1. After you attach the identity, assign the Owner role to give the [identity access](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) on the subscriptions that will be used to configure [project environment types](how-to-configure-project-environment-types.md). Give the identity Reader access to all subscriptions that a project lives in.
- Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#assign-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access the repository.
+ Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access the repository.
> [!NOTE] > The [identity](concept-environments-key-concepts.md#identities) that's attached to the dev center should be assigned the Owner role for access to the deployment subscription for each environment type.
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
Previously updated : 10/04/2022 Last updated : 11/07/2022
Run the following Azure PowerShell command to turn off this feature:
Unregister-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network ```
-### IDPS Private IP ranges (preview)
-
-In Azure Firewall Premium IDPS, private IP address ranges are used to identify if traffic is inbound, outbound, or internal (East-West). Each signature is applied on specific traffic direction, as indicated in the signature rules table. By default, only ranges defined by IANA RFC 1918 are considered private IP addresses. So traffic sent from a private IP address range to a private IP address range is considered internal. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.
-- ### Structured firewall logs (preview) Today, the following diagnostic log categories are available for Azure Firewall:
Policy analytics starts monitoring the flows in the DNAT, Network, and Applicati
> [!TIP] > Policy Analytics has a dependency on both Log Analytics and Azure Firewall resource-specific logging. Verify that the firewall is configured appropriately, or follow the previous instructions. Be aware that logs take 60 minutes to appear after you enable them for the first time, because logs are aggregated in the backend every hour. You can check that logs are configured appropriately by running a Log Analytics query on the resource-specific tables, such as **AZFWNetworkRuleAggregation**, **AZFWApplicationRuleAggregation**, and **AZFWNatRuleAggregation**.
+### Single click upgrade/downgrade (preview)
+
+You can now easily upgrade your existing firewall from the Standard SKU to the Premium SKU, or downgrade from Premium to Standard. The process is fully automated and has no service impact (zero service downtime).
+
+In the upgrade process, you select the policy to attach to the upgraded Premium SKU: either an existing Premium policy or an existing Standard policy. If you select a Standard policy, the system automatically duplicates it, upgrades the copy to a Premium policy, and then attaches it to the newly created Premium firewall.
+
+This new capability is available through the Azure portal as shown here, and through PowerShell and Terraform by changing the `sku_tier` attribute.
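For the Terraform path mentioned above, here's a minimal sketch assuming the `azurerm` provider's `azurerm_firewall` resource; the resource names and references are placeholders, not part of the original article:

```terraform
# Sketch: upgrading an existing firewall by changing sku_tier
# from "Standard" to "Premium". All names are placeholders.
resource "azurerm_firewall" "example" {
  name                = "example-firewall"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku_name            = "AZFW_VNet"
  sku_tier            = "Premium" # previously "Standard"
  firewall_policy_id  = azurerm_firewall_policy.example.id
}
```

This is a sketch under the stated assumptions; confirm that your provider version supports in-place `sku_tier` changes before relying on it.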
++
+> [!NOTE]
+> This upgrade/downgrade capability will also support the Basic SKU at general availability (GA).
++ ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 10/13/2022 Last updated : 11/07/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Untrusted customer signed certificates|Customer signed certificates are not trus
|Wrong source IP address in Alerts with IDPS for HTTP (without TLS inspection).|When plain text HTTP traffic is in use, and IDPS issues a new alert, and the destination is a public IP address, the displayed source IP address is wrong (the internal IP address is displayed instead of the original IP address).|A fix is being investigated.| |Certificate Propagation|After a CA certificate is applied on the firewall, it may take 5 to 10 minutes for the certificate to take effect.|A fix is being investigated.| |TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.|
-|KeyVault Private Endpoint|KeyVault supports Private Endpoint access to limit its network exposure. Trusted Azure Services can bypass this limitation if an exception is configured as described in the [KeyVault documentation](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Firewall is not currently listed as a trusted service and can't access the Key Vault.|A fix is being investigated.|
|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Previously updated : 10/12/2022 Last updated : 11/07/2022
To learn more about Azure Firewall Premium Intermediate CA certificate requireme
A network intrusion detection and prevention system (IDPS) allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 3-7), they're fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [Azure Firewall preview features](firewall-preview.md#idps-private-ip-ranges-preview).
+Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 3-7); they're fully managed and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [IDPS Private IP ranges](#idps-private-ip-ranges).
The Azure Firewall signatures/rulesets include: - An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.
IDPS allows you to detect attacks in all ports and protocols for non-encrypted t
The IDPS Bypass List allows you to exclude traffic to any of the IP addresses, ranges, and subnets specified in the bypass list from filtering.
+### IDPS Private IP ranges
+
+In Azure Firewall Premium IDPS, private IP address ranges are used to identify if traffic is inbound, outbound, or internal (East-West). Each signature is applied on specific traffic direction, as indicated in the signature rules table. By default, only ranges defined by IANA RFC 1918 are considered private IP addresses. So traffic sent from a private IP address range to a private IP address range is considered internal. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.
++ ### IDPS signature rules IDPS signature rules allow you to:
IDPS signature rules have the following properties:
|Signature ID |Internal ID for each signature. This ID is also presented in Azure Firewall Network Rules logs.| |Mode |Indicates if the signature is active or not, and whether firewall will drop or alert upon matched traffic. The below signature mode can override IDPS mode<br>- **Disabled**: The signature isn't enabled on your firewall.<br>- **Alert**: You'll receive alerts when suspicious traffic is detected.<br>- **Alert and Deny**: You'll receive alerts and suspicious traffic will be blocked. Few signature categories are defined as "Alert Only", therefore by default, traffic matching their signatures won't be blocked even though IDPS mode is set to "Alert and Deny". Customers may override this by customizing these specific signatures to "Alert and Deny" mode. <br><br> Note: IDPS alerts are available in the portal via network rule log query.| |Severity |Each signature has an associated severity level that indicates the probability that the signature is an actual attack.<br>- **Low**: An abnormal event is one that doesn't normally occur on a network or Informational events are logged. Probability of attack is low.<br>- **Medium**: The signature indicates an attack of a suspicious nature. The administrator should investigate further.<br>- **High**: The attack signatures indicate that an attack of a severe nature is being launched. There's little probability that the packets have a legitimate purpose.|
-|Direction |The traffic direction for which the signature is applied.<br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](firewall-preview.md#idps-private-ip-ranges-preview).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](firewall-preview.md#idps-private-ip-ranges-preview) to the Internet.<br>- **Bidirectional**: Signature is always applied on any traffic direction.|
+|Direction |The traffic direction for which the signature is applied.<br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) to the Internet.<br>- **Bidirectional**: Signature is always applied on any traffic direction.|
|Group |The group name that the signature belongs to.| |Description |Structured from the following three parts:<br>- **Category name**: The category name that the signature belongs to as described in [Azure Firewall IDPS signature rule categories](idps-signature-categories.md).<br>- High level description of the signature<br>- **CVE-ID** (optional) in the case where the signature is associated with a specific CVE. The ID is listed here.| |Protocol |The protocol associated with this signature.|
frontdoor Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier.md
+
+ Title: Migrate Azure Front Door (classic) to Standard/Premium tier using the Azure portal (Preview)
+description: This article provides step-by-step instructions on how to migrate from an Azure Front Door (classic) profile to an Azure Front Door Standard or Premium tier profile.
++++ Last updated : 11/04/2022+++
+# Migrate Azure Front Door (classic) to Standard/Premium tier using the Azure portal (Preview)
+
+Azure Front Door Standard and Premium tier bring the latest cloud delivery network features to Azure. With enhanced security features and an all-in-one service, your application content is secured and closer to your end users with the Microsoft global network. This article guides you through migrating your Front Door (classic) profile to either a Standard or Premium tier profile so that you can begin using these latest features.
+
+## Prerequisites
+
+* Review the [About Front Door tier migration](tier-migration.md) article.
+* Ensure your Front Door (classic) profile can be migrated:
+ * HTTPS is required for all custom domains. Azure Front Door Standard and Premium enforce HTTPS on all domains. If you don't have your own certificate, you can use an Azure Front Door managed certificate. The certificate is free and managed for you.
+ * If you use BYOC (Bring your own certificate) for Azure Front Door (classic), you'll need to grant Key Vault access to your Azure Front Door Standard or Premium profile by completing the following steps:
+ * Register the service principal for **Microsoft.AzureFrontDoor-Cdn** as an app in your Azure Active Directory using Azure PowerShell.
+ * Grant **Microsoft.AzureFrontDoor-Cdn** access to your Key Vault.
+ * Session affinity gets enabled from the origin group settings in the Azure Front Door Standard or Premium profile. In Azure Front Door (classic), session affinity is managed at the domain level. As part of the migration, session affinity is based on the Classic profile's configuration. If you have two domains in the Classic profile that share the same backend pool (origin group), session affinity must be consistent across both domains in order for the migration to be compatible.
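The BYOC service-principal step above can be sketched in Azure PowerShell. This is a hedged sketch; `<app-id>` is a placeholder for the application ID of **Microsoft.AzureFrontDoor-Cdn** in your tenant:

```azurepowershell
# Sketch: register the Microsoft.AzureFrontDoor-Cdn service principal as an app
# in Azure Active Directory. Replace <app-id> with the actual application ID.
New-AzADServicePrincipal -ApplicationId "<app-id>"
```

After registering the service principal, grant it **Get** access to secrets and certificates on the Key Vault that holds your certificate.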
+
+## Validate compatibility
+
+1. Go to the Azure Front Door (classic) resource and select **Migration** from under *Settings*.
+
+ :::image type="content" source="./media/migrate-tier/overview.png" alt-text="Screenshot of the migration button for a Front Door (classic) profile.":::
+
+1. Select **Validate** to see if your Front Door (classic) profile is compatible for migration. This check can take up to two minutes depending on the complexity of your Front Door profile.
+
+ :::image type="content" source="./media/migrate-tier/validate.png" alt-text="Screenshot of the validate compatibility button from the migration page.":::
+
+1. If the migration isn't compatible, you can select **View errors** to see a list of errors and recommendations to resolve them.
+
+ :::image type="content" source="./media/migrate-tier/validation-failed.png" alt-text="Screenshot of the Front Door validate migration with errors.":::
+
+1. Once the migration tool has validated that your Front Door profile is compatible to migrate, you can move on to preparing for migration.
+
+ :::image type="content" source="./media/migrate-tier/validation-passed.png" alt-text="Screenshot of the Front Door migration passing validation.":::
+
+## Prepare for migration
+
+1. A default name for the new Front Door profile has been provided for you. You can change this name before proceeding to the next step.
+
+ :::image type="content" source="./media/migrate-tier/prepare-name.png" alt-text="Screenshot of the prepared name for Front Door migration.":::
+
+1. A Front Door tier is automatically selected for you based on the Front Door (classic) WAF policy settings.
+
+ :::image type="content" source="./media/migrate-tier/prepare-tier.png" alt-text="Screenshot of the selected tier for the new Front Door profile.":::
+
+ * A Standard tier gets selected if you *only have custom WAF rules* associated to the Front Door (classic) profile. You may choose to upgrade to a Premium tier.
+ * A Premium tier gets selected if you *have managed WAF rules* associated to the Classic profile. To use Standard tier, the managed WAF rules must first be removed from the Classic profile.
+
+1. Select **Configure WAF policy upgrades** to configure the WAF policies to be upgraded. Select the action you would like to happen for each WAF policy. You can either copy the old WAF policy to the new WAF policy or select an existing WAF policy that matches the Front Door tier. If you choose to copy the WAF policy, each copy is given a default WAF policy name that you can change. Select **Apply** once you finish making changes to the WAF policy configuration.
+
+ :::image type="content" source="./media/migrate-tier/prepare-waf.png" alt-text="Screenshot of the configure WAF policy link during Front Door migration preparation.":::
+
+ > [!NOTE]
+ > The **Configure WAF policy upgrades** link only appears if you have WAF policies associated to the Front Door (classic) profile.
+
+    For each WAF policy associated to the Front Door (classic) profile, select an action. You can make a copy of the WAF policy that matches the tier you're migrating the Front Door profile to, or you can use an existing WAF policy that matches the tier. You can also update the WAF policy names from the default names assigned. Select **Apply** to save the WAF settings.
+
+    :::image type="content" source="./media/migrate-tier/waf-policy.png" alt-text="Screenshot of the upgrade WAF policy screen.":::
+
+1. Select **Prepare**, and then select **Yes** to confirm you would like to proceed with the migration process. Once confirmed, you won't be able to make any further changes to the Front Door (classic) settings.
+
+ :::image type="content" source="./media/migrate-tier/prepare-confirmation.png" alt-text="Screenshot the prepare button and confirmation to proceed with Front Door migration.":::
+
+1. Select the link that appears to view the configuration of the new Front Door profile. At this time, review each of the settings for the new profile to ensure all settings are correct. Once you're done reviewing the read-only profile, select the **X** in the top right corner of the page to go back to the migration screen.
+
+ :::image type="content" source="./media/migrate-tier/verify-new-profile.png" alt-text="Screenshot of the link to view the new read-only Front Door profile.":::
+
+> [!NOTE]
+> If you're not using your own certificate, enabling managed identities and granting access to the Key Vault is not required. You can skip to the [**Migrate**](migrate-tier.md#migrate) step.
+
+## Enable managed identities
+
+If you're using your own certificate, you'll need to enable managed identity so that Azure Front Door can access the certificate in your Key Vault.
+
+1. Select **Enable** and then select either **System assigned** or **User assigned** depending on the type of managed identities you want to use. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
+
+ :::image type="content" source="./media/migrate-tier/enable-managed-identity.png" alt-text="Screenshot of the enable manage identity button for Front Door migration.":::
+
+ * *System assigned* - Toggle the status to **On** and then select **Save**.
+
+    * *User assigned* - To create a user-assigned managed identity, see [Create a user-assigned identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). If you already have a user-assigned managed identity, select the identity, and then select **Add**.
+
+1. Select the **X** to return to the migration page. You'll then see that you've successfully enabled managed identities.
+
+ :::image type="content" source="./media/migrate-tier/enable-managed-identity-successful.png" alt-text="Screenshot of managed identity getting enabled.":::
+
+## Grant managed identity access to Key Vault
+
+Select **Grant** to give the managed identity from the previous section access to all the Key Vaults used in the Front Door (classic) profile.
+
+## Migrate
+
+1. Select **Migrate** to initiate the migration process. When prompted, select **Yes** to confirm you want to move forward with the migration. Once the migration is completed, you can select the banner at the top to go to the new Front Door profile.
+
+ :::image type="content" source="./media/migrate-tier/migrate.png" alt-text="Screenshot of migrate and confirmation button for Front Door migration.":::
+
+ > [!NOTE]
+ > If you cancel the migration, only the new Front Door profile will get deleted. Any new WAF policy copies will need to be manually deleted.
+
+1. Once the migration completes, you can select the banner at the top of the page or the link from the success message to go to the new Front Door profile.
+
+ :::image type="content" source="./media/migrate-tier/successful-migration.png" alt-text="Screenshot of a successful Front Door migration.":::
+
+1. The Front Door (classic) profile is now in a **Disabled** state and can be deleted from your subscription.
+
+ :::image type="content" source="./media/migrate-tier/classic-profile.png" alt-text="Screenshot of the overview page of a Front Door (classic) in a disabled state.":::
+
+## Next steps
+
+* Understand the [mapping between Front Door tiers](tier-mapping.md) settings.
+* Learn more about the [Azure Front Door tier migration process](tier-migration.md).
frontdoor Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/routing-methods.md
All Front Door configurations have backend health monitoring and automated insta
## <a name = "latency"></a>Lowest latencies based traffic-routing
-Deploying origins in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that is 'closest' to your end users. Latency is the default traffic-routing method for your Front Door configuration. This routing method forwards requests from your end users to the closest origin behind Azure Front Door. This routing mechanism combined with the anycast architecture of Azure Front Door ensures that each of your end users get the best performance based on their location.
+Deploying origins in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that is 'closest' to your end users. Latency is the default traffic-routing method for your Front Door configuration. This routing method forwards requests from your end users to the closest origin behind Azure Front Door. This routing mechanism combined with the anycast architecture of Azure Front Door ensures that each of your end users gets the best performance based on their location.
The 'closest' origin isn't necessarily closest as measured by geographic distance. Instead, Azure Front Door determines the closest origin by measuring network latency. Read more about [Azure Front Door routing architecture](front-door-routing-architecture.md). The following table shows the overall decision flow:
-| Available origins | Priority | Latency signal (based on health probe) | Weights |
-|-| -- | -- | -- |
-| First, select all origins that are enabled and returned healthy (200 OK) for the health probe. If there are six origins A, B, C, D, E, and F, and among them C is unhealthy and E is disabled. The list of available origins is A, B, D, and F. | Next, the top priority origins among the available ones are selected. If origin A, B, and D have priority 1 and origin F has a priority of 2. Then, the selected origins will be A, B, and D.| Select the origins with latency range (least latency & latency sensitivity in ms specified). If origin A is 15 ms, B is 30 ms and D is 60 ms away from the Azure Front Door environment where the request landed, and latency sensitivity is 30 ms, then the lowest latency pool consist of origin A and B, because D is beyond 30 ms away from the closest origin that is A. | Lastly, Azure Front Door will round robin the traffic among the final selected group of origins in the ratio of weights specified. For example, if origin A has a weight of 5 and origin B has a weight of 8, then the traffic will be distributed in the ratio of 5:8 among origins A and B. |
>[!NOTE]
> By default, the latency sensitivity property is set to 0 ms. With this setting, the request is always forwarded to the fastest available origins, and weights on the origins don't take effect unless two origins have the same network latency.
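The decision flow above can be sketched in code. The following is an illustrative Python sketch of the documented selection order (available, top priority, latency window), not the service's actual implementation; the origin field names are assumptions for the example.

```python
def select_pool(origins, latency_sensitivity_ms=0):
    """Return the origins eligible for weighted traffic distribution.

    Each origin is a dict with: enabled, healthy, priority, latency_ms, weight.
    Illustrative only: mirrors the documented decision flow for
    lowest-latency routing in Azure Front Door.
    """
    # 1. Keep only origins that are enabled and reported healthy (200 OK).
    available = [o for o in origins if o["enabled"] and o["healthy"]]
    # 2. Keep only origins sharing the best (lowest) priority value.
    best_priority = min(o["priority"] for o in available)
    candidates = [o for o in available if o["priority"] == best_priority]
    # 3. Keep origins within the latency sensitivity window of the fastest.
    fastest = min(o["latency_ms"] for o in candidates)
    return [o for o in candidates
            if o["latency_ms"] <= fastest + latency_sensitivity_ms]
```

Running this with the six-origin example from the table (C unhealthy, E disabled, F at priority 2, sensitivity 30 ms) leaves A and B in the pool; traffic is then distributed between them in the ratio of their weights.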
frontdoor Tier Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-mapping.md
+
+ Title: Azure Front Door profile mapping between Classic and Standard/Premium tier
+description: This article explains the differences and settings mapping between an Azure Front Door (classic) and Standard/Premium profile.
+ Last updated : 11/03/2022
+# Mapping between Azure Front Door (classic) and Standard/Premium tier
+
+As you migrate from Azure Front Door (classic) to Front Door Standard or Premium, you'll notice some configurations have been changed or moved to a new location to provide a better experience when managing the Front Door profile. In this article, you'll learn how routing rules, cache duration, rules engine configurations, WAF policies, and custom domains get mapped to the new Front Door tiers.
+
+## Routing rules
+
+| Front Door (classic) settings | Mapping in Standard and Premium |
+|--|--|
+| Route status - Enable/disable | Same as Front Door (classic) profile. |
+| Accepted protocol | Copied from Front Door (classic) profile. |
+| Frontend/domains | Copied from Front Door (classic) profile. |
+| Patterns to match | Copied from Front Door (classic) profile. |
+| Rules engine configuration | Rules engine changes to Rule Set and will retain route association from Front Door (classic) profile. |
+| Route type: Forwarding | Backend pool changes to Origin group. Forwarding protocol is copied from Front Door (classic) profile. </br> - If URL rewrite is set to `disabled`, the origin path in Standard and Premium profile is set to empty. </br> - If URL rewrite is set to `enabled`, the origin path is copied from *Custom forwarding path* of the Front Door (classic) profile. |
+| Route type: Redirect | URL redirect rule gets created in Rule set. The Rule set name is called *URLRedirectMigratedRuleSet2*. |
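The origin path derivation for a Classic *Forward* route can be sketched as a small function. This is an illustration of the mapping table above, with hypothetical parameter names, not an official API:

```python
def migrated_origin_path(url_rewrite_enabled: bool,
                         custom_forwarding_path: str) -> str:
    """Sketch of how a Classic 'Forward' route's origin path maps to
    Standard/Premium, per the mapping table (illustrative only)."""
    if not url_rewrite_enabled:
        return ""  # URL rewrite disabled: origin path is set to empty
    # URL rewrite enabled: origin path is copied from 'Custom forwarding path'
    return custom_forwarding_path
```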
+
+## Cache duration
+
+In Azure Front Door (classic), the *Minimum cache duration* is located in the routing settings and the *Use default cache duration* is located in the Rules engine. Azure Front Door Standard and Premium tiers only support caching in a Rule set.
+
+| Front Door (classic) | Front Door Standard and Premium |
+|--|--|
+| When caching is *disabled* and the default caching is used. | Caching is *disabled*. |
+| When caching is *enabled* and the default caching duration is used. | Caching is *enabled*, the origin caching behavior is honored. |
+| Caching is *enabled*. | Caching is *enabled*. |
+| When use default cache duration is set to *No*, the input cache duration is used. | Cache behavior is set to override always and the input cache duration is used. |
+| N/A | Caching is *enabled*, the caching behavior is set to override if origin is missing, and the input cache duration is used. |
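The first four rows of the mapping above can be expressed as a small decision function. This is an illustrative sketch with made-up return values, not actual Front Door API settings:

```python
def migrated_cache_behavior(caching_enabled: bool, use_default_duration: bool):
    """Sketch of the Classic -> Standard/Premium cache duration mapping
    from the table above (illustrative names, not API values)."""
    if not caching_enabled:
        return {"caching": "disabled"}
    if use_default_duration:
        # Default duration: the origin's caching behavior is honored.
        return {"caching": "enabled", "behavior": "honor-origin"}
    # Explicit input duration: cache behavior is set to override always.
    return {"caching": "enabled", "behavior": "override-always"}
```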
+
+## Route configuration override in Rule engine actions
+
+The route configuration override in Front Door (classic) is split into three different actions in rules engine for Standard and Premium profile. Those three actions are URL Redirect, URL Rewrite and Route Configuration Override.
+
+| Actions | Mapping in Standard and Premium |
+|--|--|
+| Route type set to forward | 1. Forward with URL rewrites disabled. All configurations are copied to the Standard or Premium profile.</br>2. Forward with URL rewrites enabled. There will be two rule actions, one for URL rewrite and one for the route configuration override in the Standard or Premium profile.</br> For URL rewrites - </br>- Custom forwarding path in Classic profile is the same as source pattern in Standard or Premium profile.</br>- Destination from Classic profile is copied over to Standard or Premium profile. |
+| Route type set to redirect | Mapping is 1:1 in the Standard or Premium profile. |
+| Route configuration override | 1. Backend pool is 1:1 mapping for origin group in Standard or Premium profile.</br>2. Caching</br>- Enabling and disabling caching is 1:1 mapping in the Standard or Premium profile.</br>- Query string is 1:1 mapping in Standard or Premium profile.</br>3. Dynamic compression is 1:1 mapping in the Standard or Premium profile. |
+| Use default cache duration | Same as mentioned in the [Cache duration](#cache-duration) section. |
+
+## Other configurations
+
+| Front Door (classic) configuration | Mapping in Standard and Premium |
+|--|--|
+| Request and response header | Request and response header in Rules engine actions is copied over to Rule set in Standard/Premium profile. |
+| Enforce certificate name check | Enforce certificate name check is supported at the profile level of Azure Front Door (classic). In a Front Door Standard or Premium profile this setting can be found in the origin settings. This configuration will apply to all origins in the migrated Standard or Premium profile. |
+| Origin response time | Origin response time is copied over to the migrated Standard or Premium profile. |
+| Web Application Firewall (WAF) | If the Azure Front Door (classic) profile has WAF policies associated, the migration will create a copy of WAF policies with a default name for the Standard or Premium profile. The names for each WAF policy can be changed during setup from the default names. You can also select an existing Standard or Premium WAF policy that matches the migrated Front Door profile. |
+| Custom domain | This section will use `www.contoso.com` as an example to show a domain going through the migration. The custom domain `www.contoso.com` points to `contoso.azurefd.net` in Front Door (classic) for the CNAME record. </br></br>When the custom domain `www.contoso.com` gets moved to the new Front Door profile:</br>- The association for the custom domain shows the new Front Door endpoint as `contoso-hashvalue.z01.azurefd.net`. The CNAME of the custom domain will automatically point to the new endpoint name with the hash value in the backend. At this point, you can change the CNAME record with your DNS provider to point to the new endpoint name with the hash value.</br>- The classic endpoint `contoso.azurefd.net` will show as a custom domain in the migrated Front Door profile under the *Migrated domain* tab of the **Domains** page. This domain will be associated to the default migrated route. This default route can only be removed once the domain is disassociated from it. The domain properties can't be updated, with the exception of associating the domain with a route and removing that association. The domain can only be deleted after you've changed the CNAME to the new endpoint name.</br>- The certificate state and DNS state for `www.contoso.com` are the same as in the Front Door (classic) profile.</br></br> There are no changes to the managed certificate auto rotation settings. |
+
+## Next steps
+
+* Learn more about the [Azure Front Door tier migration process](tier-migration.md).
+* Learn how to [migrate from Classic to Standard/Premium tier](migrate-tier.md) using the Azure portal.
frontdoor Tier Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-migration.md
+
+ Title: About Azure Front Door (classic) to Standard/Premium tier migration (Preview)
+description: This article explains the migration process and changes expected when using the migration tool to Azure Front Door Standard/Premium tier.
+ Last updated : 11/3/2022
+# About Azure Front Door (classic) to Standard/Premium tier migration (Preview)
+
+Azure Front Door Standard and Premium tiers were released in March 2022 as the next-generation content delivery network service. The newer tiers combine the capabilities of Azure Front Door (classic), Microsoft CDN (classic), and Web Application Firewall (WAF). With features such as Private Link integration, an enhanced rules engine, and advanced diagnostics, you have the ability to secure and accelerate your web applications to bring a better experience to your customers.
+
+Azure recommends migrating to the newer tiers to benefit from the new features and improvements over the Classic tier. To help with the migration process, Azure Front Door provides a zero-downtime migration to move your workload from Azure Front Door (classic) to either Standard or Premium tier.
+
+In this article, you'll learn about the migration process, the breaking changes involved, and what to do before, during, and after the migration.
+
+## Migration process overview
+
+Azure Front Door zero-downtime migration happens in three stages. The first stage is validation, followed by preparing for migration, and then the migration itself. The time it takes for a migration to complete depends on the complexity of the Azure Front Door profile. You can expect the migration to take a few minutes for a simple Azure Front Door profile and longer for a profile that has many frontend domains, backend pools, routing rules, and rules engine rules.
+
+### Five steps of migration
+
+**Validate compatibility** - The migration tool validates whether the Azure Front Door (classic) profile is eligible for migration. You'll be prompted with messages about what needs to be fixed before you can move on to the preparation phase. For more information, see [prerequisites](#prerequisites).
+
+**Prepare for migration** - Azure Front Door will create a new Standard or Premium profile based on your Classic profile configuration in a disabled state. The new Front Door profile created will depend on the Web Application Firewall (WAF) policy you've associated to the profile.
+
+* **Premium tier** - Selected if you have *managed WAF* policies associated to the Azure Front Door (classic) profile. A Premium tier profile **can't** be downgraded to a Standard tier after migration.
+* **Standard tier** - Selected if you only have *custom WAF* policies associated to the Azure Front Door (classic) profile. A Standard tier profile **can** be upgraded to Premium tier after migration.
+
+ During the preparation stage, Azure Front Door will create copies of WAF policies specific to the Front Door tier with default names. You can change the name for the WAF policies at this time. You can also select an existing WAF policy that matches the tier you're migrating to. At this time, a read-only view of the newly created profile is provided for you to verify configurations.
+
+ > [!NOTE]
+    > No changes can be made to the Front Door (classic) configuration once this step has been initiated.
+
+**Enable managed identity** - During this step you can configure managed identities for Azure Front Door to access your certificate in a Key Vault.
+
+**Grant managed identity to Key Vault** - This step adds managed identity access to all the Key Vaults used in the Front Door (classic) profile.
+
+**Migrate/Abort migration**
+
+* **Migrate** - Once you select this option, the Azure Front Door (classic) profile gets disabled and the Azure Front Door Standard or Premium profile will be activated. Traffic will start going through the new profile once the migration completes.
+* **Abort migration** - If you decided you no longer want to move forward with the migration process, selecting this option will delete the new Front Door profile that was created.
+
+> [!NOTE]
+> * If you cancel the migration, only the new Front Door profile gets deleted; any WAF policy copies will need to be manually deleted.
+> * Traffic to your Azure Front Door (classic) will continue to be served until the migration has been completed.
+> * Each Azure Front Door (classic) profile can create one Azure Front Door Standard or Premium profile.
+
+Migration can only be completed using the Azure portal. Service charges for Azure Front Door Standard or Premium tier will start once migration is completed.
+
+## Breaking changes between tiers
+
+### Dev-ops
+
+Azure Front Door Standard/Premium uses a different resource provider namespace of *Microsoft.Cdn*, while Azure Front Door (classic) uses *Microsoft.Network*. After you've migrated your Azure Front Door profile, you need to change your DevOps scripts to use the new namespace, along with the corresponding Azure PowerShell module, CLI commands, and API.
+
+### Endpoint with hash value
+
+Azure Front Door Standard and Premium endpoints are generated to include a hash value to prevent your domain from being taken over. The format of the endpoint name is `<endpointname>-<hashvalue>.z01.azurefd.net`. The Classic Front Door endpoint name will continue to work after migration but we recommend replacing it with the newly created endpoint name from the Standard or Premium profile. For more information, see [Endpoint domain names](endpoint.md#endpoint-domain-names).
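A quick way to tell the two endpoint styles apart is a pattern check. This is a hedged sketch based only on the `<endpointname>-<hashvalue>.z01.azurefd.net` format shown above; the `z01` partition label is taken from that example and may differ in practice:

```python
import re

# Illustrative pattern for the Standard/Premium hashed endpoint format
# `<endpointname>-<hashvalue>.z01.azurefd.net` described above.
HASHED_ENDPOINT = re.compile(
    r"^(?P<name>[a-z0-9-]+)-(?P<hash>[a-z0-9]+)\.z01\.azurefd\.net$"
)

def is_hashed_endpoint(hostname: str) -> bool:
    """True if the hostname looks like a Standard/Premium hashed endpoint."""
    return HASHED_ENDPOINT.match(hostname) is not None
```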
+
+### Logs and metrics
+
+Diagnostic logs and metrics won't be migrated. Azure Front Door Standard/Premium log fields are different from Front Door (classic). The newer tiers also have health probe logs, and it's recommended that you enable diagnostic logging after the migration completes. Standard and Premium tiers also support built-in reports that start displaying data once the migration is done.
+
+## Prerequisites
+
+* HTTPS is required for all custom domains. Azure Front Door Standard and Premium tiers enforce HTTPS on every domain. If you don't have your own certificate, you can use an Azure Front Door managed certificate, which is free and managed for you.
+* If you use BYOC for Azure Front Door (classic), you need to grant Key Vault access to your Azure Front Door Standard or Premium profile by completing the following steps:
+ * Register the service principal for **Microsoft.AzureFrontDoor-Cdn** as an app in your Azure Active Directory using Azure PowerShell.
+ * Grant **Microsoft.AzureFrontDoor-Cdn** access to your Key Vault.
+* Session affinity is enabled from within the origin group in an Azure Front Door Standard and Premium profile. In Azure Front Door (classic), session affinity is controlled at the domain level. As part of the migration, session affinity gets enabled or disabled based on the Classic profile's configuration. If you have two domains in a Classic profile that share the same origin group, session affinity must be consistent across both domains in order for migration to pass validation.
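The session affinity prerequisite can be pre-checked before attempting validation. The following is an illustrative Python sketch under assumed data shapes (a mapping of domain name to its origin group and affinity setting), not part of any Azure tooling:

```python
def affinity_conflicts(domains):
    """Return origin groups whose domains disagree on session affinity.

    `domains` maps domain name -> (origin_group, session_affinity_enabled).
    Any group returned here would block Classic -> Standard/Premium
    migration validation, per the prerequisite above. Illustrative only.
    """
    settings_by_group = {}
    for _name, (group, affinity) in domains.items():
        settings_by_group.setdefault(group, set()).add(affinity)
    # A group is consistent only if all of its domains share one setting.
    return sorted(g for g, s in settings_by_group.items() if len(s) > 1)
```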
+
+> [!IMPORTANT]
+> * If your Azure Front Door (classic) profile can qualify to migrate to Standard tier but the number of resources exceeds the Standard tier quota limit, it will be migrated to Premium tier instead.
+> * If you use Azure PowerShell, Azure CLI, API, or Terraform to do the migration, then you need to create WAF policies separately.
+
+### Web Application Firewall (WAF)
+
+The default Azure Front Door tier created during migration is determined by the type of rules contained in the WAF policy. In this section, we'll cover the scenarios for the different rule types in a WAF policy.
+
+* Classic WAF policy contains only custom rules.
+ * The new Azure Front Door profile defaults to Standard tier and can be upgraded to Premium during migration. If you use the portal for migration, Azure will create custom WAF rules for Standard. If you upgrade to Premium during migration, custom WAF rules will be created by the migration capability, but managed WAF rules will need to be created manually after migration.
+* Classic WAF policy has only managed WAF rules, or both managed and custom WAF rules.
+  * The new Azure Front Door profile defaults to Premium tier and isn't eligible for downgrade during migration. To migrate to Standard tier instead, remove the WAF policy association or delete the managed WAF rules from the Classic WAF policy.
+
+ > [!NOTE]
+ > To avoid creating duplicate WAF policies during migration, the Azure portal provides the option to either create copies or reuse an existing Azure Front Door Standard or Premium WAF policy.
+
+* If you migrate your Azure Front Door profile using Azure PowerShell or Azure CLI, you need to create the WAF policies separately before migration.
+
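The tier defaulting described above can be summarized in a one-line rule. This is an illustrative sketch of the documented behavior, with assumed input values, not an Azure API:

```python
def default_migration_tier(waf_rule_types):
    """Sketch of default tier selection during migration: any managed WAF
    rule forces Premium; custom-only rules default to Standard (upgradeable).
    `waf_rule_types` is a set such as {"custom"} or {"managed", "custom"}.
    """
    return "Premium" if "managed" in waf_rule_types else "Standard"
```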
+## Naming convention for migration
+
+During the migration, a default profile name is used in the format `<endpointprefix>-migrated`. For example, a Classic endpoint named `myEndpoint.azurefd.net` will have the default profile name `myEndpoint-migrated`.
+The WAF policy name uses the format `<classicWAFpolicyname>-<standard or premium>`. For example, a Classic WAF policy named `contosoWAF1` will have the default name `contosoWAF1-premium`. You can rename the Front Door profile and the WAF policy during migration. Renaming the rules engine configurations and routes isn't supported; instead, default names are assigned.
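The naming convention can be sketched as two small helpers. These are illustrations of the formats stated above, not utilities shipped with any Azure tooling:

```python
def default_profile_name(classic_endpoint: str) -> str:
    """Default migrated profile name: `<endpointprefix>-migrated`."""
    return classic_endpoint.split(".")[0] + "-migrated"

def default_waf_name(classic_waf_name: str, tier: str) -> str:
    """Default WAF copy name: `<classicWAFpolicyname>-<standard or premium>`."""
    return f"{classic_waf_name}-{tier.lower()}"
```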
+
+URL redirect and URL rewrite are supported through rules engine in Azure Front Door Standard and Premium, while Azure Front Door (classic) supports them through routing rules. During migration, these two rules get created as rules engine rules in a Standard and Premium profile. The names of these rules are `urlRewriteMigrated` and `urlRedirectMigrated`.
+
+## Resource states
+
+The following table explains the various stages of the migration process and if changes can be made to the profile.
+
+| Migration state | Front Door (classic) resource state | Can make changes? | Front Door Standard/Premium | Can make changes? |
+|--|--|--|--|--|
+|Before migration| Active | Yes | N/A | N/A |
+| Step 1: Validating compatibility | Active | Yes | N/A | N/A |
+| Step 2: Preparing for migration | Migrating | No | Creating | No |
+| Step 5: Committing migration | Migrating | No | CommittingMigration | No |
+| Step 5: Committed migration | Migrated | No | Active | Yes |
+| Step 5: Aborting migration | AbortingMigration | No | Deleting | No |
+| Step 5: Aborted migration | Active | Yes | Deleted | N/A |
+
+## Next steps
+
+* Understand the [mapping between Front Door tiers](tier-mapping.md) settings.
+* Learn how to [migrate from Classic to Standard/Premium tier](migrate-tier.md) using the Azure portal.
frontdoor Tier Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-upgrade.md
+
+ Title: Upgrade from Azure Front Door Standard to Premium tier (Preview)
+description: This article provides step-by-step instructions on how to upgrade from an Azure Front Door Standard to an Azure Front Door Premium tier profile.
+ Last updated : 11/2/2022
+# Upgrade from Azure Front Door Standard to Premium tier (Preview)
+
+Azure Front Door supports upgrading from Standard to Premium tier for more advanced capabilities and an increase in quota limits. The upgrade won't cause any downtime to your services or applications. For more information about the differences between Standard and Premium tier, see [Tier comparison](standard-premium/tier-comparison.md).
+
+This article will walk you through how to perform the tier upgrade on the configuration page of a Front Door Standard profile. Once upgraded, you'll be charged for the Azure Front Door Premium monthly base fee at an hourly rate.
+
+> [!IMPORTANT]
+> Downgrading from Premium to Standard tier is not supported.
+
+## Prerequisite
+
+Confirm you have an Azure Front Door Standard profile available in your subscription to upgrade.
+
+## Upgrade tier
+
+1. Go to the Azure Front Door Standard profile you want to upgrade and select **Configuration (preview)** from under *Settings*.
+
+ :::image type="content" source="./media/tier-upgrade/overview.png" alt-text="Screenshot of the configuration button under settings for a Front Door standard profile.":::
+
+1. Select **Upgrade** to begin the upgrade process. If you don't have any WAF policies associated to your Front Door Standard profile, then you'll be prompted with a confirmation to proceed with the upgrade.
+
+    :::image type="content" source="./media/tier-upgrade/upgrade-button.png" alt-text="Screenshot of the upgrade button on the configuration page of a Front Door Standard profile.":::
+
+1. If you have WAF policies associated to the Front Door Standard profile, then you'll be taken to the *Upgrade WAF policies* page. On this page, you'll decide whether you want to make copies of the WAF policies or use an existing premium WAF policy. You can also change the name of the new WAF policy copy during this step.
+
+ :::image type="content" source="./media/tier-upgrade/upgrade-waf.png" alt-text="Screenshot of the upgrade WAF policies page.":::
+
+ > [!NOTE]
+ > To use managed WAF rules for the new premium WAF policy copies, you'll need to manually enable them after the upgrade.
+
+1. Select **Upgrade** once you're done setting up the WAF policies. Select **Yes** to confirm you would like to proceed with the upgrade.
+
+ :::image type="content" source="./media/tier-upgrade/confirm-upgrade.png" alt-text="Screenshot of the confirmation message from upgrade WAF policies page.":::
+
+1. The upgrade process will create new premium WAF policy copies and associate them to the upgraded Front Door profile. The upgrade can take a few minutes to complete depending on the complexity of your Front Door profile.
+
+ :::image type="content" source="./media/tier-upgrade/upgrade-in-progress.png" alt-text="Screenshot of the configuration page with upgrade in progress status.":::
+
+1. Once the upgrade completes, you'll see **Tier: Premium** displayed on the *Configuration* page.
+
+ :::image type="content" source="./media/tier-upgrade/upgrade-complete.png" alt-text="Screenshot of the Front Door tier upgraded to premium on the configuration page.":::
+
+ > [!NOTE]
+ > You're now being billed for the Azure Front Door Premium base fee at an hourly rate.
+
+## Next steps
+
+* Learn more about [Managed rule for WAF policy](../web-application-firewall/afds/waf-front-door-drs.md).
+* Learn how to enable [Private Link to origin resources](private-link.md).
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
Title: Azure IoT Edge and Azure IoT Central | Microsoft Docs
description: Understand how to use Azure IoT Edge with an IoT Central application. Previously updated : 06/08/2022 Last updated : 10/11/2022
To learn more about IoT Edge, see [What is Azure IoT Edge?](../../iot-edge/about
IoT Edge is made up of three components:
-* *IoT Edge modules* are containers that run Azure services, partner services, or your own code. Modules are deployed to IoT Edge devices, and run locally on those devices. To learn more, see [Understand Azure IoT Edge modules](../../iot-edge/iot-edge-modules.md).
-* The *IoT Edge runtime* runs on each IoT Edge device, and manages the modules deployed to each device. The runtime consists of two IoT Edge modules: *IoT Edge agent* and *IoT Edge hub*. To learn more, see [Understand the Azure IoT Edge runtime and its architecture](../../iot-edge/iot-edge-runtime.md).
+* [IoT Edge modules](../../iot-edge/iot-edge-modules.md) are containers that run Azure services, partner services, or your own code. Modules are deployed to IoT Edge devices, and run locally on those devices. A [deployment manifest](../../iot-edge/module-composition.md) specifies the modules to deploy to an IoT Edge device.
+* The [IoT Edge runtime](../../iot-edge/iot-edge-runtime.md) runs on each IoT Edge device, and manages the modules deployed to each device. The runtime consists of two IoT Edge modules: [IoT Edge agent and IoT Edge hub](../../iot-edge/module-edgeagent-edgehub.md).
* A *cloud-based interface* enables you to remotely monitor and manage IoT Edge devices. IoT Central is an example of a cloud interface.

IoT Central enables the following capabilities for IoT Edge devices:
+* Deployment manifest management. An IoT Central application can manage a collection of deployment manifests and assign them to devices.
* Device templates to describe the capabilities of an IoT Edge device, such as:
- * Deployment manifest upload capability, which helps you manage a manifest for a fleet of devices.
- * Modules that run on the IoT Edge device.
- * The telemetry each module sends.
- * The properties each module reports.
- * The commands each module responds to.
+ * The telemetry each IoT Edge module sends.
+ * The properties each IoT Edge module reports.
+ * The commands each IoT Edge module responds to.
  * The relationships between an IoT Edge gateway device and downstream device.
* Cloud properties that aren't stored on the IoT Edge device.
* Device views and forms.
An IoT Edge device can be:
IoT Edge devices can use *shared access signature* tokens or X.509 certificates to authenticate with IoT Central. You can manually register your IoT Edge devices in IoT Central before they connect for the first time, or use the Device Provisioning Service to handle the registration. To learn more, see [How devices connect](overview-iot-central-developer.md#how-devices-connect).
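As a sketch, Device Provisioning Service registration with a symmetric key is configured in the device's `config.toml`; the ID scope, registration ID, and key values below are placeholders, not values from this article:

```text
[provisioning]
source = "dps"
global_endpoint = "https://global.azure-devices-provisioning.net"
id_scope = "<your-id-scope>"

[provisioning.attestation]
method = "symmetric_key"
registration_id = "<device-id>"
symmetric_key = { value = "<base64-device-key>" }
```

For X.509 attestation, the `[provisioning.attestation]` section references the device certificate and key files instead of a symmetric key.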
-IoT Central uses [device templates](concepts-device-templates.md) to define how IoT Central interacts with a device. For example, a device template specifies:
+IoT Central optionally uses [device templates](concepts-device-templates.md) to define how IoT Central interacts with an IoT Edge device. For example, a device template specifies:
-* The types of telemetry and properties a device sends so that IoT Central can interpret them and create visualizations.
-* The commands a device responds to so that IoT Central can display a UI for an operator to use to call the commands.
+* The types of telemetry and properties an IoT Edge device sends so that IoT Central can interpret them and create visualizations.
+* The commands an IoT Edge device responds to so that IoT Central can display a UI for an operator to use to call the commands.
-An IoT Edge device can send telemetry, synchronize property values, and respond to commands in the same way as a standard device. So, an IoT Edge device needs a device template in IoT Central.
+If there's no device template associated with a device, telemetry and property values display as *unmodeled* data. However, you can still use IoT Central data export capabilities to forward telemetry and property values to other backend services.
-### IoT Edge device templates
-
-IoT Central device templates use models to describe the capabilities of devices. The following diagram shows the structure of the model for an IoT Edge device:
--
-IoT Central models an IoT Edge device as follows:
-
-* Every IoT Edge device template has a capability model.
-* For every custom module listed in the deployment manifest, a module capability model is generated.
-* A relationship is established between each module capability model and a device model.
-* A module capability model implements one or more module interfaces.
-* Each module interface contains telemetry, properties, and commands.
-
-### IoT Edge deployment manifests and IoT Central device templates
+## IoT Edge deployment manifests
In IoT Edge, you deploy and manage business logic in the form of modules. IoT Edge modules are the smallest unit of computation managed by IoT Edge, and can contain Azure services such as Azure Stream Analytics, or your own solution-specific code.
-An IoT Edge *deployment manifest* lists the IoT Edge modules to deploy on the device and how to configure them. To learn more, see [Learn how to deploy modules and establish routes in IoT Edge](../../iot-edge/module-composition.md).
+An IoT Edge [deployment manifest](../../iot-edge/module-composition.md) lists the IoT Edge modules to deploy on the device and how to configure them.
-In Azure IoT Central, you import a deployment manifest to create a device template for the IoT Edge device.
+In Azure IoT Central, navigate to **Edge manifests** to import and manage the deployment manifests for the IoT Edge devices in your solution.
The following code snippet shows an example IoT Edge deployment manifest:
In the previous snippet, you can see:
* There are three modules: the *IoT Edge agent* and *IoT Edge hub* system modules that are present in every deployment manifest, and the custom **SimulatedTemperatureSensor** module.
* The public module images are pulled from an Azure Container Registry repository that doesn't require any credentials to connect. For private module images, set the container registry credentials to use in the `registryCredentials` setting for the *IoT Edge agent* module.
-* The custom **SimulatedTemperatureSensor** module has two properties `"SendData": true` and `"SendInterval": 10`.
+* The custom **SimulatedTemperatureSensor** module has two writable properties: `"SendData": true` and `"SendInterval": 10`.
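For private module images, the `registryCredentials` section of the IoT Edge agent's desired properties might look like the following sketch; the registry name, address, and credential values are placeholders:

```json
{
  "runtime": {
    "type": "docker",
    "settings": {
      "minDockerVersion": "v1.25",
      "registryCredentials": {
        "myRegistry": {
          "address": "<myregistry>.azurecr.io",
          "username": "<service-principal-id>",
          "password": "<service-principal-key>"
        }
      }
    }
  }
}
```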
-When you import this deployment manifest into an IoT Central application, it generates the following device template:
+The following screenshot shows this deployment manifest imported into IoT Central:
-In the previous screenshot you can see:
+If your application uses [organizations](howto-create-organizations.md), you can assign your deployment manifests to specific organizations. The previous screenshot shows the deployment manifest assigned to the **Store Manager / Americas** organization.
-* A module called **SimulatedTemperatureSensor**. The *IoT Edge agent* and *IoT Edge hub* system modules don't appear in the template.
-* An interface called **management** that includes two writable properties called **SendData** and **SendInterval**.
+To learn how to use the **Edge manifests** page and assign deployment manifests to IoT Edge devices, see [Manage IoT Edge deployment manifests in your IoT Central application](howto-manage-deployment-manifests.md).
-The deployment manifest doesn't include information about the telemetry the **SimulatedTemperatureSensor** module sends or the commands it responds to. Add these definitions to the device template manually before you publish it.
+### Manage an unassigned device
-To learn more, see [Tutorial: Add an Azure IoT Edge device to your Azure IoT Central application](/training/modules/connect-iot-edge-device-to-iot-central/).
+An IoT Edge device that doesn't have an associated device template is known as an *unassigned* device. You can't use IoT Central features such as dashboards, device groups, analytics, rules, and jobs with unassigned devices. However, you can use the following capabilities with unassigned devices:
-### Update a deployment manifest
+* View raw data such as telemetry and properties.
+* Call device commands.
+* Read and write properties.
-When you replace the deployment manifest, any connected IoT Edge devices download the new manifest and update their modules. However, IoT Central doesn't update the interfaces in the device template with any changes to the module configuration. For example, if you replace the manifest shown in the previous snippet with the following manifest, you don't automatically see the **SendUnits** property in the **management** interface in the device template. Manually add the new property to the **management** interface for IoT Central to recognize it:
-```json
-{
- "modulesContent": {
- "$edgeAgent": {
- "properties.desired": {
- "schemaVersion": "1.0",
- "runtime": {
- "type": "docker",
- "settings": {
- "minDockerVersion": "v1.25",
- "loggingOptions": "",
- "registryCredentials": {}
- }
- },
- "systemModules": {
- "edgeAgent": {
- "type": "docker",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-agent:1.0.9",
- "createOptions": "{}"
- }
- },
- "edgeHub": {
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-hub:1.0.9",
- "createOptions": "{}"
- }
- }
- },
- "modules": {
- "SimulatedTemperatureSensor": {
- "version": "1.0",
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0",
- "createOptions": "{}"
- }
- }
- }
- }
- },
- "$edgeHub": {
- "properties.desired": {
- "schemaVersion": "1.0",
- "routes": {
- "route": "FROM /* INTO $upstream"
- },
- "storeAndForwardConfiguration": {
- "timeToLiveSecs": 7200
- }
- }
- },
- "SimulatedTemperatureSensor": {
- "properties.desired": {
- "SendData": true,
- "SendInterval": 10,
- "SendUnits": "Celsius"
- }
- }
- }
-}
-```
+You can also manage individual modules on unassigned devices:
++
+## IoT Edge device templates
+
+IoT Central device templates use models to describe the capabilities of IoT Edge devices. Device templates are optional for IoT Edge devices. The device template enables you to interact with telemetry, properties, and commands using IoT Central capabilities such as dashboards and analytics. The following diagram shows the structure of the model for an IoT Edge device:
++
+IoT Central models an IoT Edge device as follows:
+
+* Every IoT Edge device template has a capability model.
+* For every custom module listed in the deployment manifest, add a module definition if you want to use IoT Central to interact with that module.
+* A module capability model implements one or more module interfaces.
+* Each module interface contains telemetry, properties, and commands.
+
+You can generate the basic capability model based on the modules and properties defined in the device manifest. To learn more, see [Add modules and properties to device templates](howto-manage-deployment-manifests.md#add-modules-and-properties-to-device-templates).
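Module interfaces in an IoT Central model are expressed in the Digital Twins Definition Language (DTDL). As an illustrative sketch, a `management` interface covering the two writable properties from the example manifest might look like the following; the `@id` value is a placeholder:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:contoso:SimulatedTemperatureSensor:management;1",
  "@type": "Interface",
  "displayName": "management",
  "contents": [
    {
      "@type": "Property",
      "name": "SendData",
      "schema": "boolean",
      "writable": true
    },
    {
      "@type": "Property",
      "name": "SendInterval",
      "schema": "integer",
      "writable": true
    }
  ]
}
```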
## IoT Edge gateway patterns
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
You can repeat the above steps for _mytestselfcertsecondary_ certificate as well
This section assumes you're using a group enrollment to connect your IoT Edge device. Follow the steps in the previous sections to:

- [Generate root and device certificates](#generate-root-and-device-certificates)
-- [Create a group enrollment](#create-a-group-enrollment) <!-- No slightly different type of enrollment group - UPDATE!! -->
+- [Create a group enrollment](#create-a-group-enrollment)
To connect the IoT Edge device to IoT Central using the X.509 device certificate:
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
Title: Connect an IoT Edge transparent gateway to an Azure IoT Central applicati
description: How to connect devices through an IoT Edge transparent gateway to an IoT Central application. The article shows how to use both the IoT Edge 1.1 and 1.2 runtimes. Previously updated : 05/08/2022 Last updated : 10/11/2022
To follow the steps in this article, download the following files to your comput
+## Import deployment manifest
+
+Every IoT Edge device needs a deployment manifest to configure the IoT Edge runtime. To import a deployment manifest for the IoT Edge transparent gateway:
+
+1. Navigate to **Edge manifests**.
+
+1. Select **+ New**, enter a name for the deployment manifest such as *Transparent gateway* and then upload the *EdgeTransparentGatewayManifest.json* file you downloaded previously.
+
+1. Select **Create** to save the deployment manifest in your application.
+
## Add device templates

Both the downstream devices and the gateway device can use device templates in IoT Central. IoT Central lets you model the relationship between your downstream devices and your gateway so you can view and manage them after they're connected. A device template isn't required to attach a downstream device to a gateway.
To create a device template for an IoT Edge transparent gateway device:
1. On the **Customize** page of the wizard, enter a name such as *Edge Gateway* for the device template.
-1. On the **Customize** page of the wizard, check **Gateway device with downstream devices**.
+1. On the **Customize** page of the wizard, check **This is a gateway device**.
+
+1. On the **Review** page, select **Create**.
-1. On the **Customize** page of the wizard, select **Browse**. Upload the *EdgeTransparentGatewayManifest.json* file you downloaded previously.
+1. On the **Create a model** page, select **Custom model**.
1. Add an entry in **Relationships** to the downstream device template.
To add the devices:
1. Navigate to the devices page in your IoT Central application.
-1. Add an instance of the transparent gateway IoT Edge device. In this article, the gateway device ID is `edgegateway`.
+1. Add an instance of the transparent gateway IoT Edge device. When you add the device, make sure that you select the **Transparent gateway** deployment manifest. In this article, the gateway device ID is `edgegateway`.
1. Add one or more instances of the downstream device. In this article, the downstream devices are thermostats with IDs `thermostat1` and `thermostat2`.
To let you try out this scenario, the following steps show you how to deploy the
To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge 1.1 runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you run code to send simulated thermostat telemetry:
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway-1-1%2FDeployGatewayVMs.json)
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-1%2FDeployGatewayVMs.json)
When the two virtual machines are deployed and running, verify the IoT Edge gateway device is running on the `edgegateway` virtual machine:
When the two virtual machines are deployed and running, verify the IoT Edge gate
To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge 1.2 runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you run code to send simulated thermostat telemetry:
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway-1-2%2FDeployGatewayVMs.json)
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-2%2FDeployGatewayVMs.json)
When the two virtual machines are deployed and running, verify the IoT Edge gateway device is running on the `edgegateway` virtual machine:
Your transparent gateway is now configured and ready to start forwarding telemet
sudo nano /etc/aziot/config.toml
```
-1. Locate the `Certificate settings` settings. Add the certificate settings as follows:
+1. Locate the following settings in the configuration file. Add the certificate settings as follows:
```text
trust_bundle_cert = "file:///home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem"
To run the thermostat simulator on the `leafdevice` virtual machine:
...
```
+ > [!TIP]
+ > If you see an error when the downstream device tries to connect, try re-running the device provisioning steps above.
+
1. To see the telemetry in IoT Central, navigate to the **Overview** page for the **thermostat1** device:

    :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/downstream-device-telemetry.png" alt-text="Screenshot showing telemetry from the downstream device." lightbox="media/how-to-connect-iot-edge-transparent-gateway/downstream-device-telemetry.png":::
iot-central Howto Connect Eflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md
Title: Azure IoT Edge for Linux on Windows (EFLOW) with IoT Central | Microsoft Docs description: Learn how to connect Azure IoT Edge for Linux on Windows (EFLOW) with IoT Central -- Previously updated : 06/16/2022++ Last updated : 10/11/2022
In this how-to article, you learn how to:
+* Import a device manifest for an IoT Edge device.
* Create a device template for an IoT Edge device.
* Create an IoT Edge device in IoT Central.
* Connect and provision an EFLOW device.
To complete the steps in this article, you need:
To follow the steps in this article, download the [EnvironmentalSensorManifest.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/EnvironmentalSensorManifest.json) file to your computer.
+## Import a deployment manifest
+
+You use a deployment manifest to specify the modules to run on an IoT Edge device. IoT Central manages the deployment manifests for the IoT Edge devices in your solution. To import the deployment manifest for this example:
+
+1. In your IoT Central application, navigate to **Edge manifests**.
+
+1. Select **+ New**. Enter a name such as *Environmental Sensor* for your deployment manifest, and then upload the *EnvironmentalSensorManifest.json* file you downloaded previously.
+
+1. Select **Next** and then **Create**.
+
+The example deployment manifest includes a custom module called *SimulatedTemperatureSensor*.
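For reference, a custom module entry in a deployment manifest's `modules` section typically looks like the following sketch; the image tag and create options shown here are illustrative:

```json
{
  "SimulatedTemperatureSensor": {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
      "image": "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0",
      "createOptions": "{}"
    }
  }
}
```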
+
## Add device template

In this section, you create an IoT Central device template for an IoT Edge device. You import an IoT Edge manifest to get started, and then modify the template to add telemetry definitions and views:
In this section, you create an IoT Central device template for an IoT Edge devic
1. On the **Customize** page of the wizard, enter a name such as *Environmental Sensor Edge Device* for the device template.
-1. Select **Browse** and upload the *EnvironmentalSensorManifest.json* manifest file you downloaded previously.
- 1. On the **Review** page, select **Create**.
+1. On the **Create a model** page, select **Custom model**.
+
+1. In the model, select **Modules** and then **Import modules from manifest**. Select the **Environmental Sensor** deployment manifest and then select **Import**.
+
1. Select the **management** interface in the **SimulatedTemperatureSensor** module to view the two properties defined in the manifest:

    :::image type="content" source="media/howto-connect-eflow/imported-manifest.png" alt-text="Device template created from IoT Edge manifest.":::
Before you can connect a device to IoT Central, you must register the device in
1. In your IoT Central application, navigate to the **Devices** page and select **Environmental Sensor Edge Device** in the list of available templates.
-1. Select **+ New** to add a new device from the template. On the **Create new device** page, select **Create**.
+1. Select **+ New** to add a new device from the template.
+
+1. On the **Create new device** page, select the **Environmental Sensor** deployment manifest, and then select **Create**.
You now have a new device with the status **Registered**: