Updates from: 11/08/2022 02:12:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 02/17/2022 Last updated : 11/07/2022
In a self-asserted technical profile, you can use the **InputClaims** and **InputClaimsTransformations** elements to prepopulate the values of the claims that are collected from the user.
## Display claims
-The display claims feature is currently in **preview**.
- The **DisplayClaims** element contains a list of claims to be presented on the screen for collecting data from the user. To prepopulate the values of display claims, use the input claims that were previously described. The element may also contain a default value. The order of the claims in **DisplayClaims** specifies the order in which Azure AD B2C renders the claims on the screen. To force the user to provide a value for a specific claim, set the **Required** attribute of the **DisplayClaim** element to `true`.
Use output claims when:
- **Claims are output by output claims transformation.**
- **Setting a default value in an output claim** without collecting data from the user or returning the data from the validation technical profile. The `LocalAccountSignUpWithLogonEmail` self-asserted technical profile sets the **executed-SelfAsserted-Input** claim to `true`.
- **A validation technical profile returns the output claims** - Your technical profile may call a validation technical profile that returns some claims. You may want to bubble up the claims and return them to the next orchestration steps in the user journey. For example, when signing in with a local account, the self-asserted technical profile named `SelfAsserted-LocalAccountSignin-Email` calls the validation technical profile named `login-NonInteractive`. This technical profile validates the user credentials and also returns the user profile, such as `userPrincipalName`, `displayName`, `givenName`, and `surName`.
+- **A display control returns the output claims** - Your technical profile may have a reference to a [display control](display-controls.md). The display control returns some claims, such as the verified email address. You may want to bubble up the claims and return them to the next orchestration steps in the user journey.
The following example demonstrates the use of a self-asserted technical profile that uses both display claims and output claims.
active-directory Active Directory Saml Protocol Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-protocol-reference.md
Previously updated : 10/27/2021 Last updated : 11/4/2022
The SAML protocol requires the identity provider (Microsoft identity platform) and the service provider (the application) to exchange information about themselves.
When an application is registered with Azure AD, the app developer registers federation-related information with Azure AD. This information includes the **Redirect URI** and **Metadata URI** of the application.
-The Microsoft identity platform uses the cloud service's **Metadata URI** to retrieve the signing key and the logout URI. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, you can open the app in **Azure Active Directory -> App registrations**, and then in **Manage -> Authentication**, you can update the Logout URL. This way the Microsoft identity platform can send the response to the correct URL.
+The Microsoft identity platform uses the cloud service's **Metadata URI** to retrieve the signing key and the logout URI. This way the Microsoft identity platform can send the response to the correct URL. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>:
-Azure AD exposes tenant-specific and common (tenant-independent) SSO and single sign-out endpoints. These URLs represent addressable locations--they're not just identifiers--so you can go to the endpoint to read the metadata.
+- Open the app in **Azure Active Directory** and select **App registrations**
+- Under **Manage**, select **Authentication**. From there you can update the Logout URL.
-- The tenant-specific endpoint is located at `https://login.microsoftonline.com/<TenantDomainName>/FederationMetadata/2007-06/FederationMetadata.xml`. The _\<TenantDomainName>_ placeholder represents a registered domain name or TenantID GUID of an Azure AD tenant. For example, the federation metadata of the contoso.com tenant is at: https://login.microsoftonline.com/contoso.com/FederationMetadata/2007-06/FederationMetadata.xml
+Azure AD exposes tenant-specific and common (tenant-independent) SSO and single sign-out endpoints. These URLs represent addressable locations, and aren't only identifiers. You can then go to the endpoint to read the metadata.
+
+- The tenant-specific endpoint is located at `https://login.microsoftonline.com/<TenantDomainName>/FederationMetadata/2007-06/FederationMetadata.xml`. The *\<TenantDomainName>* placeholder represents a registered domain name or TenantID GUID of an Azure AD tenant. For example, the federation metadata of the `contoso.com` tenant is at: https://login.microsoftonline.com/contoso.com/FederationMetadata/2007-06/FederationMetadata.xml
- The tenant-independent endpoint is located at
- `https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml`. In this endpoint address, **common** appears instead of a tenant domain name or ID.
+ `https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml`. In this endpoint address, *common* appears instead of a tenant domain name or ID.
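For example, a minimal PowerShell sketch that downloads the federation metadata from the tenant-specific contoso.com endpoint above and reads the issuer from the document root:

```powershell
# Download the federation metadata for the contoso.com tenant.
$url = "https://login.microsoftonline.com/contoso.com/FederationMetadata/2007-06/FederationMetadata.xml"
[xml]$metadata = (Invoke-WebRequest -Uri $url -UseBasicParsing).Content

# The entityID attribute on the EntityDescriptor root identifies the issuer.
$metadata.EntityDescriptor.entityID
```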
## Next steps
active-directory Howto Configure App Instance Property Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-app-instance-property-locks.md
+
+ Title: "How to configure app instance property lock in your applications"
+description: How to increase app security by configuring property modification locks for sensitive properties of the application.
+ Last updated : 11/03/2022
+# Customer intent: As an application developer, I want to learn how to protect properties of my application instance from being modified.
+
+# How to configure app instance property lock for your applications (Preview)
+
+Application instance lock is a feature in Azure Active Directory (Azure AD) that allows sensitive properties of a multi-tenant application object to be locked for modification after the application is provisioned in another tenant.
+This feature provides application developers with the ability to lock certain properties if the application doesn't support scenarios that require configuring those properties.
+
+## What are sensitive properties?
+
+The following property usage scenarios are considered sensitive:
+
+- Credentials (`keyCredentials`, `passwordCredentials`) where usage type is `Sign`. This is a scenario where your application supports a SAML flow.
+- Credentials (`keyCredentials`, `passwordCredentials`) where usage type is `Verify`. In this scenario, your application supports an OIDC client credentials flow.
+- `TokenEncryptionKeyId`, which specifies the `keyId` of a public key from the `keyCredentials` collection. When configured, Azure AD encrypts all the tokens it emits by using the key to which this property points. The application code that receives the encrypted token must use the matching private key to decrypt the token before it can be used for the signed-in user.
+
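To check which of these usage scenarios apply to an existing application, you can list its key credentials and their usage types. A minimal sketch using the Microsoft Graph PowerShell SDK; the application object ID is a placeholder:

```powershell
# Inspect an application's key credentials and their usage types
# (Sign or Verify). Replace the placeholder with your app's object ID.
Connect-MgGraph -Scopes "Application.Read.All"

Get-MgApplication -ApplicationId "00000000-0000-0000-0000-000000000000" |
    Select-Object -ExpandProperty KeyCredentials |
    Format-List KeyId, Usage, Type
```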
+## Configure an app instance lock
+
+To configure an app instance lock using the Azure portal:
+
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant that contains the app registration you want to configure.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations**, and then select the application you want to configure.
+1. Select **Authentication**, and then select **Configure** under the *App instance property lock* section.
+
+ :::image type="content" source="media/howto-configure-app-instance-property-locks/app-instance-lock-configure-overview.png" alt-text="Screenshot of an app registration's app instance lock in the Azure portal.":::
+
+1. In the **App instance property lock** pane, enter the settings for the lock. The table following the image describes each setting and its parameters.
+
+ :::image type="content" source="media/howto-configure-app-instance-property-locks/app-instance-lock-configure-properties.png" alt-text="Screenshot of an app registration's app instance property lock context pane in the Azure portal.":::
+
+ | Field | Description |
+ | - | -- |
+ | **Enable property lock** | Specifies if the property locks are enabled. |
+ | **All properties** | Locks all sensitive properties without needing to select each property scenario. |
+ | **Credentials used for verification** | Locks the ability to add or update credential properties (`keyCredentials`, `passwordCredentials`) where usage type is `verify`. |
+ | **Credentials used for signing tokens** | Locks the ability to add or update credential properties (`keyCredentials`, `passwordCredentials`) where usage type is `sign`. |
+ | **Token Encryption KeyId** | Locks the ability to change the `tokenEncryptionKeyId` property. |
+
+1. Select **Save** to save your changes.
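If you want to script the same configuration, a minimal sketch against the Microsoft Graph beta endpoint follows. The `servicePrincipalLockConfiguration` property name and shape are based on the preview and should be treated as assumptions; the application object ID is a placeholder.

```powershell
# Sketch: enable the app instance property lock for all sensitive
# properties via the Microsoft Graph beta endpoint (preview).
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$body = @{
    servicePrincipalLockConfiguration = @{
        isEnabled     = $true
        allProperties = $true   # lock every sensitive property scenario
    }
}

# The application (object) ID below is a placeholder.
$uri = "https://graph.microsoft.com/beta/applications/00000000-0000-0000-0000-000000000000"
Invoke-MgGraphRequest -Method PATCH -Uri $uri -Body $body
```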
active-directory Tutorial V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md
Create a folder to host your application, for example *ElectronDesktopApp*.
```console
npm init -y
- npm install --save @azure/msal-node @microsoft/microsoft-graph-sdk isomorphic-fetch bootstrap jquery popper.js
+ npm install --save @azure/msal-node @microsoft/microsoft-graph-client isomorphic-fetch bootstrap jquery popper.js
npm install --save-dev electron@20.0.0
```
active-directory User Flow Add Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-add-custom-attributes.md
Previously updated : 03/02/2021 Last updated : 11/07/2022
+# Customer intent: As a tenant administrator, I want to create custom attributes for the self-service sign-up user flows.
# Define custom attributes for user flows
Once you've created a new user using a user flow that uses the newly created cus
## Next steps
-[Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md)
+- [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md)
+- [Customize the user flow language](user-flow-customize-language.md)
active-directory Active Directory Users Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-restore.md
Previously updated : 08/17/2022 Last updated : 11/07/2022
# Restore or remove a recently deleted user using Azure Active Directory
-After you delete a user, the account remains in a suspended state for 30 days. During that 30-day window, the user account can be restored, along with all its properties. After that 30-day window passes, the permanent deletion process is automatically started.
+After you delete a user, the account remains in a suspended state for 30 days. During that 30-day window, the user account can be restored, along with all its properties. After that 30-day window passes, the permanent deletion process is automatically started and can't be stopped. During this time, the management of soft-deleted users is blocked. This limitation also applies to restoring a soft-deleted user via a match during Tenant sync cycle for on-premises hybrid scenarios.
You can view your restorable users, restore a deleted user, or permanently delete a user using Azure Active Directory (Azure AD) in the Azure portal.
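For scripted scenarios, the same view and restore operations are available through Microsoft Graph. A minimal sketch using the Microsoft Graph PowerShell SDK; the object ID is a placeholder:

```powershell
Connect-MgGraph -Scopes "User.ReadWrite.All"

# List users that are still inside the 30-day soft-delete window.
$uri = "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user"
$deleted = Invoke-MgGraphRequest -Method GET -Uri $uri -OutputType PSObject
$deleted.value | Format-List Id, DisplayName, DeletedDateTime

# Restore a specific soft-deleted user, along with all its properties.
Restore-MgDirectoryDeletedItem -DirectoryObjectId "00000000-0000-0000-0000-000000000000"
```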
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Title: What's new? Release notes - Azure Active Directory | Microsoft Docs description: Learn what is new with Azure Active Directory, such as the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes.
Previously updated : 1/31/2022 Last updated : 11/7/2022
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in the [Archive for What's new in Azure Active Directory](whats-new-archive.md).
**Service category:** Provisioning **Product capability:** AAD Connect Cloud Sync
-Microsoft will stop support for Azure AD provisioning agent with versions 1.1.818.0 and below starting Feb 1,2023. If you are using Azure AD cloud sync, please make sure you have the latest version of the agent. You can info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller)
+Microsoft will stop support for the Azure AD provisioning agent with versions 1.1.818.0 and below starting Feb 1, 2023. If you're using Azure AD cloud sync, please make sure you have the latest version of the agent. You can find info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller).
-You can find out which version of the agent you are using as follows:
+You can find out which version of the agent you're using as follows:
-1. Going to the domain server which you have the agent installed
+1. Go to the domain server on which the agent is installed
1. Right-click on the Microsoft Azure AD Connect Provisioning Agent app
-1. Click on "Details" tab and you can find the version number there
+1. Select the "Details" tab to find the version number
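Alternatively, you can read the version from the agent binary in PowerShell; the install path below is an assumption, so adjust it to your environment:

```powershell
# Read the provisioning agent's file version directly from the binary.
# The path below is an assumed default install location.
$agentExe = "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe"
(Get-Item $agentExe).VersionInfo.ProductVersion
```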
> [!NOTE] > Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
For more information, see: [What are Lifecycle Workflows? (Public Preview)](../governance/what-are-lifecycle-workflows.md)
**Service category:** Access Reviews **Product capability:** Identity Governance
-This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation detects user affiliation with other users within the group, and leverages the scoring mechanism we built by computing the user's average distance with other users in the group. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md).
+This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation detects user affiliation with other users within the group, and applies the scoring mechanism we built by computing the user's average distance with other users in the group. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md).
When configuring writeback of attributes from Azure AD to SAP SuccessFactors Emp
To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an MFA notification in the Microsoft Authenticator app. We've also refreshed the Azure portal admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update we have also added the highly requested ability for admins to exclude user groups from each feature.
-The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature leveraging the rollout controls we have built. Number Matching will begin to be enabled for all users of the Microsoft Authenticator app starting 27th of February 2023.
+The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature applying the rollout controls we have built. Number Matching will begin to be enabled for all users of the Microsoft Authenticator app starting 27th of February 2023.
For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md).
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
If you are using the Azure portal to create a workflow, you can customize existi
1. On the **configure scope** page select the **Trigger type** and execution conditions to be used for this workflow. For more information on what can be configured, see: [Configure scope](understanding-lifecycle-workflows.md#configure-scope).
-1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json)
+1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
:::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options.":::
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
Use the following steps to create a pre-hire workflow that will generate a TAP a
:::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
:::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png":::
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
Use the following steps to create a scheduled leaver workflow that will configur
7. Next, you will configure the basic information about the workflow. This information includes when the workflow will trigger, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**. :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-basics.png" alt-text="Screenshot of leaver template basics information for a workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-basics.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**.
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
:::image type="content" source="media/tutorial-lifecycle-workflows/leaver-scope.png" alt-text="Screenshot of reviewing scope details for a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-scope.png"::: 9. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you are finished.
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
You can add extra expressions using **And/Or** to create complex conditionals, a
[![Extra expressions.](media/understanding-lifecycle-workflows/workflow-8.png)](media/understanding-lifecycle-workflows/workflow-8.png#lightbox) > [!NOTE]
-> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json)
+> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
For more information, see [Create a lifecycle workflow.](create-lifecycle-workflow.md)
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Title: Grant tenant-wide admin consent to an application
-description: Learn how to grant tenant-wide consent to an application so that end-users are not prompted for consent when signing in to an application.
+description: Learn how to grant tenant-wide consent to an application so that end-users aren't prompted for consent when signing in to an application.
Previously updated : 09/02/2022 Last updated : 11/07/2022
+zone_pivot_groups: enterprise-apps-minus-aad-powershell
#customer intent: As an admin, I want to grant tenant-wide admin consent to an application in Azure AD.
In this article, you'll learn how to grant tenant-wide admin consent to an application in Azure Active Directory (Azure AD). To understand how individual users consent, see [Configure how end-users consent to applications](configure-user-consent.md).
-When you grant tenant-wide admin consent to an application, you give the application access on behalf of the whole organization to the permissions requested. Granting admin consent on behalf of an organization is a sensitive operation, potentially allowing the application's publisher access to significant portions of your organization's data, or the permission to do highly privileged operations. Examples of such operations might be role management, full access to all mailboxes or all sites, and full user impersonation.
+When you grant tenant-wide admin consent to an application, you give the application access on behalf of the whole organization to the permissions requested. Granting admin consent on behalf of an organization is a sensitive operation, potentially allowing the application's publisher access to significant portions of your organization's data, or the permission to do highly privileged operations. Examples of such operations might be role management, full access to all mailboxes or all sites, and full user impersonation. Carefully review the permissions that the application is requesting before you grant consent.
By default, granting tenant-wide admin consent to an application will allow all users to access the application unless otherwise restricted. To restrict which users can sign-in to an application, configure the app to [require user assignment](application-properties.md#assignment-required) and then [assign users or groups to the application](assign-user-or-group-access-portal.md).
-Tenant-wide admin consent to an app grants the app and the app's publisher access to your organization's data. Carefully review the permissions that the application is requesting before you grant consent. For more information on consenting to applications, see [Azure Active Directory consent framework](../develop/consent-framework.md).
-
-Granting tenant-wide admin consent may revoke any permissions which had previously been granted tenant-wide for that application. Permissions which have previously been granted by users on their own behalf will not be affected.
+Granting tenant-wide admin consent may revoke any permissions that had previously been granted tenant-wide for that application. Permissions that have previously been granted by users on their own behalf won't be affected.
## Prerequisites
To grant tenant-wide admin consent, you need:
You can grant tenant-wide admin consent through *Enterprise applications* if the application has already been provisioned in your tenant. For example, an app could be provisioned in your tenant if at least one user has already consented to the application. For more information, see [How and why applications are added to Azure Active Directory](../develop/active-directory-how-applications-are-added.md).

To grant tenant-wide admin consent to an app listed in **Enterprise applications**:

1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites section.
where:
As always, carefully review the permissions an application requests before granting consent.
+In the following example, you'll grant delegated permissions defined by a resource enterprise application to a client enterprise application on behalf of all users.
+
+In the example, the resource enterprise application is Microsoft Graph of object ID `7ea9e944-71ce-443d-811c-71e8047b557a`. The Microsoft Graph defines the delegated permissions, `User.Read.All` and `Group.Read.All`. The consentType is `AllPrincipals`, indicating that you're consenting on behalf of all users in the tenant. The object ID of the client enterprise application is `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`.
+
+> [!CAUTION]
+> Be careful! Permissions granted programmatically are not subject to review or confirmation. They take effect immediately.
+
+## Grant admin consent for delegated permissions
+
+1. Connect to Microsoft Graph PowerShell:
+
+ ```powershell
+ Connect-MgGraph -Scopes "Application.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All"
+ ```
+
+1. Retrieve all the delegated permissions defined by Microsoft Graph (the resource application) in your tenant. Identify the delegated permissions that you'll grant the client application. In this example, the delegated permissions are `User.Read.All` and `Group.Read.All`.
+
+ ```powershell
+ Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'" -Property Oauth2PermissionScopes | Select -ExpandProperty Oauth2PermissionScopes | fl
+ ```
+
+1. Grant the delegated permissions to the client enterprise application by running the following request.
+
+```powershell
+$params = @{
+
+ "ClientId" = "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94"
+ "ConsentType" = "AllPrincipals"
+ "ResourceId" = "7ea9e944-71ce-443d-811c-71e8047b557a"
+ "Scope" = "User.Read.All Group.Read.All"
+}
+
+New-MgOauth2PermissionGrant -BodyParameter $params |
+ Format-List Id, ClientId, ConsentType, ResourceId, Scope
+```
+
+1. Confirm that you've granted tenant-wide admin consent by running the following request.
+
+ ```powershell
+    Get-MgOauth2PermissionGrant -Filter "clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' and consentType eq 'AllPrincipals'"
+ ```
+## Grant admin consent for application permissions
+
+In the following example, you grant the Microsoft Graph enterprise application (the principal of ID `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`) an app role (application permission) of ID `df021288-bdef-4463-88db-98f22de89214` that's exposed by a resource enterprise application of ID `7ea9e944-71ce-443d-811c-71e8047b557a`.
+
+1. Connect to Microsoft Graph PowerShell:
+
+ ```powershell
+ Connect-MgGraph -Scopes "Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"
+ ```
+
+1. Retrieve the app roles defined by Microsoft Graph in your tenant. Identify the app role that you'll grant the client enterprise application. In this example, the app role ID is `df021288-bdef-4463-88db-98f22de89214`.
+
+ ```powershell
+ Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'" -Property AppRoles | Select -ExpandProperty appRoles |fl
+ ```
+
+1. Grant the application permission (app role) to the client enterprise application by running the following request.
+
+```powershell
+ $params = @{
+ "PrincipalId" ="b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94"
+    "ResourceId" = "7ea9e944-71ce-443d-811c-71e8047b557a"
+ "AppRoleId" = "df021288-bdef-4463-88db-98f22de89214"
+}
+
+New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '7ea9e944-71ce-443d-811c-71e8047b557a' -BodyParameter $params |
+ Format-List Id, AppRoleId, CreatedDateTime, PrincipalDisplayName, PrincipalId, PrincipalType, ResourceDisplayName
+```
+
+Use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to grant both delegated and application permissions.
+
+## Grant admin consent for delegated permissions
+
+In the following example, you'll grant delegated permissions defined by a resource enterprise application to a client enterprise application on behalf of all users.
+
+In the example, the resource enterprise application is Microsoft Graph of object ID `7ea9e944-71ce-443d-811c-71e8047b557a`. The Microsoft Graph defines the delegated permissions, `User.Read.All` and `Group.Read.All`. The consentType is `AllPrincipals`, indicating that you're consenting on behalf of all users in the tenant. The object ID of the client enterprise application is `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`.
+
+> [!CAUTION]
+> Be careful! Permissions granted programmatically are not subject to review or confirmation. They take effect immediately.
+
+1. Retrieve all the delegated permissions defined by Microsoft Graph (the resource application) in your tenant. Identify the delegated permissions that you'll grant the client application. In this example, the delegated permissions are `User.Read.All` and `Group.Read.All`.
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq 'Microsoft Graph'&$select=id,displayName,appId,oauth2PermissionScopes
+ ```
+
+1. Grant the delegated permissions to the client enterprise application by running the following request.
+
+ ```http
+ POST https://graph.microsoft.com/v1.0/oauth2PermissionGrants
+
+ Request body
+ {
+ "clientId": "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94",
+ "consentType": "AllPrincipals",
+ "resourceId": "7ea9e944-71ce-443d-811c-71e8047b557a",
+ "scope": "User.Read.All Group.Read.All"
+ }
+ ```
+1. Confirm that you've granted tenant-wide admin consent by running the following request.
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/oauth2PermissionGrants?$filter=clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' and consentType eq 'AllPrincipals'
+ ```
+## Grant admin consent for application permissions
+
+In the following example, you grant the Microsoft Graph enterprise application (the principal of ID `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`) an app role (application permission) of ID `df021288-bdef-4463-88db-98f22de89214` that's exposed by a resource enterprise application of ID `7ea9e944-71ce-443d-811c-71e8047b557a`.
+
+1. Retrieve the app roles defined by Microsoft Graph in your tenant. Identify the app role that you'll grant the client enterprise application. In this example, the app role ID is `df021288-bdef-4463-88db-98f22de89214`.
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq 'Microsoft Graph'&$select=id,displayName,appId,appRoles
+ ```
+1. Grant the application permission (app role) to the client enterprise application by running the following request.
+
+ ```http
+ POST https://graph.microsoft.com/v1.0/servicePrincipals/7ea9e944-71ce-443d-811c-71e8047b557a/appRoleAssignedTo
+
+ Request body
+
+ {
+ "principalId": "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94",
+ "resourceId": "7ea9e944-71ce-443d-811c-71e8047b557a",
+ "appRoleId": "df021288-bdef-4463-88db-98f22de89214"
+ }
+ ```
## Next steps

[Configure how end-users consent to applications](configure-user-consent.md)
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Previously updated : 10/23/2021 Last updated : 11/07/2022 zone_pivot_groups: enterprise-apps-minus-graph
-# Review permissions granted to applications
+# Review permissions granted to enterprise applications
In this article, you'll learn how to review permissions granted to applications in your Azure Active Directory (Azure AD) tenant. You may need to review permissions when you've detected a malicious application or the application has been granted more permissions than is necessary.
-The steps in this article apply to all applications that were added to your Azure Active Directory (Azure AD) tenant via user or admin consent. For more information on consenting to applications, see [Azure Active Directory consent framework](../develop/consent-framework.md).
+The steps in this article apply to all applications that were added to your Azure Active Directory (Azure AD) tenant via user or admin consent. For more information on consenting to applications, see [User and admin consent](user-admin-consent-overview.md).
## Prerequisites
To review permissions granted to applications, you need:
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator.
- A Service principal owner who isn't an administrator is able to invalidate refresh tokens.
-## Review application permissions
+## Review permissions
:::zone pivot="portal"
Each option generates PowerShell scripts that enable you to control user access to the application.
:::zone pivot="aad-powershell"
+## Revoke permissions
+Use the following Azure AD PowerShell script to revoke all permissions granted to an application.

```powershell
$spOAuth2PermissionsGrants | ForEach-Object {
    Remove-AzureADOAuth2PermissionGrant -ObjectId $_.ObjectId
}
# Get all application permissions for the service principal
$spApplicationPermissions = Get-AzureADServiceAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
-# Remove all delegated permissions
+# Remove all application permissions
$spApplicationPermissions | ForEach-Object { Remove-AzureADServiceAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.objectId }
$sp = Get-MgServicePrincipal -ServicePrincipalID "$ServicePrincipalID"
Example: Get-MgServicePrincipal -ServicePrincipalId '22c1770d-30df-49e7-a763-f39d2ef9b369'
-# Get all application permissions for the service principal
+# Get all delegated permissions for the service principal
$spOAuth2PermissionsGrants = Get-MgOauth2PermissionGrant -All | Where-Object { $_.clientId -eq $sp.Id }

# Remove all delegated permissions
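The removal step itself is cut off above; a minimal sketch of it, assuming the standard Microsoft Graph PowerShell cmdlet for deleting a permission grant:

```powershell
# Remove each delegated permission grant found for the service principal.
$spOAuth2PermissionsGrants | ForEach-Object {
    Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id
}
```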
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
Previously updated : 10/03/2022 Last updated : 11/04/2022
To use this feature, you need:
* A user who's a **Global Administrator** or **Security Administrator** for the Azure AD tenant. * Azure AD Premium 1, or Premium 2 [license](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), to access the Azure AD sign-in logs in the Azure portal.
-Depending on where you want to route the audit log data, you need either of the following:
+Depending on where you want to route the audit log data, you need one of the following endpoints:
* An Azure storage account that you have *ListKeys* permissions for. We recommend that you use a general storage account and not a Blob storage account. For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage). * An Azure Event Hubs namespace to integrate with third-party solutions.
Depending on where you want to route the audit log data, you need either of the
## Cost considerations
-If you already have an Azure AD license, you need an Azure subscription to set up the storage account and Event Hub. The Azure subscription comes at no cost, but you have to pay to utilize Azure resources, including the storage account that you use for archival and the Event Hub that you use for streaming. The amount of data and, thus, the cost incurred, can vary significantly depending on the tenant size.
+If you already have an Azure AD license, you need an Azure subscription to set up the storage account and Event Hubs. The Azure subscription comes at no cost, but you have to pay to utilize Azure resources, including the storage account that you use for archival and the Event Hubs that you use for streaming. The amount of data and, thus, the cost incurred, can vary significantly depending on the tenant size.
### Storage size for activity logs
The following table contains a cost estimate of, depending on the size of the te
| Sign-ins | 100,000 | 15&nbsp;million | 1.7 TB | $35.41 | $424.92 |
-### Event Hub messages for activity logs
+### Event Hubs messages for activity logs
-Events are batched into approximately five-minute intervals and sent as a single message that contains all the events within that timeframe. A message in the Event Hub has a maximum size of 256 KB, and if the total size of all the messages within the timeframe exceeds that volume, multiple messages are sent.
+Events are batched into approximately five-minute intervals and sent as a single message that contains all the events within that timeframe. A message in the Event Hubs has a maximum size of 256 KB. If the total size of all the messages within the timeframe exceeds that volume, multiple messages are sent.
For example, about 18 events per second ordinarily occur for a large tenant of more than 100,000 users, a rate that equates to 5,400 events every five minutes. Because audit logs are about 2 KB per event, this equates to 10.8 MB of data. Therefore, 43 messages are sent to the Event Hub in that five-minute interval.
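The message count follows from the batch arithmetic:

$$
18\ \text{events/s} \times 300\ \text{s} = 5400\ \text{events};\qquad 5400 \times 2\ \text{KB} = 10{,}800\ \text{KB};\qquad \left\lceil \frac{10{,}800\ \text{KB}}{256\ \text{KB}} \right\rceil = 43\ \text{messages}
$$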
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
Title: Sign-in logs in Azure Active Directory - preview | Microsoft Docs
-description: Overview of the sign-in logs in Azure Active Directory including new features in preview.
+ Title: Sign-in logs (preview) in Azure Active Directory | Microsoft Docs
+description: Conceptual information about Azure AD sign-in logs, including new features in preview.
Previously updated : 10/03/2022 Last updated : 11/04/2022
-# Sign-in logs in Azure Active Directory - preview
+# Sign-in logs in Azure Active Directory (preview)
-As an IT administrator, you want to know how your IT environment is doing. The information about your system's health enables you to assess whether and how you need to respond to potential issues.
+Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure Active Directory (Azure AD) are a powerful type of [activity log](overview-reports.md) that IT administrators can analyze. This article explains how to access and utilize the sign-in logs.
-To support you with this goal, the Azure Active Directory (Azure AD) portal gives you access to three activity logs:
-- **[Sign-in](concept-sign-ins.md)** – Information about sign-ins and how your resources are used by your users.
+Two other activity logs are also available to help monitor the health of your tenant:
- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant, such as users and group management or updates applied to your tenant's resources.
- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by a provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
-The classic sign-in log in Azure AD provides you with an overview of interactive user sign-ins. Three additional sign-in logs are now in preview:
+The classic sign-in logs in Azure AD provide you with an overview of interactive user sign-ins. Three more sign-in logs are now in preview:
- Non-interactive user sign-ins
- Service principal sign-ins
- Managed identities for Azure resources sign-ins
-This article gives you an overview of the sign-in activity report with the preview of non-interactive, application, and managed identities for Azure resources sign-ins. For information about the sign-in report without the preview features, see [Sign-in logs in Azure Active Directory](concept-sign-ins.md).
-
-## What can you do with it?
-
-The sign-in log provides answers to questions like:
--- What is the sign-in pattern of a user, application or service?
+This article gives you an overview of the sign-in activity report with the preview of non-interactive, application, and managed identities for Azure resources sign-ins. For information about the sign-in report without the preview features, see [Sign-in logs in Azure Active Directory](concept-sign-ins.md).
-- How many users, apps or services have signed in over a week?
+## How do you access the sign-in logs?
-- WhatΓÇÖs the status of these sign-ins?
+You can always access your own sign-in history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com).
+To access the sign-ins log for a tenant, you must have one of the following roles:
-## Who can access the data?
+- Global Administrator
+- Security Administrator
+- Security Reader
+- Global Reader
+- Reports Reader
-- Users in the Security Administrator, Security Reader, and Report Reader roles
+The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. After you upgrade to a premium license, it takes a couple of days for the data to show up in Graph; data from before the upgrade isn't included.
-- Global Administrators
+**To access the Azure AD sign-ins log preview:**
-- Any user (non-admins) can access their own sign-ins
+1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role.
+1. Go to **Azure Active Directory** > **Sign-ins log**.
+1. Select the **Try out our new sign-ins preview** link.
-## What Azure AD license do you need?
+ ![Screenshot of the preview link on the sign-in logs page.](./media/concept-all-sign-ins/sign-in-logs-preview-link.png)
-The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you also can access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. It will take a couple of days for the data to show up in Graph after you upgrade to a premium license with no data activities before the upgrade.
+ To toggle back to the legacy view, select the **Click here to leave the preview** link.
-## Where can you find it in the Azure portal?
+ ![Screenshot of the leave preview link on the sign-in logs page.](./media/concept-all-sign-ins/sign-in-logs-leave-preview-link.png)
-The Azure portal provides you with several options to access the log. For example, on the Azure Active Directory menu, you can open the log in the **Monitoring** section.
+You can also access the sign-in logs from the following areas of Azure AD:
-![Screenshot of the sign-in logs menu option.](./media/concept-sign-ins/sign-ins-logs-menu.png)
+- Users
+- Groups
+- Enterprise applications
-Additionally, you can access the sign-in log using this link: [https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns)
+On the sign-in logs page, you can switch between:
-On the sign-ins page, you can switch between:
+- **Interactive user sign-ins:** Sign-ins where a user provides an authentication factor, such as a password, a response through an MFA app, a biometric factor, or a QR code.
-- **Interactive user sign-ins** - Sign-ins where a user provides an authentication factor, such as a password, a response through an MFA app, a biometric factor, or a QR code.
+- **Non-interactive user sign-ins:** Sign-ins performed by a client on behalf of a user. These sign-ins don't require any interaction or authentication factor from the user. For example, authentication and authorization using refresh and access tokens that don't require a user to enter credentials.
-- **Non-interactive user sign-ins** - Sign-ins performed by a client on behalf of a user. These sign-ins don't require any interaction or authentication factor from the user. For example, authentication and authorization using refresh and access tokens that don't require a user to enter credentials.
-- **Service principal sign-ins** - Sign-ins by apps and service principals that do not involve any user. In these sign-ins, the app or service provides a credential on its own behalf to authenticate or access resources.
-- **Managed identities for Azure resources sign-ins** - Sign-ins by Azure resources that have secrets managed by Azure. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)
+- **Service principal sign-ins:** Sign-ins by apps and service principals that don't involve any user. In these sign-ins, the app or service provides a credential on its own behalf to authenticate or access resources.
+- **Managed identities for Azure resources sign-ins:** Sign-ins by Azure resources that have secrets managed by Azure. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)
![Screenshot of the sign-in log types.](./media/concept-all-sign-ins/sign-ins-report-types.png)
-Each tab on the sign-ins page shows the default columns below. Some tabs have additional columns:
-- Sign-in date
-- Request ID
-- User name or user ID
+## View the sign-ins log
-- Application name or application ID
-- Status of the sign-in
-- IP address of the device used for the sign-in
+To more effectively view the sign-ins log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down.
### Interactive user sign-ins
-Interactive user sign-ins are sign-ins where a user provides an authentication factor to Azure AD or interacts directly with Azure AD or a helper app, such as the Microsoft Authenticator app. The factors users provide include passwords, responses to MFA challenges, biometric factors, or QR codes that a user provides to Azure AD or to a helper app.
-
-> [!NOTE]
-> This log also includes federated sign-ins from identity providers that are federated to Azure AD.
+During an interactive user sign-in, a user provides an authentication factor to Azure AD or interacts directly with Azure AD or a helper app, such as the Microsoft Authenticator app. Users can provide passwords, responses to MFA challenges, biometric factors, or QR codes to Azure AD or to a helper app. This log also includes federated sign-ins from identity providers that are federated to Azure AD.
> [!NOTE]
-> The interactive user sign-in log used to contain some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non-interactive, they were included in the interactive user sign-in log for additional visibility. Once the non-interactive user sign-in log entered public preview in November 2020, those non-interactive sign-in logs were moved to the non-interactive user sign in log for increased accuracy.
-
+> The interactive user sign-in log previously contained some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non-interactive, they were included in the interactive user sign-in log for additional visibility. Once the non-interactive user sign-in log entered public preview in November 2020, those non-interactive sign-in logs were moved to the non-interactive user sign in log for increased accuracy.
-**Report size:** small <br>
+**Report size:** small </br>
**Examples:**

- A user provides username and password in the Azure AD sign-in screen.
- A user passes an SMS MFA challenge.
- A user provides a biometric gesture to unlock their Windows PC with Windows Hello for Business.
- A user is federated to Azure AD with an AD FS SAML assertion.

In addition to the default fields, the interactive sign-in log also shows:

- The sign-in location
-- Whether conditional access has been applied
+- Whether Conditional Access has been applied
You can customize the list view by clicking **Columns** in the toolbar.
-![Screenshot of the interactive user sign-in columns that can be customized.](./media/concept-all-sign-ins/columns-interactive.png "Interactive user sign-in columns")
-
-Customizing the view enables you to display additional fields or remove fields that are already displayed.
-
-![Screenshot of all interactive columns.](./media/concept-all-sign-ins/all-interactive-columns.png)
+![Screenshot of the customize columns button.](./media/concept-all-sign-ins/sign-in-logs-columns-preview.png)
### Non-interactive user sign-ins
-Non-interactive user sign-ins are sign-ins that were performed by a client app or OS components on behalf of a user. Like interactive user sign-ins, these sign-ins are done on behalf of a user. Unlike interactive user sign-ins, these sign-ins do not require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user. In general, the user will perceive these sign-ins as happening in the background of the userΓÇÖs activity.
-
+Like interactive user sign-ins, non-interactive sign-ins are done on behalf of a user. These sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user. In general, the user will perceive these sign-ins as happening in the background.
-**Report size:** Large <br>
+**Report size:** Large </br>
**Examples:**

- A client app uses an OAuth 2.0 refresh token to get an access token.
- A client uses an OAuth 2.0 authorization code to get an access token and refresh token.
- A user performs single sign-on (SSO) to a web or Windows app on an Azure AD joined PC (without providing an authentication factor or interacting with an Azure AD prompt).
- A user signs in to a second Microsoft Office app while they have a session on a mobile device using FOCI (Family of Client IDs).

In addition to the default fields, the non-interactive sign-in log also shows:

- Resource ID
- Number of grouped sign-ins

You can't customize the fields shown in this report.
-![Screenshot of the disabled columns option.](./media/concept-all-sign-ins/disabled-columns.png "Disabled columns")
+![Screenshot of the disabled columns option.](./media/concept-all-sign-ins/disabled-columns.png)
-To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period, which share all the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the user or client do not change state, the IP address, resource, and all other information is the same for each access token request. When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins will be from the same entity are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) will have a value greater than 1 in the # sign-ins column. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the non-interactive users when the following data matches:
+To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period. The non-interactive sign-ins share the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the state of the user or client doesn't change, the IP address, resource, and all other information is the same for each access token request. The only state that does change is the date and time of the sign-in.
-- Application
+When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins will be from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) will have a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
-- User
+Sign-ins are aggregated in the non-interactive user log when the following data matches:
+- Application
+- User
- IP address
- Status
- Resource ID

The IP address of non-interactive sign-ins doesn't match the actual source IP of where the refresh token request is coming from. Instead, it shows the original IP used for the original token issuance.
-## Service principal sign-ins
+### Service principal sign-ins
-Unlike interactive and non-interactive user sign-ins, service principal sign-ins do not involve a user. Instead, they are sign-ins by any non-user account, such as apps or service principals (except managed identity sign-in, which are in included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret to authenticate or access resources.
+Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any non-user account, such as apps or service principals (except managed identity sign-ins, which are included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret, to authenticate or access resources.
-**Report size:** Large <br>
+**Report size:** Large </br>
**Examples:**

- A service principal uses a certificate to authenticate and access the Microsoft Graph.
- An application uses a client secret to authenticate in the OAuth Client Credentials flow.
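As a sketch of the second example, a confidential client can authenticate with a client secret using MSAL for Python; Azure AD records the token request as a service principal sign-in. The tenant, client ID, and secret below are placeholders:

```python
import msal  # pip install msal

app = msal.ConfidentialClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",   # placeholder
    client_credential="your-client-secret",             # placeholder
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

# OAuth 2.0 client credentials flow: no user is involved.
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)
if "access_token" in result:
    print("Service principal authenticated")
else:
    print("Error:", result.get("error_description"))
```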
-This report has a default list view that shows:
-- Sign-in date
-- Request ID
-- Service principal name or ID
-- Status
-- IP address
-- Resource name
-- Resource ID
-- Number of sign-ins

You can't customize the fields shown in this report.
-![Disabled columns](./media/concept-all-sign-ins/disabled-columns.png "Disabled columns")
To make it easier to digest the data in the service principal sign-in logs, service principal sign-in events are grouped. Sign-ins from the same entity under the same conditions are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the service principal report when the following data matches:

- Service principal name or ID
- Status
- IP address
- Resource name or ID
-## Managed identity for Azure resources sign-ins
+### Managed identity for Azure resources sign-ins
-Managed identity for Azure resources sign-ins are sign-ins that were performed by resources that have their secrets managed by Azure to simplify credential management.
+Managed identities for Azure resources sign-ins are sign-ins that were performed by resources that have their secrets managed by Azure to simplify credential management. A VM with managed credentials uses Azure AD to get an Access Token.
-**Report size:** Small <br>
+**Report size:** Small </br>
**Examples:**
-A VM with managed credentials uses Azure AD to get an Access Token.
--
-This report has a default list view that shows:
-- Managed identity ID
-- Managed identity Name
-- Resource
-- Resource ID
-- Number of grouped sign-ins
-You can't customize the fields shown in this report.
+ You can't customize the fields shown in this report.
To make it easier to digest the data in the managed identities for Azure resources sign-in logs, sign-in events are grouped. Sign-ins from the same entity are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the managed identities report when all of the following data matches:

- Managed identity name or ID
- Status
- Resource name or ID
-Select an item in the list view to display all sign-ins that are grouped under a node.
-
-Select a grouped item to see all details of the sign-in.
--
-## Sign-in error code
-
-If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item.
+Select an item in the list view to display all sign-ins that are grouped under a node. Select a grouped item to see all details of the sign-in.
-![Screenshot shows a detailed information view.](./media/concept-all-sign-ins/error-code.png)
-
-While the log item provides you with a failure reason, there are cases where you might get more information using the [sign-in error lookup tool](https://login.microsoftonline.com/error). For example, if available, this tool provides you with remediation steps.
-
-![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
---
-## Filter sign-in activities
+### Filter the results
-By setting a filter, you can narrow down the scope of the returned sign-in data. Azure AD provides you with a broad range of additional filters you can set. When setting your filter, you should always pay special attention to your configured **Date** range filter. A proper date range filter ensures that Azure AD only returns the data you really care about.
+Filtering the sign-ins log is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential.
-The **Date** range filter enables to you to define a timeframe for the returned data.
-Possible values are:
+Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters. Take note of the **Date** range in your filter to ensure that Azure AD only returns the data you need. The filter you configure for interactive sign-ins is persisted for non-interactive sign-ins and vice versa.
-- One month
+Select the **Add filters** option from the top of the table to get started.
-- Seven days
-- Twenty-four hours
-- Custom
-![Date range filter](./media/concept-all-sign-ins/date-range-filter.png)
+![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-all-sign-ins/sign-in-logs-filter-preview.png)
+There are several filter options to choose from. Below are some notable options and details.
+- **User:** The *user principal name* (UPN) of the user in question.
+- **Status:** Options are *Success*, *Failure*, and *Interrupted*.
+- **Resource:** The name of the service used for the sign-in.
+- **Conditional access:** The status of the Conditional Access (CA) policy. Options are:
+ - *Not applied:* No policy applied to the user and application during sign-in.
+ - *Success:* One or more CA policies applied to the user and application (but not necessarily the other conditions) during sign-in.
+ - *Failure:* The sign-in satisfied the user and application condition of at least one CA policy and grant controls are either not satisfied or set to block access.
+- **IP addresses:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
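Filtering is also available programmatically. Here's a hedged sketch using the Microsoft Graph sign-ins endpoint with an OData `$filter`; token acquisition is omitted, the filter value is illustrative, and the Graph signIn reference documents which properties support filtering:

```python
import requests  # pip install requests

token = "<access-token>"  # assumes a token with AuditLog.Read.All
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    headers={"Authorization": f"Bearer {token}"},
    params={"$filter": "userPrincipalName eq 'ada@contoso.com'", "$top": "10"},
)
for s in resp.json().get("value", []):
    print(s["createdDateTime"], s["appDisplayName"], s["status"]["errorCode"])
```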
+The following table provides the options and descriptions for the **Client app** filter option.
+> [!NOTE]
+> Due to privacy commitments, Azure AD does not populate this field to the home tenant in the case of a cross-tenant scenario.
+
+|Name|Modern authentication|Description|
+||:-:||
|Authenticated SMTP| |Used by POP and IMAP clients to send email messages.|
+|Autodiscover| |Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.|
+|Exchange ActiveSync| |This filter shows all sign-in attempts where the EAS protocol has been attempted.|
|Browser|![Blue checkmark.](./media/concept-all-sign-ins/check.png)|Shows all sign-in attempts from users using web browsers.|
+|Exchange ActiveSync| | Shows all sign-in attempts from users with client apps using Exchange ActiveSync to connect to Exchange Online|
+|Exchange Online PowerShell| |Used to connect to Exchange Online with remote PowerShell. If you block basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell module to connect. For instructions, see [Connect to Exchange Online PowerShell using multi-factor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).|
+|Exchange Web Services| |A programming interface that's used by Outlook, Outlook for Mac, and third-party apps.|
+|IMAP4| |A legacy mail client using IMAP to retrieve email.|
+|MAPI over HTTP| |Used by Outlook 2010 and later.|
+|Mobile apps and desktop clients|![Blue checkmark.](./media/concept-all-sign-ins/check.png)|Shows all sign-in attempts from users using mobile apps and desktop clients.|
+|Offline Address Book| |A copy of address list collections that are downloaded and used by Outlook.|
+|Outlook Anywhere (RPC over HTTP)| |Used by Outlook 2016 and earlier.|
+|Outlook Service| |Used by the Mail and Calendar app for Windows 10.|
+|POP3| |A legacy mail client using POP3 to retrieve email.|
+|Reporting Web Services| |Used to retrieve report data in Exchange Online.|
+|Other clients| |Shows all sign-in attempts from users where the client app isn't included or unknown.|
+
+## Analyze the sign-in logs
+
+Now that your sign-in logs table is formatted appropriately, you can more effectively analyze the data. Some common scenarios are described here, but they aren't the only ways to analyze sign-in data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools.
+
+### Sign-in error code
+
+If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we can't document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue.
+
+![Screenshot of a sign-in error code.](./media/concept-all-sign-ins/error-code.png)
+
+For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-aadsts-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
+![Screenshot of the error code lookup tool.](./media/concept-all-sign-ins/error-code-lookup-tool.png)
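If you've downloaded sign-in records, you can surface error codes in bulk and then feed any code into the lookup tool mentioned above. A minimal sketch, assuming a Graph token with AuditLog.Read.All (filtering is done client-side to stay independent of which properties the service can filter on):

```python
import requests  # pip install requests

token = "<access-token>"  # assumption: acquired elsewhere
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    headers={"Authorization": f"Bearer {token}"},
)
for s in resp.json().get("value", []):
    status = s["status"]
    if status["errorCode"] != 0:  # 0 indicates a successful sign-in
        print(s["createdDateTime"], s["userPrincipalName"],
              status["errorCode"], status.get("failureReason"))
```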
-### Filter user sign-ins
+### Authentication details
-The filter for interactive and non-interactive sign-ins is the same. Because of this, the filter you have configured for interactive sign-ins is persisted for non-interactive sign-ins and vice versa.
+The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt:
+- A list of authentication policies applied, such as Conditional Access or Security Defaults.
+- A list of session lifetime policies applied, such as Sign-in frequency or Remember MFA.
+- The sequence of authentication methods used to sign in.
+- If the authentication attempt was successful and the reason why.
+This information allows you to troubleshoot each step in a user’s sign-in. Use these details to track:
+- The volume of sign-ins protected by MFA.
+- The reason for the authentication prompt, based on the session lifetime policies.
+- Usage and success rates for each authentication method.
+- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business.
+- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP.
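For example, to approximate the volume of MFA-protected sign-ins from downloaded records, you could tally the `authenticationRequirement` property carried by signIn records. A sketch over illustrative data:

```python
from collections import Counter

# Illustrative records; in practice, page through the Graph signIns endpoint.
sign_ins = [
    {"authenticationRequirement": "multiFactorAuthentication"},
    {"authenticationRequirement": "singleFactorAuthentication"},
    {"authenticationRequirement": "multiFactorAuthentication"},
]

tally = Counter(s.get("authenticationRequirement") for s in sign_ins)
total = sum(tally.values())
mfa = tally.get("multiFactorAuthentication", 0)
print(f"MFA-protected sign-ins: {mfa} of {total}")
```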
+While viewing the sign-ins log, select a sign-in event, and then select the **Authentication Details** tab.
+![Screenshot of the Authentication Details tab.](media/concept-all-sign-ins/authentication-details-tab.png)
-## Access the new sign-in activity logs
+When analyzing authentication details, take note of the following details:
-The sign-ins activity report in the Azure portal provides you with a simple method to switch the preview report on and off. If you have the preview logs enabled, you get a new menu that gives you access to all sign-in activity report types.
+- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).
+- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include:
+ - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
+ - The **Primary authentication** row isn't initially logged.
+## Sign-in data used by other services
-To access the new sign-in logs with non-interactive and application sign-ins:
+Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage.
-1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**.
+### Risky sign-in data in Azure AD Identity Protection
- ![Select Azure AD](./media/concept-all-sign-ins/azure-services.png)
+Sign-in log data visualization that relates to risky sign-ins is available in the **Azure AD Identity Protection** overview, which uses the following data:
-2. In the **Monitoring** section, click **Sign-ins**.
+- Risky users
+- Risky user sign-ins
+- Risky service principals
+- Risky service principal sign-ins
- ![Select sign-ins](./media/concept-all-sign-ins/sign-ins.png)
+For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md).
-3. Click the **Preview** bar.
+![Screenshot of risky users in Identity Protection.](media/concept-all-sign-ins/id-protection-overview.png)
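The same risk data is available programmatically. A hedged sketch listing risky users through Microsoft Graph, assuming a token with the IdentityRiskyUser.Read.All permission:

```python
import requests  # pip install requests

token = "<access-token>"  # assumption: acquired elsewhere
resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers",
    headers={"Authorization": f"Bearer {token}"},
)
for user in resp.json().get("value", []):
    print(user["userPrincipalName"], user["riskLevel"], user["riskState"])
```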
- ![Enable new view](./media/concept-all-sign-ins/enable-new-preview.png)
+### Azure AD application and authentication sign-in activity
-4. To switch back to the default view, click the **Preview** bar again.
+With an application-centric view of your sign-in data, you can answer questions such as:
- ![Restore classic view](./media/concept-all-sign-ins/switch-back.png)
+- Who is using my applications?
+- What are the top three applications in my organization?
+- How is my newest application doing?
+To view application-specific sign-in data, go to **Azure AD** and select **Usage & insights** from the Monitoring section. These reports provide a closer look at sign-ins for Azure AD application activity and AD FS application activity. For more information, see [Azure AD Usage & insights](concept-usage-insights-report.md).
+![Screenshot of the Azure AD application activity report.](media/concept-all-sign-ins/azure-ad-app-activity.png)
+Azure AD Usage & insights also provides the **Authentication methods activity** report, which breaks down authentication by the method used. Use this report to see how many of your users are set up with MFA or passwordless authentication.
+![Screenshot of the Authentication methods report.](media/concept-all-sign-ins/azure-ad-authentication-methods.png)
+### Microsoft 365 activity logs
+You can view Microsoft 365 activity logs from the [Microsoft 365 admin center](/office365/admin/admin-overview/about-the-admin-center). Microsoft 365 activity and Azure AD activity logs share a significant number of directory resources. Only the Microsoft 365 admin center provides a full view of the Microsoft 365 activity logs.
+You can also access the Microsoft 365 activity logs programmatically by using the [Office 365 Management APIs](/office/office-365-management-api/office-365-management-apis-overview).
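As a sketch of that programmatic route, the Management Activity API serves audit content in blobs. This assumes you've already started an Audit.AzureActiveDirectory subscription via the API's subscriptions/start operation and hold a token for the manage.office.com audience; the tenant ID is a placeholder:

```python
import requests  # pip install requests

tenant_id = "00000000-0000-0000-0000-000000000000"  # placeholder
token = "<access-token>"  # audience: https://manage.office.com

resp = requests.get(
    f"https://manage.office.com/api/v1.0/{tenant_id}/activity/feed/subscriptions/content",
    headers={"Authorization": f"Bearer {token}"},
    params={"contentType": "Audit.AzureActiveDirectory"},
)
for blob in resp.json():
    # Each contentUri returns a batch of individual audit events.
    print(blob["contentUri"])
```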
## Next steps
active-directory Concept Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-audit-logs.md
Previously updated : 10/03/2022 Last updated : 11/04/2022 # Audit logs in Azure Active Directory
-As an IT administrator, you want to know how your IT environment is doing. The information about your system’s health enables you to assess whether and how you need to respond to potential issues.
+Azure Active Directory (Azure AD) activity logs include audit logs, which provide a comprehensive report on every logged event in Azure AD. Changes to applications, groups, users, and licenses are all captured in the Azure AD audit logs.
-To support you with this goal, the Azure Active Directory (Azure AD) portal gives you access to three activity logs:
+Two other activity logs are also available to help monitor the health of your tenant:
- **[Sign-ins](concept-sign-ins.md)** – Information about sign-ins and how your resources are used by your users.
-- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant such as users and group management or updates applied to your tenant’s resources.
- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.

This article gives you an overview of the audit logs.
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Previously updated : 10/05/2022 Last updated : 11/04/2022
# Provisioning logs in Azure Active Directory
-As an IT administrator, you want to know how your IT environment is doing. The information about your system’s health enables you to assess whether and how you need to respond to potential issues.
+Azure Active Directory (Azure AD) integrates with several third party services to provision users into your tenant. If you need to troubleshoot an issue with a provisioned user, you can use the information captured in the Azure AD provisioning logs to help find a solution.
-To support you with this goal, the Azure Active Directory portal gives you access to three activity logs:
+Two other activity logs are also available to help monitor the health of your tenant:
- **[Sign-ins](concept-sign-ins.md)** – Information about sign-ins and how your resources are used by your users.
- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant such as users and group management or updates applied to your tenant’s resources.
-- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.

This article gives you an overview of the provisioning logs.

## What can I do with it?

You can use the provisioning logs to find answers to questions like:
You can use the provisioning logs to find answers to questions like:
- What users from Workday were successfully created in Active Directory?
-## How can I access it?
+## How do you access the provisioning logs?
-To view the provisioning activity report, your tenant must have an Azure AD Premium license associated with it. To upgrade your Azure AD edition, see [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md).
+To view the provisioning logs, your tenant must have an Azure AD Premium license associated with it. To upgrade your Azure AD edition, see [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md).
Application owners can view logs for their own applications. The following roles are required to view provisioning logs:
To access the provisioning log data, you have the following options:
- Download the provisioning logs as a CSV or JSON file.
-## What is the default view?
+## View the provisioning logs
-A provisioning log has a default list view that shows:
+To more effectively view the provisioning log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down.
-- The identity
-- The action
-- The source system
-- The target system
-- The status
-- The date
+### Customize the layout
-You can customize the list view by selecting **Columns** on the toolbar.
+The provisioning log has a default view, but you can customize columns.
+
+1. Select **Columns** from the menu at the top of the log.
+1. Select the columns you want to view and select the **Save** button at the bottom of the window.
![Screenshot that shows the button for customizing columns.](./media/concept-provisioning-logs/column-chooser.png "Column chooser")

This area enables you to display more fields or remove fields that are already displayed.
-![Screenshot that shows available columns with some selected.](./media/concept-provisioning-logs/available-columns.png "Available columns")
-
-Select an item from the list to get more detailed information, such as the steps taken to provision the user and tips for troubleshooting issues.
-
-![Screenshot that shows detailed information.](./media/concept-provisioning-logs/steps.png "Filter")
--
-## Filter provisioning activities
+## Filter the results
When you filter your provisioning data, some filter values are dynamically populated based on your tenant. For example, if you don't have any "create" events in your tenant, there won't be a **Create** filter option.
-In the default view, you can select the following filters:
-- Identity
-- Date
-- Status
-- Action
-![Screenshot that shows filter values.](./media/concept-provisioning-logs/default-filter.png "Filter")
- The **Identity** filter enables you to specify the name or the identity that you care about. This identity might be a user, group, role, or other object.
-You can search by the name or ID of the object. The ID varies by scenario. For example, when you're provisioning an object from Azure AD to Salesforce, the source ID is the object ID of the user in Azure AD.
-The target ID is the ID of the user at Salesforce. When you're provisioning from Workday to Active Directory, the source ID is the Workday worker employee ID.
+You can search by the name or ID of the object. The ID varies by scenario.
+- If you're provisioning an object *from Azure AD to Salesforce*, the **source ID** is the object ID of the user in Azure AD. The **target ID** is the ID of the user at Salesforce.
+- If you're provisioning *from Workday to Azure AD*, the **source ID** is the Workday worker employee ID. The **target ID** is the ID of the user in Azure AD.
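To see these source and target IDs programmatically, here's a hedged sketch against the Graph provisioning logs endpoint; it assumes a token with AuditLog.Read.All, and the property names follow the provisioningObjectSummary resource:

```python
import requests  # pip install requests

token = "<access-token>"  # assumption: acquired elsewhere
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/provisioning",
    headers={"Authorization": f"Bearer {token}"},
)
for event in resp.json().get("value", []):
    src = event.get("sourceIdentity") or {}
    tgt = event.get("targetIdentity") or {}
    print(src.get("id"), "->", tgt.get("id"), event.get("provisioningAction"))
```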
> [!NOTE]
> The name of the user might not always be present in the **Identity** column. There will always be one ID.

The **Date** filter enables you to define a timeframe for the returned data. Possible values are:

- One month
- Seven days
- 30 days
- 24 hours
-- Custom time interval
-When you select a custom time frame, you can configure a start date and an end date.
+- Custom time interval (configure a start date and an end date)
The **Status** filter enables you to select:
The **Action** filter enables you to filter these actions:
In addition to the filters of the default view, you can set the following filters.
-![Screenshot that shows fields that you can add as filters.](./media/concept-provisioning-logs/add-filter.png "Pick a field")
- **Job ID**: A unique job ID is associated with each application that you've enabled provisioning for.
- **Cycle ID**: The cycle ID uniquely identifies the provisioning cycle. You can share this ID with product support to look up the cycle in which this event occurred.
In addition to the filters of the default view, you can set the following filter
- **Application**: You can show only records of applications with a display name that contains a specific string.
-## Provisioning details
-
-When you select an item in the provisioning list view, you get more details about this item. The details are grouped into the following tabs.
+## Analyze the provisioning logs
-![Screenshot that shows four tabs that contain provisioning details.](./media/concept-provisioning-logs/provisioning-tabs.png "Tabs")
+When you select an item in the provisioning list view, you get more details about this item, such as the steps taken to provision the user and tips for troubleshooting issues. The details are grouped into four tabs.
- **Steps**: Outlines the steps taken to provision an object. Provisioning an object can consist of four steps:
Use the following table to better understand how to resolve errors that you find
|Error code|Description|
|---|---|
-|Conflict, EntryConflict|Correct the conflicting attribute values in either Azure AD or the application. Or, review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.|
+|Conflict,<br>EntryConflict|Correct the conflicting attribute values in either Azure AD or the application. Or, review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.|
|TooManyRequests|The target app rejected this attempt to update the user because it's overloaded and receiving too many requests. There's nothing to do. This attempt will automatically be retried. Microsoft has also been notified of this issue.|
|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing it from working. This attempt will automatically be retried in 40 minutes.|
-|InsufficientRights, MethodNotAllowed, NotPermitted, Unauthorized| Azure AD authenticated with the target application but wasn't authorized to perform the update. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md).|
+|InsufficientRights,<br>MethodNotAllowed,<br>NotPermitted,<br>Unauthorized| Azure AD authenticated with the target application but wasn't authorized to perform the update. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md).|
|UnprocessableEntity|The target application returned an unexpected response. The configuration of the target application might not be correct, or a service issue with the target application might be preventing it from working.|
|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There's nothing to do. This attempt will automatically be retried in 40 minutes.|
|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Ensure that the user exists. To force a new matching of all users, use the Microsoft Graph API to [restart the job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). <br><br>Restarting provisioning will trigger an initial cycle, which can take time to complete. Restarting provisioning also deletes the cache that the provisioning service uses to operate. That means all users and groups in the tenant will have to be evaluated again, and certain provisioning events might be dropped.|
|NotImplemented | The target app returned an unexpected response. The configuration of the app might not be correct, or a service issue with the target app might be preventing it from working. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md). |
-|MandatoryFieldsMissing, MissingValues |The user couldn't be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure that the required fields aren't omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.|
+|MandatoryFieldsMissing,<br>MissingValues |The user couldn't be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure that the required fields aren't omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.|
|SchemaAttributeNotFound |The operation couldn't be performed because an attribute was specified that doesn't exist in the target application. See the [documentation](../app-provisioning/customize-application-attributes.md) on attribute customization and ensure that your configuration is correct.|
|InternalError |An internal service error occurred within the Azure AD provisioning service. There's nothing to do. This attempt will automatically be retried in 40 minutes.|
|InvalidDomain |The operation couldn't be performed because an attribute value contains an invalid domain name. Update the domain name on the user or add it to the permitted list in the target application. |
Use the following table to better understand how to resolve errors that you find
|DuplicateSourceEntries | The operation couldn't be completed because more than one user was found with the configured matching attributes. Remove the duplicate user, or [reconfigure your attribute mappings](../app-provisioning/customize-application-attributes.md).|
|ImportSkipped | When each user is evaluated, the system tries to import the user from the source system. This error commonly occurs when the user who's being imported is missing the matching property defined in your attribute mappings. Without a value present on the user object for the matching attribute, the system can't evaluate scoping, matching, or export changes. The presence of this error doesn't indicate that the user is in scope, because you haven't yet evaluated scoping for the user.|
|EntrySynchronizationSkipped | The provisioning service has successfully queried the source system and identified the user. No further action was taken on the user and they were skipped. The user might have been out of scope, or the user might have already existed in the target system with no further changes required.|
-|SystemForCrossDomainIdentityManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. For example, if you do a [GET Group request](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group) to retrieve a group, provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, you'll get this error.|
-|SystemForCrossDomainIdentityManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).|
+|SystemForCrossDomainIdentity<br>ManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. For example, if you do a [GET Group request](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group) to retrieve a group, provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, you'll get this error.|
+|SystemForCrossDomainIdentity<br>ManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).|
|SchemaPropertyCanOnlyAcceptValue|The property in the target system can only accept one value, but the property in the source system has multiple. Ensure that you either map a single-valued attribute to the property that is throwing an error, update the value in the source to be single-valued, or remove the attribute from the mappings.|

## Next steps
active-directory Concept Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-reporting-api.md
Title: Get started with the Azure AD reporting API | Microsoft Docs description: How to get started with the Azure Active Directory reporting API -+ - Previously updated : 08/26/2022- Last updated : 11/04/2022+ - # Get started with the Azure Active Directory reporting API
-Azure Active Directory provides you with a variety of [reports](overview-reports.md), containing useful information for applications such as SIEM systems, audit, and business intelligence tools.
-
-By using the Microsoft Graph API for Azure AD reports, you can gain programmatic access to the data through a set of REST-based APIs. You can call these APIs from a variety of programming languages and tools.
-
-This article provides you with an overview of the reporting API, including ways to access it.
+Azure Active Directory provides you with several [reports](overview-reports.md) that contain useful information for applications such as security information and event management (SIEM) systems, audit, and business intelligence tools. By using the Microsoft Graph API for Azure AD reports, you can gain programmatic access to the data through a set of REST-based APIs. You can call these APIs from various programming languages and tools.
-If you run into issues, see [how to get support for Azure Active Directory](../fundamentals/active-directory-troubleshooting-support-howto.md).
+This article provides you with an overview of the reporting API, including ways to access it. If you run into issues, see [how to get support for Azure Active Directory](../fundamentals/active-directory-troubleshooting-support-howto.md).
## Prerequisites

To access the reporting API, with or without user intervention, you need to:
-1. Assign roles (Security Reader, Security Admin, Global Admin)
-2. Register an application
-3. Grant permissions
-4. Gather configuration settings
+1. Confirm your roles and licenses
+1. Register an application
+1. Grant permissions
+1. Gather configuration settings
For detailed instructions, see the [prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md).

## API Endpoints
-The Microsoft Graph API endpoint for audit logs is `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits` and the Microsoft Graph API endpoint for sign-ins is `https://graph.microsoft.com/v1.0/auditLogs/signIns`. For more information, see the [audit API reference](/graph/api/resources/directoryaudit) and [sign-in API reference](/graph/api/resources/signIn).
+Microsoft Graph API endpoints:
+- **Audit logs:** `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits`
+- **Sign-in logs:** `https://graph.microsoft.com/v1.0/auditLogs/signIns`
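A minimal sketch of calling the audit log endpoint with a bearer token; token acquisition follows the prerequisites article, and the printed properties belong to the directoryAudit resource:

```python
import requests  # pip install requests

token = "<access-token>"  # assumes AuditLog.Read.All and Directory.Read.All
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
    headers={"Authorization": f"Bearer {token}"},
)
for record in resp.json().get("value", [])[:5]:
    print(record["activityDateTime"], record["activityDisplayName"])
```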
-You can use the [Identity Protection risk detections API](/graph/api/resources/identityprotection-root) to gain programmatic access to security detections using Microsoft Graph. For more information, see [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md).
-
-You can also use the [provisioning logs API](/graph/api/resources/provisioningobjectsummary) to get programmatic access to provisioning events in your tenant.
+Programmatic access APIs:
+- **Security detections:** [Identity Protection risk detections API](/graph/api/resources/identityprotection-root)
+- **Tenant provisioning events:** [Provisioning logs API](/graph/api/resources/provisioningobjectsummary)
+
+Check out the following helpful resources for Microsoft Graph API:
+- [Audit log API reference](/graph/api/resources/directoryaudit)
+- [Sign-in log API reference](/graph/api/resources/signIn)
+- [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md)
+
## APIs with Microsoft Graph Explorer
-You can use the [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) to verify your sign-in and audit API data. Make sure to sign in to your account using both of the sign-in buttons in the Graph Explorer UI, and set **AuditLog.Read.All** and **Directory.Read.All** permissions for your tenant as shown.
+You can use the [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) to verify your sign-in and audit API data. Sign in to your account using both of the sign-in buttons in the Graph Explorer UI, and set **AuditLog.Read.All** and **Directory.Read.All** permissions for your tenant as shown.
![Graph Explorer](./media/concept-reporting-api/graph-explorer.png)
active-directory Concept Sign In Diagnostics Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-in-diagnostics-scenarios.md
Title: Sign in diagnostics for Azure AD scenarios description: Lists the scenarios that are supported by the sign-in diagnostics for Azure AD. -+ - -+ Previously updated : 08/26/2022---
-# Customer intent: As an Azure AD administrator, I want to know the scenarios that are supported by the sign in diagnostics for Azure AD so that I can determine whether the tool can help me with a sign-in issue.
Last updated : 11/04/2022++
+# Customer intent: As an Azure AD administrator, I want to know the scenarios that are supported by the sign in diagnostics for Azure AD so that I can determine whether the tool can help me with a sign-in issue.
# Sign in diagnostics for Azure AD scenarios
The sign-in diagnostic for Azure AD provides you with support for the following
- Pass Through Authentication
- - Seamless single sign on
----
+ - Seamless single sign-on
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
Title: Sign-in logs in Azure Active Directory | Microsoft Docs
-description: Overview of the sign-in logs in Azure Active Directory.
+description: Conceptual information about Azure AD sign-in logs.
Previously updated : 10/06/2022 Last updated : 11/04/2022 # Sign-in logs in Azure Active Directory
-As an IT administrator, you want to know how your IT environment is doing. The information about your system’s health enables you to assess whether and how you need to respond to potential issues.
+Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure Active Directory (Azure AD) are a powerful type of [activity log](overview-reports.md) that IT administrators can analyze. This article explains how to access and utilize the sign-in logs.
-To support you with this goal, the Azure Active Directory portal gives you access to three activity logs:
+Two other activity logs are also available to help monitor the health of your tenant:
+- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant, such as users and group management or updates applied to your tenant’s resources.
+- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by a provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
-- **[Sign-ins](concept-sign-ins.md)** – Information about sign-ins and how your resources are used by your users.
-- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant such as users and group management or updates applied to your tenant’s resources.
-- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
-This article gives you an overview of the sign-ins report.
--
-## What can you do with it?
+## What can you do with sign-in logs?
You can use the sign-ins log to find answers to questions like:

- What’s the status of these sign-ins?
- WhatΓÇÖs the status of these sign-ins?
+## How do you access the sign-in logs?
-## Who can access it?
-
-You can always access your own sign-ins history using this link: [https://mysignins.microsoft.com](https://mysignins.microsoft.com)
-
-To access the sign-ins log, you need to be:
-- A global administrator
-- A user in one of the following roles:
- - Security administrator
-
- - Security reader
-
- - Global reader
-
- - Reports reader
---
-## What Azure AD license do you need?
-
-The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you also can access the sign-in activity report through the Microsoft Graph API.
--
-## Where can you find it in the Azure portal?
-
-The Azure portal provides you with several options to access the log. For example, on the Azure Active Directory menu, you can open the log in the **Monitoring** section.
-
-![Open sign-in logs](./media/concept-sign-ins/sign-ins-logs-menu.png)
-
-Additionally, you can get directly get to the sign-in logs using this link: [https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns)
--
-## What is the default view?
-
-A sign-ins log has a default list view that shows:
-- The sign-in date
-- The related user
-- The application the user has signed in to
-- The sign-in status
-- The status of the risk detection
-- The status of the multi-factor authentication (MFA) requirement
-![Screenshot shows the Office 365 SharePoint Online Sign-ins.](./media/concept-sign-ins/sign-in-activity.png "Sign-in activity")
-
-You can customize the list view by clicking **Columns** in the toolbar.
-
-![Screenshot shows the Columns option in the Sign-ins page.](./media/concept-sign-ins/19.png "Sign-in activity")
-
-The **Columns** dialog gives you access to the selectable attributes. In a sign-in report, you can't have fields
-that have more than one value for a given sign-in request as column. This is, for example, true for authentication details, conditional access data and network location.
-
-![Screenshot shows the Columns dialog box where you can select attributes.](./media/concept-sign-ins/columns.png "Sign-in activity")
----
-## Sign-in error code
-
-If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item.
-
-![sign-in error code](./media/concept-all-sign-ins/error-code.png)
-
-While the log item provides you with a failure reason, there are cases where you might get more information using the [sign-in error lookup tool](https://login.microsoftonline.com/error). For example, if available, this tool provides you with remediation steps.
-
-![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
--
+You can always access your own sign-ins history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com).
-## Filter sign-in activities
+To access the sign-ins log for a tenant, you must have one of the following roles:
+- Global Administrator
+- Security Administrator
+- Security Reader
+- Global Reader
+- Reports Reader
-You can filter the data in a log to narrow it down to a level that works for you:
+The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. After you upgrade to a premium license, it takes a couple of days for the data to show up in Graph; sign-in data from before the upgrade isn't available.
-![Screenshot shows the Add filters option.](./media/concept-sign-ins/04.png "Sign-in activity")
+**To access the Azure AD sign-ins log:**
-**Request ID** - The ID of the request you care about.
+1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role.
+1. Go to **Azure Active Directory** > **Sign-ins log**.
-**User** - The name or the user principal name (UPN) of the user you care about.
+ ![Screenshot of the Monitoring side menu with sign-in logs highlighted.](./media/concept-sign-ins/side-menu-sign-in-logs.png)
-**Application** - The name of the target application.
-
-**Status** - The sign-in status you care about:
+You can also access the sign-in logs from the following areas of Azure AD:
-- Success--- Failure--- Interrupted
+- Users
+- Groups
+- Enterprise applications
+## View the sign-ins log
-**IP address** - The IP address of the device used to connect to your tenant.
+To more effectively view the sign-ins log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down.
-The **Location** - The location the connection was initiated from:
+### Customize the layout
-- City
+The sign-ins log has a default view, but you can customize the view using over 30 column options.
-- State / Province
+1. Select **Columns** from the menu at the top of the log.
+1. Select the columns you want to view and select the **Save** button at the bottom of the window.
-- Country/Region
+![Screenshot of the sign-in logs page with the Columns option highlighted.](./media/concept-sign-ins/sign-in-logs-columns.png)
+### Filter the results <h3 id="filter-sign-in-activities"></h3>
-**Resource** - The name of the service used for the sign-in.
+Filtering the sign-ins log is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential.
+Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters.
-**Resource ID** - The ID of the service used for the sign-in.
+Select the **Add filters** option from the top of the table to get started.
+![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-sign-ins/sign-in-logs-filter.png)
-**Client app** - The type of the client app used to connect to your tenant:
+There are several filter options to choose from. Below are some notable options and details.
-![Client app filter](./media/concept-sign-ins/client-app-filter.png)
+- **User:** The *user principal name* (UPN) of the user in question.
+- **Status:** Options are *Success*, *Failure*, and *Interrupted*.
+- **Resource:** The name of the service used for the sign-in.
+- **Conditional access:** The status of the Conditional Access (CA) policy. Options are:
+ - *Not applied:* No policy applied to the user and application during sign-in.
+ - *Success:* One or more CA policies applied to the user and application (but not necessarily the other conditions) during sign-in.
+ - *Failure:* The sign-in satisfied the user and application condition of at least one CA policy and grant controls are either not satisfied or set to block access.
+- **IP addresses:** There is no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
+The following table provides the options and descriptions for the **Client app** filter option.
> [!NOTE]
> Due to privacy commitments, Azure AD does not populate this field to the home tenant in the case of a cross-tenant scenario.

|Name|Modern authentication|Description|
||:-:||
|Authenticated SMTP| |Used by POP and IMAP clients to send email messages.|
The **Location** - The location the connection was initiated from:
|Outlook Service| |Used by the Mail and Calendar app for Windows 10.|
|POP3| |A legacy mail client using POP3 to retrieve email.|
|Reporting Web Services| |Used to retrieve report data in Exchange Online.|
-|Other clients| |Shows all sign-in attempts from users where the client app is not included or unknown.|
-------
-**Operating system** - The operating system running on the device used sign-on to your tenant.
--
-**Device browser** - If the connection was initiated from a browser, this field enables you to filter by browser name.
--
-**Correlation ID** - The correlation ID of the activity.
--
+|Other clients| |Shows all sign-in attempts from users where the client app isn't included or unknown.|
+## Analyze the sign-in logs
-**Conditional access** - The status of the applied conditional access rules
+Now that your sign-in logs table is formatted appropriately, you can more effectively analyze the data. Some common scenarios are described here, but they aren't the only ways to analyze sign-in data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools.
-- **Not applied**: No policy applied to the user and application during sign-in.
+### Sign-in error codes
-- **Success**: One or more conditional access policies applied to the user and application (but not necessarily the other conditions) during sign-in.
+If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we cannot document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue.
-- **Failure**: The sign-in satisfied the user and application condition of at least one Conditional Access policy and grant controls are either not satisfied or set to block access.---
-## Sign-ins data shortcuts
-
-Azure AD and the Azure portal both provide you with additional entry points to sign-ins data:
-- The Identity security protection overview
-- Users
-- Groups
-- Enterprise applications
-### Users sign-ins data in Identity security protection
-
-The user sign-in graph in the **Identity security protection** overview page shows weekly aggregations of sign-ins. The default for the time period is 30 days.
-
-![Screenshot shows a graph of Sign-ins over a month.](./media/concept-sign-ins/06.png "Sign-in activity")
-
-When you click on a day in the sign-in graph, you get an overview of the sign-in activities for this day.
-
-Each row in the sign-in activities list shows:
-
-* Who has signed in?
-* What application was the target of the sign-in?
-* What is the status of the sign-in?
-* What is the MFA status of the sign-in?
-
-By clicking an item, you get more details about the sign-in operation:
-- User ID
-- User
-- Username
-- Application ID
-- Application
-- Client
-- Location
-- IP address
-- Date
-- MFA Required
-- Sign-in status
-> [!NOTE]
-> IP addresses are issued in such a way that there is no definitive connection between an IP address and where the computer with that address is physically located. Mapping IP addresses is complicated by the fact that mobile providers and VPNs issue IP addresses from central pools that are often very far from where the client device is actually used.
-> Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
+![Screenshot of a sign-in error code.](./media/concept-sign-ins/error-code.png)
-On the **Users** page, you get a complete overview of all user sign-ins by clicking **Sign-ins** in the **Activity** section.
+For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-aadsts-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
-![Screenshot shows the Activity section where you can select Sign-ins.](./media/concept-sign-ins/08.png "Sign-in activity")
+![Screenshot of the error code lookup tool.](./media/concept-sign-ins/error-code-lookup-tool.png)
-## Authentication details
+### Authentication details
-The **Authentication Details** tab located within the sign-ins report provides the following information, for each authentication attempt:
+The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt:
-- A list of authentication policies applied (such as Conditional Access, per-user MFA, Security Defaults)
-- A list of session lifetime policies applied (such as Sign-in frequency, Remember MFA, Configurable Token lifetime)
-- The sequence of authentication methods used to sign-in
-- Whether or not the authentication attempt was successful
-- Detail about why the authentication attempt succeeded or failed
+- A list of authentication policies applied, such as Conditional Access or Security Defaults.
+- A list of session lifetime policies applied, such as Sign-in frequency or Remember MFA.
+- The sequence of authentication methods used to sign in.
+- If the authentication attempt was successful and the reason why.
-This information allows admins to troubleshoot each step in a user’s sign-in, and track:
+This information allows you to troubleshoot each step in a user’s sign-in. Use these details to track:
-- Volume of sign-ins protected by multi-factor authentication
-- Reason for authentication prompt based on the session lifetime policies
-- Usage and success rates for each authentication method
-- Usage of passwordless authentication methods (such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business)
-- How frequently authentication requirements are satisfied by token claims (where users are not interactively prompted to enter a password, enter an SMS OTP, and so on)
+- The volume of sign-ins protected by MFA.
+- The reason for the authentication prompt, based on the session lifetime policies.
+- Usage and success rates for each authentication method.
+- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business.
+- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP.
-While viewing the Sign-ins report, select the **Authentication Details** tab:
+While viewing the sign-ins log, select a sign-in event, and then select the **Authentication Details** tab.
-![Screenshot of the Authentication Details tab](media/concept-sign-ins/auth-details-tab.png)
+![Screenshot of the Authentication Details tab](media/concept-sign-ins/authentication-details-tab.png)
->[!NOTE]
->**OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).
+When analyzing authentication details, take note of the following details:
->[!IMPORTANT]
->The **Authentication details** tab can initially show incomplete or inaccurate data, until log information is fully aggregated. Known examples include:
->- A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
->- The **Primary authentication** row is not initially logged.
+- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).
+- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include:
+ - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
+ - The **Primary authentication** row isn't initially logged.
+## Sign-in data used by other services
-## Usage of managed applications
+Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage.
-With an application-centric view of your sign-in data, you can answer questions such as:
+### Risky sign-in data in Azure AD Identity Protection
-* Who is using my applications?
-* What are the top three applications in your organization?
-* How is my newest application doing?
+Sign-in log data visualization that relates to risky sign-ins is available in the **Azure AD Identity Protection** overview, which uses the following data:
-The entry point to this data is the top three applications in your organization. The data is contained within the last 30 days report in the **Overview** section under **Enterprise applications**.
+- Risky users
+- Risky user sign-ins
+- Risky service principals
+- Risky service principal sign-ins
-![Screenshot shows where you can select Overview.](./media/concept-sign-ins/10.png "Sign-in activity")
+ For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md).
-The app-usage graphs weekly aggregations of sign-ins for your top three applications in a given time period. The default for the time period is 30 days.
+![Screenshot of risky users in Identity Protection.](media/concept-sign-ins/id-protection-overview.png)
-![Screenshot shows the App usage for a one month period.](./media/concept-sign-ins/graph-chart.png "Sign-in activity")
+### Azure AD application and authentication sign-in activity
-If you want to, you can set the focus on a specific application.
+To view application-specific sign-in data, go to **Azure AD** and select **Usage & insights** from the Monitoring section. These reports provide a closer look at sign-ins for Azure AD application activity and AD FS application activity. For more information, see [Azure AD Usage & insights](concept-usage-insights-report.md).
-![Reporting](./media/concept-sign-ins/single-app-usage-graph.png "Reporting")
+![Screenshot of the Azure AD application activity report.](media/concept-sign-ins/azure-ad-app-activity.png)
-When you click on a day in the app usage graph, you get a detailed list of the sign-in activities.
+Azure AD Usage & insights also provides the **Authentication methods activity** report, which breaks down authentication by the method used. Use this report to see how many of your users are set up with MFA or passwordless authentication.
-The **Sign-ins** option gives you a complete overview of all sign-in events to your applications.
+![Screenshot of the Authentication methods report.](media/concept-sign-ins/azure-ad-authentication-methods.png)
-## Microsoft 365 activity logs
+### Microsoft 365 activity logs
-You can view Microsoft 365 activity logs from the [Microsoft 365 admin center](/office365/admin/admin-overview/about-the-admin-center). Consider the point that, Microsoft 365 activity and Azure AD activity logs share a significant number of the directory resources. Only the Microsoft 365 admin center provides a full view of the Microsoft 365 activity logs.
+You can view Microsoft 365 activity logs from the [Microsoft 365 admin center](/office365/admin/admin-overview/about-the-admin-center). Microsoft 365 activity and Azure AD activity logs share a significant number of directory resources. Only the Microsoft 365 admin center provides a full view of the Microsoft 365 activity logs.
-You can also access the Microsoft 365 activity logs programmatically by using the [Office 365 Management APIs](/office/office-365-management-api/office-365-management-apis-overview).
+You can access the Microsoft 365 activity logs programmatically by using the [Office 365 Management APIs](/office/office-365-management-api/office-365-management-apis-overview).
## Next steps
active-directory Concept Usage Insights Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
Title: Usage and insights report | Microsoft Docs description: Introduction to usage and insights report in the Azure Active Directory portal -+ - Previously updated : 08/26/2022- Last updated : 11/03/2022+
-# Usage and insights report in the Azure Active Directory portal
+# Usage and insights in Azure Active Directory
-With the usage and insights report, you can get an application-centric view of your sign-in data. You can find answers to the following questions:
+With the Azure Active Directory (Azure AD) **Usage and insights** reports, you can get an application-centric view of your sign-in data. Usage & insights also includes a report on authentication methods activity. You can find answers to the following questions:
* What are the top used applications in my organization? * What applications have the most failed sign-ins? * What are the top sign-in errors for each application?
-## Prerequisites
+This article provides an overview of three reports that look at sign-in data.
+
+## Access Usage & insights
-To access the data from the usage and insights report, you need:
+Accessing the data from Usage and insights requires:
* An Azure AD tenant * An Azure AD premium (P1/P2) license to view the sign-in data
-* A user in the global administrator, security administrator, security reader, or report reader roles. In addition, any user (non-admins) can access their own sign-ins.
+* A user in the Global Administrator, Security Administrator, Security Reader, or Report Reader roles.
+
+To access Usage & insights:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role.
+1. Go to **Azure Active Directory** > **Usage & insights**.
+
+The **Usage & insights** report is also available from the **Enterprise applications** area of Azure AD. All users can access their own sign-ins at the [My Sign-Ins portal](https://mysignins.microsoft.com/security-info).
+
+## View the Usage & insights reports
+
+There are currently three reports available in Azure AD Usage & insights. All three reports use sign-in data to provide helpful information on application usage and authentication methods.
+
+### Azure AD application activity (preview)
+
+The **Azure AD application activity (preview)** report shows the list of applications with one or more sign-in attempts. The report allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate.
+
+Select the **View sign in activity** link for an application to view more details. The sign-in graph per application counts interactive user sign-ins. The details of any sign-in failures appear below the table.
+
+![Screenshot shows Usage and insights for Application activity where you can select a range and view sign-in activity for different apps.](./media/concept-usage-insights-report/usage-insights-overview.png)
-## Access the usage and insights report
+Select a day in the application usage graph to see a detailed list of the sign-in activities for the application. This detailed list is actually the sign-in log with the filter set to the selected application and date.
-1. Navigate to the [Azure portal](https://portal.azure.com).
-2. Select the right directory, then select **Azure Active Directory** and choose **Enterprise applications**.
-3. From the **Activity** section, select **Usage & insights** to open the report.
+![Screenshot of the sign-in activity details for a selected application.](./media/concept-usage-insights-report/application-activity-sign-in-detail.png)
-![Screenshot shows Usage & insights selected from the Activity section.](./media/concept-usage-insights-report/main-menu.png)
-
+### AD FS application activity
-## Use the report
+The **AD FS application activity** report in Usage & insights lists all Active Directory Federation Services (AD FS) applications in your organization that have had an active user sign-in in the last 30 days. These applications haven't been migrated to Azure AD for authentication.
-The usage and insights report shows the list of applications with one or more sign-in attempts, and allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate. The sign-in graph per application only counts interactive user sign-ins.
+### Authentication methods activity
-Clicking **Load more** at the bottom of the list allows you to view additional applications on the page. You can select the date range to view all applications that have been used within the range.
+The **Authentication methods activity** report in Usage & insights displays visualizations of the different authentication methods used by your organization. The **Registration** tab displays statistics of users registered for each of your available authentication methods. Select the **Usage** tab at the top of the page to see actual usage for each authentication method.
-![Screenshot shows Usage & insights for Application activity where you can select a range and view sign-in activity for different apps.](./media/concept-usage-insights-report/usage-and-insights-report.png)
+You can also access several other reports and tools related to authentication.
-You can also set the focus on a specific application. Select **view sign-in activity** to see the sign-in activity over time for the application as well as the top errors.
+Are you planning on running a registration campaign to nudge users to sign up for MFA? Use the **Registration campaign** option from the side menu to set up a registration campaign. For more information, see [Nudge users to set up Microsoft Authenticator](../authentication/how-to-mfa-registration-campaign.md).
-When you select a day in the application usage graph, you get a detailed list of the sign-in activities for the application.
+Looking for the details of a user and their authentication methods? Look at the **User registration details** report from the side menu and search for a name or UPN. The default MFA method and other methods registered are displayed. You can also see if the user is capable of registering for one of the authentication methods.
+Looking for the status of an authentication registration or reset event of a user? Look at the **Registration and reset events** report from the side menu and then search for a name or UPN. You'll be able to see the method used to attempt to register or reset an authentication method.
## Next steps
-* [Sign-ins report](concept-sign-ins.md)
+- [Learn about the sign-ins report](concept-sign-ins.md)
+- [Learn about Azure AD authentication](../authentication/overview-authentication.md)
active-directory Overview Flagged Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md
Any user signing into Azure AD via web page can use flag sign-ins for review. Me
## Who can review flagged sign-ins?
-Reviewing flagged sign-in events requires permissions to read the Sign-in Report events in the Azure AD portal. For more information, see [who can access it?](concept-sign-ins.md#who-can-access-it)
+Reviewing flagged sign-in events requires permissions to read the Sign-in Report events in the Azure AD portal. For more information, see [who can access it?](concept-sign-ins.md#how-do-you-access-the-sign-in-logs)
To flag sign-in failures, you don't need extra permissions.
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
The SLA attainment is truncated at three places after the decimal. Numbers are n
| July | 99.999% | 99.999% | | August | 99.999% | 99.999% | | September | 99.999% | 99.998% |
-| October | 99.999% | |
+| October | 99.999% | 99.999% |
| November | 99.998% | | | December | 99.978% | |
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
# Network concepts for applications in Azure Kubernetes Service (AKS) In a container-based, microservices approach to application development, application components work together to process their tasks. Kubernetes provides various resources enabling this cooperation:
-* You can connect to and expose applications internally or externally.
-* You can build highly available applications by load balancing your applications.
-* For your more complex applications, you can configure ingress traffic for SSL/TLS termination or routing of multiple components.
+
+* You can connect to and expose applications internally or externally.
+* You can build highly available applications by load balancing your applications.
+* For your more complex applications, you can configure ingress traffic for SSL/TLS termination or routing of multiple components.
* For security reasons, you can restrict the flow of network traffic into or between pods and nodes. This article introduces the core concepts that provide networking to your applications in AKS:
This article introduces the core concepts that provide networking to your applic
To allow access to your applications or between application components, Kubernetes provides an abstraction layer to virtual networking. Kubernetes nodes connect to a virtual network, providing inbound and outbound connectivity for pods. The *kube-proxy* component runs on each node to provide these network features. In Kubernetes:
-* *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name.
-* You can distribute traffic using a *load balancer*.
-* More complex routing of application traffic can also be achieved with *Ingress Controllers*.
+
+* *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name.
+* You can distribute traffic using a *load balancer*.
+* More complex routing of application traffic can also be achieved with *Ingress Controllers*.
+* You can *control outbound (egress) traffic* for cluster nodes.
* Security and filtering of the network traffic for pods is possible with Kubernetes *network policies*. The Azure platform also simplifies virtual networking for AKS clusters. When you create a Kubernetes load balancer, you also create and configure the underlying Azure load balancer resource. As you open network ports to pods, the corresponding Azure network security group rules are configured. For HTTP application routing, Azure can also configure *external DNS* as new ingress routes are configured.
The LoadBalancer only works at layer 4. At layer 4, the Service is unaware of th
![Diagram showing Ingress traffic flow in an AKS cluster][aks-ingress] ### Create an ingress resource+ In AKS, you can create an Ingress resource using NGINX, a similar tool, or the AKS HTTP application routing feature. When you enable HTTP application routing for an AKS cluster, the Azure platform creates the Ingress controller and an *External-DNS* controller. As new Ingress resources are created in Kubernetes, the required DNS A records are created in a cluster-specific DNS zone. For more information, see [Deploy HTTP application routing][aks-http-routing].
Configure your ingress controller to preserve the client source IP on requests t
If you're using client source IP preservation on your ingress controller, you can't use TLS pass-through. Client source IP preservation and TLS pass-through can be used with other services, such as the *LoadBalancer* type.
+## Control outbound (egress) traffic
+
+AKS clusters are deployed on a virtual network and have outbound dependencies on services outside of that virtual network. These outbound dependencies are almost entirely defined with fully qualified domain names (FQDNs). By default, AKS clusters have unrestricted outbound (egress) internet access. This allows the nodes and services you run to access external resources as needed. If desired, you can restrict outbound traffic.
+
+For more information, see [Control egress traffic for cluster nodes in AKS][limit-egress].
+ ## Network security groups A network security group filters traffic for VMs like the AKS nodes. As you create Services, such as a LoadBalancer, the Azure platform automatically configures any necessary network security group rules.
-You don't need to manually configure network security group rules to filter traffic for pods in an AKS cluster. Simply define any required ports and forwarding as part of your Kubernetes Service manifests. Let the Azure platform create or update the appropriate rules.
+You don't need to manually configure network security group rules to filter traffic for pods in an AKS cluster. Simply define any required ports and forwarding as part of your Kubernetes Service manifests. Let the Azure platform create or update the appropriate rules.
You can also use network policies to automatically apply traffic filter rules to pods.
For more information on core Kubernetes and AKS concepts, see the following arti
[use-network-policies]: use-network-policies.md [operator-best-practices-network]: operator-best-practices-network.md [support-policies]: support-policies.md
+[limit-egress]: limit-egress-traffic.md
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
This article provides a reference for API Management access restriction policies
- [Restrict caller IPs](#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges. - [Set usage quota by subscription](#SetUsageQuota) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. - [Set usage quota by key](#SetUsageQuotaByKey) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.-- [Validate JWT](#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP header or a specified query parameter.
+- [Validate Azure Active Directory token](#ValidateAAD) - Enforces existence and validity of an Azure Active Directory JWT extracted from either a specified HTTP header, query parameter, or token value.
+- [Validate JWT](#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP header, query parameter, or token value.
- [Validate client certificate](#validate-client-certificate) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims. > [!TIP]
-> You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with AAD authentication by applying the `validate-jwt` policy on the API level or you can apply it on the API operation level and use `claims` for more granular control.
+> You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with AAD authentication by applying the `validate-azure-ad-token` policy on the API level or you can apply it on the API operation level and use `claims` for more granular control.
## <a name="CheckHTTPHeader"></a> Check HTTP header
If `identity-type=jwt` is configured, a JWT token is required to be validated. T
| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). | Yes | | | identity-type | Type of identity to be checked against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute. | No | managed | | identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID | No | |
-| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource is not found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500` | No | false |
+| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource isn't found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500` | No | false |
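+
+For illustration, here's a minimal sketch of the policy in use. The `provider-id` and `authorization-id` values are hypothetical placeholders for an authorization you've already configured, and the `set-header` policy that follows shows one way to consume the resulting [`Authorization` object](#authorization-object):
+
+```xml
+<!-- Acquire the authorization context using the API Management managed identity -->
+<get-authorization-context
+    provider-id="github-01"
+    authorization-id="auth-01"
+    context-variable-name="auth-context"
+    identity-type="managed"
+    ignore-error="false" />
+<!-- Attach the acquired access token to the backend call -->
+<set-header name="Authorization" exists-action="override">
+    <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+</set-header>
+```
+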
### Authorization object
In the following example, the per subscription rate limit is 20 calls per 90 sec
| Name | Description | Required | | - | -- | -- | | rate-limit | Root element. | Yes |
-| api | Add one or more of these elements to impose a call rate limit on APIs within the product. Product and API call rate limits are applied independently. API can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
-| operation | Add one or more of these elements to impose a call rate limit on operations within an API. Product, API, and operation call rate limits are applied independently. Operation can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
+| api | Add one or more of these elements to impose a call rate limit on APIs within the product. Product and API call rate limits are applied independently. API can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used, and `name` will be ignored. | No |
+| operation | Add one or more of these elements to impose a call rate limit on operations within an API. Product, API, and operation call rate limits are applied independently. Operation can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used, and `name` will be ignored. | No |
### Attributes
In the following example, the per subscription rate limit is 20 calls per 90 sec
| -- | -- | -- | - | | name | The name of the API for which to apply the rate limit. | Yes | N/A | | calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests shouldn't exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` | | retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
In the following example, the rate limit of 10 calls per 60 seconds is keyed by
| counter-key | The key to use for the rate limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A | | increment-condition | The boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A | | increment-count | The number by which the counter is increased per request. | No | 1 |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests shouldn't exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` | | retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
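
For illustration, here's a minimal sketch that keys the limit on the caller's subscription ID (the values shown are illustrative, not recommendations):

```xml
<rate-limit-by-key calls="10"
    renewal-period="60"
    counter-key="@(context.Subscription.Id)" />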
This policy can be used in the following policy [sections](./api-management-howt
> [!IMPORTANT] > This feature is unavailable in the **Consumption** tier of API Management.
-The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. Optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it is incremented only once per request. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
+The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. Optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it's incremented only once per request. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
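+
+As a sketch, the following form of the policy accrues quota per caller IP address and counts only `200 OK` responses toward the quota (the limits shown are illustrative):
+
+```xml
+<quota-by-key calls="10000" bandwidth="40000" renewal-period="3600"
+    increment-condition="@(context.Response.StatusCode == 200)"
+    counter-key="@(context.Request.IpAddress)" />
+```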
For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
This policy can be used in the following policy [sections](./api-management-howt
- **Policy sections:** inbound - **Policy scopes:** all scopes
+## <a name="ValidateAAD"></a> Validate Azure Active Directory token
+
+The `validate-azure-ad-token` policy enforces the existence and validity of a JSON web token (JWT) that was provided by the Azure Active Directory service. The JWT can be extracted from a specified HTTP header, query parameter, or value provided using a policy expression or context variable.
+
+### Policy statement
+
+```xml
+<validate-azure-ad-token
+ tenant-id="tenant ID or URL (for example, contoso.onmicrosoft.com) of the Azure Active Directory service"
+ header-name="name of HTTP header containing the token (alternatively, use query-parameter-name or token-value attribute to specify token)"
+ query-parameter-name="name of query parameter used to pass the token (alternatively, use header-name or token-value attribute to specify token)"
+ token-value="expression returning the token as a string (alternatively, use header-name or query-parameter-name attribute to specify token)"
+ failed-validation-httpcode="HTTP status code to return on failure"
+ failed-validation-error-message="error message to return on failure"
+ output-token-variable-name="name of a variable to receive a JWT object representing successfully validated token">
+ <client-application-ids>
+ <application-id>Client application ID from Azure Active Directory</application-id>
+ <!-- If there are multiple client application IDs, then add additional application-id elements -->
+ </client-application-ids>
+ <backend-application-ids>
+ <application-id>Backend application ID from Azure Active Directory</application-id>
+ <!-- If there are multiple backend application IDs, then add additional application-id elements -->
+ </backend-application-ids>
+ <audiences>
+ <audience>audience string</audience>
+ <!-- if there are multiple possible audiences, then add additional audience elements -->
+ </audiences>
+ <required-claims>
+ <claim name="name of the claim as it appears in the token" match="all|any" separator="separator character in a multi-valued claim">
+ <value>claim value as it is expected to appear in the token</value>
+ <!-- if there is more than one allowed value, then add additional value elements -->
+ </claim>
+ <!-- if there are multiple required claims, then add additional claim elements -->
+ </required-claims>
+</validate-azure-ad-token>
+```
+
+### Examples
+
+#### Simple token validation
+
+The following policy is the minimal form of the `validate-azure-ad-token` policy. It expects the JWT to be provided in the `Authorization` header using the `Bearer` scheme. In this example, the Azure AD tenant ID and client application ID are provided using named values.
+
+```xml
+<validate-azure-ad-token tenant-id="{{aad-tenant-id}}">
+ <client-application-ids>
+ <application-id>{{aad-client-application-id}}</application-id>
+ </client-application-ids>
+</validate-azure-ad-token>
+```
+
+#### Validate that audience and claim are correct
+
+The following policy checks that the audience is the hostname of the API Management instance and that the `ctry` claim is `US`. The hostname is provided using a policy expression, and the Azure AD tenant ID and client application ID are provided using named values. The decoded JWT is provided in the `jwt` variable after validation.
+
+For more details on optional claims, read [Provide optional claims to your app](/azure/active-directory/develop/active-directory-optional-claims).
+
+```xml
+<validate-azure-ad-token tenant-id="{{aad-tenant-id}}" output-token-variable-name="jwt">
+ <client-application-ids>
+ <application-id>{{aad-client-application-id}}</application-id>
+ </client-application-ids>
+ <audiences>
+ <audience>@(context.Request.OriginalUrl.Host)</audience>
+ </audiences>
+ <required-claims>
+ <claim name="ctry" match="any">
+ <value>US</value>
+ </claim>
+ </required-claims>
+</validate-azure-ad-token>
+```
+
+### Elements
+
+| Element | Description | Required |
+| - | -- | -- |
+| validate-azure-ad-token | Root element. | Yes |
+| audiences | Contains a list of acceptable audience claims that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. | No |
+| backend-application-ids | Contains a list of acceptable backend application IDs. This is only required in advanced cases for the configuration of options and can generally be removed. | No |
+| client-application-ids | Contains a list of acceptable client application IDs. If multiple application-id elements are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one application-id must be specified. | Yes |
+| required-claims | Contains a list of claims expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. | No |
+
+### Attributes
+
+| Name | Description | Required | Default |
+| - | | -- | |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
+| failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. | No | 401 |
+| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
+| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
+| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
+| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+
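+As a sketch of the `token-value` attribute, the following policy reads the JWT from a hypothetical `X-Access-Token` request header instead of the default `Authorization` header. As in the earlier examples, named values supply the tenant and client application IDs:
+
+```xml
+<validate-azure-ad-token
+    tenant-id="{{aad-tenant-id}}"
+    token-value='@(context.Request.Headers.GetValueOrDefault("X-Access-Token", ""))'>
+    <client-application-ids>
+        <application-id>{{aad-client-application-id}}</application-id>
+    </client-application-ids>
+</validate-azure-ad-token>
+```
+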
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** inbound
+- **Policy scopes:** all scopes
+
+### Limitations
+
+This policy can only be used with an Azure Active Directory tenant in the public Azure cloud. It doesn't support tenants configured in regional clouds or Azure clouds with restricted access.
+ ## <a name="ValidateJWT"></a> Validate JWT The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value.
The `validate-jwt` policy enforces existence and validity of a JSON web token (J
#### Azure Active Directory token validation
+> [!NOTE]
+> Use the [`validate-azure-ad-token`](#ValidateAAD) policy to validate tokens against Azure Active Directory.
+ ```xml <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid."> <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
This example shows how to use the [Validate JWT](api-management-access-restricti
| decryption-keys | A list of Base64-encoded keys used to decrypt the tokens. If multiple security keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds. Key elements have an optional `id` attribute used to match against `kid` claim.<br/><br/>Alternatively supply a decryption key using:<br/><br/> - `certificate-id` in format `<key certificate-id="mycertificate" />` to specify the identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management | No | | issuers | A list of acceptable principals that issued the token. If multiple issuer values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. | No | | openid-config | Add one or more of these elements to specify a compliant OpenID configuration endpoint from which signing keys and issuer can be obtained.<br/><br/>Configuration including the JSON Web Key Set (JWKS) is pulled from the endpoint every 1 hour and cached. If the token being validated references a validation key (using `kid` claim) that is missing in cached configuration, or if retrieval fails, API Management pulls from the endpoint at most once per 5 min. These intervals are subject to change without notice. | No |
-| required-claims | Contains a list of claims expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all` every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any` at least one claim must be present in the token for validation to succeed. | No |
+| required-claims | Contains a list of claims expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. | No |
### Attributes | Name | Description | Required | Default | | - | | -- | | | clock-skew | Timespan. Use to specify maximum expected time difference between the system clocks of the token issuer and the API Management instance. | No | 0 seconds |
-| failed-validation-error-message | Error message to return in the HTTP response body if the JWT does not pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
| failed-validation-httpcode | HTTP Status code to return if the JWT doesn't pass validation. | No | 401 | | header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A | | query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
This example shows how to use the [Validate JWT](api-management-access-restricti
| id | The `id` attribute on the `key` element allows you to specify the string that will be matched against `kid` claim in the token (if present) to find out the appropriate key to use for signature validation. | No | N/A | | match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all | | require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. | No | true |
-| require-scheme | The name of the token scheme, e.g. "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A |
+| require-scheme | The name of the token scheme, for example, "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A |
| require-signed-tokens | Boolean. Specifies whether a token is required to be signed. | No | true |
-| separator | String. Specifies a separator (e.g. ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
+| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
| url | Open ID configuration endpoint URL from where OpenID configuration metadata can be obtained. The response should be according to specs as defined at URL: `https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. <br/><br/>For Azure Active Directory use the OpenID Connect [metadata endpoint](../active-directory/develop/v2-protocols-oidc.md#find-your-apps-openid-configuration-document-uri) configured in your app registration such as:<br/>- (v2) `https://login.microsoftonline.com/{tenant-name}/v2.0/.well-known/openid-configuration`<br/> - (v2 multitenant) ` https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration`<br/>- (v1) `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` <br/><br/> substituting your directory tenant name or ID, for example `contoso.onmicrosoft.com`, for `{tenant-name}`. | Yes | N/A | | output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
The following example validates a client certificate to match the policy's defau
| Name | Description | Required | Default | | - | --| -- | -- | | validate-revocation | Boolean. Specifies whether certificate is validated against online revocation list. | no | True |
-| validate-trust | Boolean. Specifies if validation should fail in case chain cannot be successfully built up to trusted CA. | no | True |
+| validate-trust | Boolean. Specifies if validation should fail in case the chain can't be successfully built up to a trusted CA. | no | True |
| validate-not-before | Boolean. Validates value against current time. | no | True | | validate-not-after | Boolean. Validates value against current time. | no | True| | ignore-error | Boolean. Specifies if policy should proceed to the next handler or jump to on-error upon failed validation. | no | False |
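
As a minimal sketch using the attributes described above, the following policy states the default validation checks explicitly, jumps to on-error when validation fails, and additionally requires the certificate subject to match a hypothetical value (`CN=contoso-client`):

```xml
<validate-client-certificate
    validate-revocation="true"
    validate-trust="true"
    validate-not-before="true"
    validate-not-after="true"
    ignore-error="false">
    <identities>
        <identity subject="CN=contoso-client" />
    </identities>
</validate-client-certificate>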
api-management Api Management Howto Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-notifications.md
To modify email settings:
* **Administrator email** - the email address to receive all system notifications and other configured notifications * **Organization name** - the name of your organization for use in the developer portal and notifications * **Originating email address** - The value of the `From` header for notifications from the API Management instance. API Management sends notifications on behalf of this originating address.-
- :::image type="content" source="media/api-management-howto-configure-notifications/configure-email-settings.png" alt-text="Screenshot of API Management email settings in the portal":::
+ > [!NOTE]
+ > When you change the Originating email address, some recipients may not receive the auto-generated emails from API Management or emails may get sent to the Junk/Spam folder. This happens because the email no longer passes SPF Authentication after you change the Originating email address domain. To ensure successful SPF Authentication and delivery of email, create the following TXT record in the DNS database of the domain specified in the email address. For instance, if the email address is `noreply@contoso.com`, you will need to contact the administrator of contoso.com to add the following TXT record: **"v=spf1 include:spf.protection.outlook.com include:_spf-ssg-a.microsoft.com -all"**
+
+ :::image type="content" source="media/api-management-howto-configure-notifications/configure-email-settings.png" alt-text="Screenshot of API Management email settings in the portal":::
1. Select **Save**. ## Next steps
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
- [Restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges. - [Set usage quota by subscription](api-management-access-restriction-policies.md#SetUsageQuota) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. - [Set usage quota by key](api-management-access-restriction-policies.md#SetUsageQuotaByKey) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.-- [Validate JWT](api-management-access-restriction-policies.md#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter.
+- [Validate Azure Active Directory Token](api-management-access-restriction-policies.md#ValidateAAD) - Enforces existence and validity of an Azure Active Directory JWT extracted from either a specified HTTP Header, query parameter, or token value.
+- [Validate JWT](api-management-access-restriction-policies.md#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header, query parameter, or token value.
- [Validate client certificate](api-management-access-restriction-policies.md#validate-client-certificate) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims. ## Advanced policies
api-management Api Management Transformation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-transformation-policies.md
This article provides a reference for API Management policies used to transform
consider-accept-header="true | false" parse-date="true | false" namespace-separator="separator character"
+ namespace-prefix="namespace prefix"
attribute-block-name="name" /> ```
Consider the following policy:
</inbound> <outbound> <base />
- <json-to-xml apply="always" consider-accept-header="false" parse-date="false" namespace-separator=":" attribute-block-name="#attrs" />
+ <json-to-xml apply="always" consider-accept-header="false" parse-date="false" namespace-separator=":" namespace-prefix="xmlns" attribute-block-name="#attrs" />
</outbound> </policies> ```
The XML response to the client will be:
|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - true - apply conversion if XML is requested in request Accept header.<br />- false -always apply conversion.|No|true| |parse-date|When set to `false` date values are simply copied during transformation|No|true| |namespace-separator|The character to use as a namespace separator|No|Underscore|
+|namespace-prefix|The string that identifies a property as a namespace attribute, usually "xmlns". Properties with names beginning with the specified prefix will be added to the current element as namespace declarations.|No|N/A|
|attribute-block-name|When set, properties inside the named object will be added to the element as attributes|No|Not set| ### Usage
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
When an API Management service instance is hosted in a VNet, the ports in the fo
### [stv2](#tab/stv2)
+>[!IMPORTANT]
+> When using `stv2`, you must assign a Network Security Group to your VNet for the Azure Load Balancer to work. Learn more in the [Azure Load Balancer documentation](/security/benchmark/azure/baselines/azure-load-balancer-security-baseline#network-security-group-support).
+ | Source / Destination Port(s) | Direction | Transport protocol | Service tags <br> Source / Destination | Purpose | VNet type | ||--|--||-|-| | * / [80], 443 | Inbound | TCP | Internet / VirtualNetwork | **Client communication to API Management** | External only |
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
The following features are supported for Linux containers:
- Azure CLI, Azure PowerShell, and Azure SDK support is in preview. - Mapping `/` or `/home` to custom-mounted storage is not supported. - Don't map the custom storage mount to `/tmp` or its subdirectories as this may cause timeout during app startup.
+- Azure Storage is not supported with [Docker Compose Scenarios](configure-custom-container.md?pivots=container-linux#docker-compose-options).
- Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation. - Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts. - Only Azure Files [SMB](../storage/files/files-smb-protocol.md) are supported. Azure Files [NFS](../storage/files/files-nfs-protocol.md) is not currently supported for Linux App Services.
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
The following lists show supported and unsupported Docker Compose configuration
- ports - restart - services-- volumes
+- volumes ([mapping to Azure Storage is unsupported](configure-connect-to-azure-storage.md?tabs=portal&pivots=container-linux#limitations))
#### Unsupported options
app-service Overview Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md
After you choose to investigate the issue further by clicking on a topic, you ca
## Resiliency Score
-If you don't know what's wrong with your app or don't know where to start troubleshooting your issues, the Get Resiliency Score report is a good place to start. Once a Troubleshooting category has been selected the Get Resilience Score report link is available and clicking it produces a PDF document with actionable insights.
+To review tailored best practice recommendations, check out the Resiliency Score report, which is available as a downloadable PDF. To get it, select the "Get Resilience Score report" button on the command bar of any of the Troubleshooting categories.
![App Service Diagnose and solve problems Resiliency Score report, with a gauge indicating App's resilience score and what App Developer can do to improve resilience of the App.](./media/app-service-diagnostics/app-service-diagnostics-resiliency-report-1.png)
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md
description: This article is an overview of mutual authentication on Application
Previously updated : 03/30/2021 Last updated : 11/03/2022
To configure mutual authentication, a trusted client CA certificate is required
For example, if your client certificate contains a root CA certificate, multiple intermediate CA certificates, and a leaf certificate, make sure that the root CA certificate and all the intermediate CA certificates are uploaded onto Application Gateway in one file. For more information on how to extract a trusted client CA certificate, see [how to extract trusted client CA certificates](./mutual-authentication-certificate-management.md).
-If you're uploading a certificate chain with root CA and intermediate CA certificates, the certificate chain must be uploaded as a PEM or CER file to the gateway.
+If you're uploading a certificate chain with root CA and intermediate CA certificates, the certificate chain must be uploaded as a PEM or CER file to the gateway.
+
+> [!IMPORTANT]
+> Make sure you upload the entire trusted client CA certificate chain to the Application Gateway when using mutual authentication.
+
+Each SSL profile can support up to five trusted client CA certificate chains.
> [!NOTE] > Mutual authentication is only available on Standard_v2 and WAF_v2 SKUs. ### Certificates supported for mutual authentication
-Application Gateway supports the following types of certificates:
--- CA (Certificate Authority) certificate: A CA certificate is a digital certificate issued by a certificate authority (CA).-- Self-signed CA certificates: Client browsers do not trust these certificates and will warn the user that the virtual service's certificate is not part of a trust chain. Self-signed CA certificates are good for testing or in environments where administrators control the clients and can safely bypass the browser's security alerts.
+Application Gateway supports certificates issued from both public and privately established certificate authorities.
-> [!IMPORTANT]
-> Production workloads should never use self-signed CA certificates.
+- CA certificates issued from well-known certificate authorities: Intermediate and root certificates are commonly found in trusted certificate stores and enable trusted connections with little to no additional configuration on the device.
+- CA certificates issued from organization-established certificate authorities: These certificates are typically issued privately by your organization and aren't trusted by other entities. Intermediate and root certificates must be imported into trusted certificate stores for clients to establish chain trust.
-For more information on how to set up mutual authentication, see [configure mutual authentication with Application Gateway](./mutual-authentication-portal.md).
-
-> [!IMPORTANT]
-> Make sure you upload the entire trusted client CA certificate chain to the Application Gateway when using mutual authentication.
-
-Each SSL profile can support up to five trusted client CA certificate chains.
+> [!NOTE]
+> When issuing client certificates from well-established certificate authorities, consider working with the certificate authority to see if an intermediate certificate can be issued for your organization to prevent inadvertent cross-organizational client certificate authentication.
## Additional client authentication validation ### Verify client certificate DN
-You have the option to verify the client certificate's immediate issuer and only allow the Application Gateway to trust that issuer. This options is off by default but you can enable this through Portal, PowerShell, or Azure CLI.
+You have the option to verify the client certificate's immediate issuer and only allow the Application Gateway to trust that issuer. This option is off by default but you can enable this through Portal, PowerShell, or Azure CLI.
If you choose to enable the Application Gateway to verify the client certificate's immediate issuer, here's how to determine what client certificate issuer DN will be extracted from the certificates uploaded. * **Scenario 1:** Certificate chain includes: root certificate - intermediate certificate - leaf certificate
applied-ai-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
Previously updated : 07/06/2021 Last updated : 11/07/2022 zone_pivot_groups: programming-languages-metrics-monitor
applied-ai-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/quickstarts/web-portal.md
Previously updated : 09/30/2020 Last updated : 11/07/2022
# Quickstart: Monitor your first metric by using the web portal
-When you provision an instance of Azure Metrics Advisor, you can use the APIs and web-based workspace to work with the service. The web-based workspace can be used as a straightforward way to quickly get started with the service. It also provides a visual way to configure settings, customize your model, and perform root cause analysis.
+When you provision an instance of Azure Metrics Advisor, you can use the APIs and web-based workspace to interact with the service. The web-based workspace can be used as a straightforward way to quickly get started with the service. It also provides a visual way to configure settings, customize your model, and perform root cause analysis.
## Prerequisites
When detection is applied, select one of the metrics listed in the data feed to
After tuning the detection configuration, you should find that detected anomalies reflect actual anomalies in your data. Metrics Advisor performs analysis on multidimensional metrics to locate the root cause to a specific dimension. The service also performs cross-metrics analysis by using the metrics graph feature.
-To view the diagnostic insights, select the red dots on time series visualizations. These red dots represent detected anomalies. A window will appear with a link to the incident analysis page.
+To view diagnostic insights, select the red dots on time series visualizations. These red dots represent detected anomalies. A window will appear with a link to the incident analysis page.
:::image type="content" source="../media/incident-link.png" alt-text="Screenshot that shows an incident link." lightbox="../media/incident-link.png":::
attestation Claim Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-sets.md
The following claims are additionally supported by the SevSnpVm attestation type
- **x-ms-sevsnpvm-authorkeydigest**: SHA384 hash of the author signing key - **x-ms-sevsnpvm-bootloader-svn** :AMD boot loader security version number (SVN)-- **x-ms-sevsnpvm-familyId**: HCL family identification string
+- **x-ms-sevsnpvm-familyId**: Host Compatibility Layer (HCL) family identification string
- **x-ms-sevsnpvm-guestsvn**: HCL security version number (SVN) - **x-ms-sevsnpvm-hostdata**: Arbitrary data defined by the host at VM launch time - **x-ms-sevsnpvm-idkeydigest**: SHA384 hash of the identification signing key
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
Title: 'Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster' description: This tutorial demonstrates applying configurations on an Azure Arc-enabled Kubernetes cluster. For a conceptual take on this process, see the Configurations and GitOps - Azure Arc-enabled Kubernetes article. -- Last updated 05/24/2022
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md
Title: "Create a C# function from the command line - Azure Functions" description: "Learn how to create a C# function from the command line, then publish the local project to serverless hosting in Azure Functions." Previously updated : 09/14/2021 Last updated : 11/08/2022 ms.devlang: csharp
This article supports creating both types of compiled C# functions:
[!INCLUDE [functions-dotnet-execution-model](../../includes/functions-dotnet-execution-model.md)]
-This article creates an HTTP triggered function that runs on .NET 6.0. There is also a [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
+This article creates an HTTP triggered function that runs on .NET, either in-process or in an isolated worker process, using .NET 6 as the example. There's also a [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Completing this quickstart incurs a small cost of a few USD cents or less in you
Before you begin, you must have the following:
-+ [.NET 6.0 SDK](https://dotnet.microsoft.com/download)
++ [.NET 6.0 SDK](https://dotnet.microsoft.com/download).
+ [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x.
You also need an Azure account with an active subscription. [Create an account f
### Prerequisite check
-Verify your prerequisites, which depend on whether you are using Azure CLI or Azure PowerShell for creating Azure resources:
+Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources:
# [Azure CLI](#tab/azure-cli)
In Azure Functions, a function project is a container for one or more individual
func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous" ```
- `func new` creates a HttpExample.cs code file.
+ `func new` creates an HttpExample.cs code file.
### (Optional) Examine the file contents
The return object is an [ActionResult](/dotnet/api/microsoft.aspnetcore.mvc.acti
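For reference, the generated *HttpExample.cs* in the in-process model follows this general shape (a trimmed sketch of the standard template, not the verbatim file):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HttpExample
{
    [FunctionName("HttpExample")]
    public static Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        // Read an optional "name" query parameter and echo it back.
        string name = req.Query["name"];
        return Task.FromResult<IActionResult>(
            new OkObjectResult($"Hello, {name ?? "world"}. This HTTP triggered function executed successfully."));
    }
}
```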
# [Isolated process](#tab/isolated-process)
-*HttpExample.cs* contains a `Run` method that receives request data in the `req` variable is an [HttpRequestData](/dotnet/api/microsoft.azure.functions.worker.http.httprequestdata) object that's decorated with the **HttpTriggerAttribute**, which defines the trigger behavior. Because of the isolated process model, `HttpRequestData` is a representation of the actual `HttpRequest`, and not the request object itself.
+*HttpExample.cs* contains a `Run` method that receives request data in the `req` parameter, an [HttpRequestData](/dotnet/api/microsoft.azure.functions.worker.http.httprequestdata) object. The parameter is decorated with the **HttpTriggerAttribute**, which defines the trigger behavior. Because of the isolated worker process model, `HttpRequestData` is a representation of the actual `HttpRequest`, and not the request object itself.
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-isolated/HttpExample.cs":::
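If you don't have the referenced sample handy, the isolated worker process version of *HttpExample.cs* follows this general shape (a sketch of the standard template):

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

public static class HttpExample
{
    [Function("HttpExample")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
        FunctionContext executionContext)
    {
        var logger = executionContext.GetLogger("HttpExample");
        logger.LogInformation("C# HTTP trigger function processed a request.");

        // Build the response on the worker side; HttpResponseData is sent back to the host.
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
        response.WriteString("Welcome to Azure Functions!");

        return response;
    }
}
```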
To learn more, see [Azure Functions HTTP triggers and bindings](./functions-bind
In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
[!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
Title: "Create a C# function using Visual Studio Code - Azure Functions" description: "Learn how to create a C# function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. " Previously updated : 10/11/2022 Last updated : 11/08/2022 ms.devlang: csharp adobe-target: true
adobe-target-content: ./create-first-function-vs-code-csharp-ieux
# Quickstart: Create a C# function in Azure using Visual Studio Code
-In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. This article creates an HTTP triggered function that runs on .NET 6.0. There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
+This article creates an HTTP triggered function that runs on .NET 6, either in-process or in an isolated worker process. The .NET isolated worker process model also lets you run on .NET 7 (in preview). For information about all .NET versions supported by the isolated worker process, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions).
-By default, this article shows you how to create C# functions that run [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on [other supported versions](functions-versions.md) for Azure functions [in an isolated process](dotnet-isolated-process-guide.md).
+There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
+
+By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) versions of .NET, such as .NET 6. When creating your project, you can instead choose to create a function that runs on .NET 6 in an [isolated worker process](dotnet-isolated-process-guide.md), which supports both LTS and Standard Term Support (STS) versions of .NET. For more information, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions) in the .NET Functions isolated worker process guide.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
In this section, you use Visual Studio Code to create a local Azure Functions pr
1. Provide the following information at the prompts:
- # [.NET 6](#tab/in-process)
+ # [In-process](#tab/in-process)
|Prompt|Selection| |--|--|
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).| |**Select how you would like to open your project**|Select `Add to workspace`.|
- # [.NET 6 Isolated](#tab/isolated-process)
+ # [Isolated process](#tab/isolated-process)
|Prompt|Selection| |--|--|
In this section, you use Visual Studio Code to create a local Azure Functions pr
> [!NOTE] > If you don't see .NET 6 as a runtime option, check the following: >
- > + Make sure you have installed the .NET 6.0 SDK.
+ > + Make sure you've installed the .NET 6.0 SDK, or another available .NET SDK version, from the [.NET downloads page](https://dotnet.microsoft.com/download).
> + Press F1 and type `Preferences: Open user settings`, then search for `Azure Functions: Project Runtime` and change the default runtime version to `~4`. 1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=csharp#generated-project-files).
After checking that the function runs correctly on your local computer, it's tim
You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Queue Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
-# [.NET 6](#tab/in-process)
+# [In-process](#tab/in-process)
> [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=in-process) > [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp&tabs=in-process)
-# [.NET 6 Isolated](#tab/isolated-process)
+# [Isolated process](#tab/isolated-process)
> [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process)
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
+
+ Title: Differences between in-process and isolated worker process .NET Azure Functions
+description: Compares features and functionality differences between running .NET Functions in-process or as an isolated worker process.
++ Last updated : 11/07/2022
+recommendations: false
+#Customer intent: As a developer, I need to understand the differences between running in-process and running in an isolated worker process so that I can choose the best process model for my functions.
++
+# Differences between in-process and isolated worker process .NET Azure Functions
+
+Functions supports two process models for .NET class library functions:
++
+This article describes the current state of the functional and behavioral differences between the two models.
+
+## Execution mode comparison table
+
+Use the following table to compare feature and functional differences between the two models:
+
+| Feature/behavior | In-process<sup>3</sup> | Isolated worker process |
+| - | - | - |
+| [Supported .NET versions](./dotnet-isolated-process-guide.md#supported-versions) | Long Term Support (LTS) versions | All supported versions + .NET Framework |
+| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
+| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
+| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported (public preview)](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions) |
+| Model types exposed by bindings | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient]<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations |
+| HTTP trigger model types| [HttpRequest]/[ObjectResult] | [HttpRequestData]/[HttpResponseData] |
+| Output binding interaction | Return values (single output only)<br/>`out` parameters<br/>`IAsyncCollector` | Return values (expanded model with single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)) |
+| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported |
+| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](dotnet-isolated-process-guide.md#dependency-injection) |
+| Middleware | Not supported | [Supported](dotnet-isolated-process-guide.md#middleware) |
+| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via dependency injection | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext] or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)|
+| Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported (public preview)](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights) |
+| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) |
+| Cold start times<sup>2</sup> | (Baseline) | Additionally includes process launch |
+| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | [Supported](dotnet-isolated-process-guide.md#readytorun) |
+
+<sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models.
+
+<sup>2</sup> Cold start times may be additionally impacted on Windows when using some preview versions of .NET due to just-in-time loading of preview frameworks. This applies to both the in-process and out-of-process models but may be noticeable when comparing across different versions. This delay for preview versions isn't present on Linux plans.
+
+<sup>3</sup> C# Script functions also run in-process and use the same libraries as in-process class library functions. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md).
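To illustrate the expanded output model called out in the table above, an isolated worker process function can return a custom type whose properties bind to multiple outputs. This is a minimal sketch assuming the Storage Queues worker extension package; the queue and type names are illustrative:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class MultiOutput
{
    // Bound to a queue output; "out-queue" is an illustrative queue name.
    [QueueOutput("out-queue")]
    public string? QueueMessage { get; set; }

    // Returned as the HTTP response to the caller.
    public HttpResponseData? HttpResponse { get; set; }
}

public static class MultiOutputExample
{
    [Function("MultiOutputExample")]
    public static MultiOutput Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Queued a message.");

        return new MultiOutput
        {
            QueueMessage = "message for the queue",
            HttpResponse = response
        };
    }
}
```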
+
+## Next steps
+
+To learn more, see:
+
++ [Develop .NET class library functions](functions-dotnet-class-library.md)
++ [Develop .NET isolated worker process functions](dotnet-isolated-process-guide.md)
+
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
Title: Guide for running C# Azure Functions in an isolated process
-description: Learn how to use a .NET isolated process to run your C# functions in Azure, which supports .NET 5.0 and later versions.
-
+ Title: Guide for running C# Azure Functions in an isolated worker process
+description: Learn how to use a .NET isolated worker process to run your C# functions in Azure, which supports non-LTS versions of .NET and .NET Framework apps.
Previously updated : 09/29/2022 Last updated : 11/01/2022 recommendations: false
-#Customer intent: As a developer, I need to know how to create functions that run in an isolated process so that I can run my function code on current (not LTS) releases of .NET.
+#Customer intent: As a developer, I need to know how to create functions that run in an isolated worker process so that I can run my function code on current (not LTS) releases of .NET.
-# Guide for running C# Azure Functions in an isolated process
+# Guide for running C# Azure Functions in an isolated worker process
+
+This article is an introduction to working with .NET Functions in an isolated worker process, which runs your functions in a worker process separate from the Functions host in Azure. This separation lets you run your .NET class library functions on a version of .NET that's different from the version used by the Functions host process. For information about the specific .NET versions supported, see [Supported versions](#supported-versions).
-This article is an introduction to using C# to develop .NET isolated process functions, which runs Azure Functions in an isolated process. This allows you to decouple your function code from the Azure Functions runtime, check out [supported version](#supported-versions) for Azure functions in an isolated process. [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 7.0.
+Use the following links to get started right away building .NET isolated worker process functions.
| Getting started | Concepts| Samples | |--|--|--| | <ul><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md?tabs=isolated-process)</li><li>[Using command line tools](create-first-function-cli-csharp.md?tabs=isolated-process)</li><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md?tabs=isolated-process)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Monitoring](functions-monitoring.md)</li> | <ul><li>[Reference samples](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples)</li></ul> |
-## Why .NET isolated process?
+If you still need to run your functions in the same process as the host, see [In-process C# class library functions](functions-dotnet-class-library.md).
+
+For a comprehensive comparison between isolated worker process and in-process .NET Functions, see [Differences between in-process and isolated worker process .NET Azure Functions](dotnet-isolated-in-process-differences.md).
-Previously Azure Functions has only supported a tightly integrated mode for .NET functions, which run [as a class library](functions-dotnet-class-library.md) in the same process as the host. This mode provides deep integration between the host process and the functions. For example, .NET class library functions can share binding APIs and types. However, this integration also requires a tighter coupling between the host process and the .NET function. For example, .NET functions running in-process are required to run on the same version of .NET as the Functions runtime. To enable you to run outside these constraints, you can now choose to run in an isolated process. This process isolation also lets you develop functions that use current .NET releases (such as .NET 7.0), not natively supported by the Functions runtime. Both isolated process and in-process C# class library functions run on .NET 6.0. To learn more, see [Supported versions](#supported-versions).
+## Why .NET Functions isolated worker process?
-Because these functions run in a separate process, there are some [feature and functionality differences](#differences-with-net-class-library-functions) between .NET isolated function apps and .NET class library function apps.
+When it was introduced, Azure Functions only supported a tightly integrated mode for .NET functions. In this _in-process_ mode, your [.NET class library functions](functions-dotnet-class-library.md) run in the same process as the host. This mode provides deep integration between the host process and the functions. For example, when running in the same process, .NET class library functions can share binding APIs and types. However, this integration also requires a tight coupling between the host process and the .NET function. For example, .NET functions running in-process are required to run on the same version of .NET as the Functions runtime, which means your in-process functions can only run on versions of .NET with Long Term Support (LTS). To run on a non-LTS version of .NET, you can instead choose to run in an isolated worker process. This process isolation lets you develop functions that use current .NET releases not natively supported by the Functions runtime, including .NET Framework. Both isolated worker process and in-process C# class library functions run on LTS versions. To learn more, see [Supported versions](#supported-versions).
-### Benefits of running out-of-process
+Because these functions run in a separate process, there are some [feature and functionality differences](./dotnet-isolated-in-process-differences.md) between .NET isolated function apps and .NET class library function apps.
-When your .NET functions run out-of-process, you can take advantage of the following benefits:
+### Benefits of isolated worker process
+
+When your .NET functions run in an isolated worker process, you can take advantage of the following benefits:
+ Fewer conflicts: because the functions run in a separate process, assemblies used in your app won't conflict with different versions of the same assemblies used by the host process.
+ Full control of the process: you control the start-up of the app and can control the configurations used and the middleware started.
When your .NET functions run out-of-process, you can take advantage of the follo
[!INCLUDE [functions-dotnet-supported-versions](../../includes/functions-dotnet-supported-versions.md)]
-## .NET isolated project
+## .NET isolated worker process project
A .NET isolated function project is basically a .NET console app project that targets a supported .NET runtime. The following are the basic files required in any .NET isolated project:
For complete examples, see the [.NET 6 isolated sample project](https://github.c
## Package references
-When your functions run out-of-process, your .NET project uses a unique set of packages, which implement both core functionality and binding extensions.
+A .NET Functions isolated worker process project uses a unique set of packages for both core functionality and binding extensions.
### Core packages
-The following packages are required to run your .NET functions in an isolated process:
+The following packages are required to run your .NET functions in an isolated worker process:
+ [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/) + [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/) ### Extension packages
-Because functions that run in a .NET isolated process use different binding types, they require a unique set of binding extension packages.
+Because .NET isolated worker process functions use different binding types, they require a unique set of binding extension packages.
You'll find these extension packages under [Microsoft.Azure.Functions.Worker.Extensions](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions). ## Start-up and configuration
-When using .NET isolated functions, you have access to the start-up of your function app, which is usually in Program.cs. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. When you run your functions out-of-process, you can much more easily add configurations, inject dependencies, and run your own middleware.
+When using .NET isolated functions, you have access to the start-up of your function app, which is usually in Program.cs. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. With .NET Functions isolated worker process, you can much more easily add configurations, inject dependencies, and run your own middleware.
The following code shows an example of a [HostBuilder] pipeline:
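Where the referenced snippet isn't shown inline, a minimal pipeline has roughly this shape (a sketch assuming only the core worker packages):

```csharp
using Microsoft.Extensions.Hosting;

// Build and run the worker host with the Functions worker defaults.
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .Build();

host.Run();
```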
A [HostBuilder] is used to build and return a fully initialized [IHost] instance
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/FunctionApp/Program.cs" id="docsnippet_host_run"::: > [!IMPORTANT]
-> If your project targets .NET Framework 4.8, you also need to add `FunctionsDebugger.Enable();` before creating the HostBuilder. It should be the first line of your `Main()` method. See [Debugging when targeting .NET Framework](#debugging-when-targeting-net-framework) for more information.
+> If your project targets .NET Framework 4.8, you also need to add `FunctionsDebugger.Enable();` before creating the HostBuilder. It should be the first line of your `Main()` method. For more information, see [Debugging when targeting .NET Framework](#debugging-when-targeting-net-framework).
### Configuration
-The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run in an isolated process, which includes the following functionality:
+The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run in an isolated worker process, which includes the following functionality:
+ Default set of converters.
+ Set the default [JsonSerializerOptions] to ignore casing on property names.
The [ConfigureFunctionsWorkerDefaults] extension method has an overload that let
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/CustomMiddleware/Program.cs" id="docsnippet_middleware_register" :::
- The `UseWhen` extension method can be used to register a middleware which gets executed conditionally. A predicate which returns a boolean value needs to be passed to this method and the middleware will be participating in the invocation processing pipeline if the return value of the predicate is true.
+ The `UseWhen` extension method can be used to register a middleware that gets executed conditionally. You must pass to this method a predicate that returns a boolean value, and the middleware participates in the invocation processing pipeline when the return value of the predicate is `true`.
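For illustration, the following sketch registers a middleware conditionally in Program.cs. `MyHttpOnlyMiddleware` is a hypothetical `IFunctionsWorkerMiddleware` implementation; the trigger-type predicate mirrors the pattern used in the worker samples:

```csharp
using System.Linq;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults(workerApplication =>
    {
        // Only run MyHttpOnlyMiddleware for HTTP-triggered invocations.
        workerApplication.UseWhen<MyHttpOnlyMiddleware>(context =>
            context.FunctionDefinition.InputBindings.Values
                .First(binding => binding.Type.EndsWith("Trigger")).Type == "httpTrigger");
    })
    .Build();

host.Run();
```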
The following extension methods on [FunctionContext] make it easier to work with middleware in the isolated model.
The following extension methods on [FunctionContext] make it easier to work with
| **` GetOutputBindings`** | Gets the output binding entries for the current function execution. Each entry in the result of this method is of type `OutputBindingData`. You can use the `Value` property to get or set the value as needed. | | **` BindInputAsync`** | Binds an input binding item for the requested `BindingMetadata` instance. For example, you can use this method when you have a function with a `BlobInput` input binding that needs to be accessed or updated by your middleware. |
-The following is an example of a middleware implementation which reads the `HttpRequestData` instance and updates the `HttpResponseData` instance during function execution. This middleware checks for the presence of a specific request header(x-correlationId), and when present uses the header value to stamp a response header. Otherwise, it generates a new GUID value and uses that for stamping the response header.
+The following is an example of a middleware implementation that reads the `HttpRequestData` instance and updates the `HttpResponseData` instance during function execution. This middleware checks for the presence of a specific request header (`x-correlationId`), and when present uses the header value to stamp a response header. Otherwise, it generates a new GUID value and uses that for stamping the response header.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/CustomMiddleware/StampHttpHeaderMiddleware.cs" id="docsnippet_middleware_example_stampheader" :::
For a more complete example of using custom middleware in your function app, see
A function can accept a [CancellationToken](/dotnet/api/system.threading.cancellationtoken) parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
-Cancellation tokens are supported in .NET functions when running in an isolated process. The following example raises an exception when a cancellation request has been received:
+Cancellation tokens are supported in .NET functions when running in an isolated worker process. The following example raises an exception when a cancellation request has been received:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Net7Worker/EventHubCancellationToken.cs" id="docsnippet_cancellation_token_throw":::
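The referenced snippet follows this general pattern (a sketch; the hub name and connection setting name are illustrative):

```csharp
using System.Threading;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class EventHubCancellationToken
{
    [Function("ThrowOnCancellation")]
    public static void Run(
        [EventHubTrigger("sample-hub", Connection = "EventHubConnection")] string[] messages,
        FunctionContext context,
        CancellationToken cancellationToken)
    {
        var logger = context.GetLogger("ThrowOnCancellation");

        foreach (var message in messages)
        {
            // Throws OperationCanceledException when the host has requested cancellation.
            cancellationToken.ThrowIfCancellationRequested();
            logger.LogInformation("Processing message: {Message}", message);
        }
    }
}
```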
The following example performs clean-up actions if a cancellation request has be
## ReadyToRun
-You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the impact of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
+You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
-ReadyToRun is available in .NET 3.1, .NET 6 (both in-process and isolated process), and .NET 7, and it requires [version 3.0 or later](functions-versions.md) of the Azure Functions runtime.
+ReadyToRun is available in .NET 3.1, .NET 6 (both in-process and isolated worker process), and .NET 7, and it requires [version 3.0 or later](functions-versions.md) of the Azure Functions runtime.
To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following is the configuration for publishing to a Windows 32-bit function app.
The `Function` attribute marks the method as a function entry point. The name mu
Because .NET isolated projects run in a separate worker process, bindings can't take advantage of rich binding classes, such as `ICollector<T>`, `IAsyncCollector<T>`, and `CloudBlockBlob`. There's also no direct support for types inherited from underlying service SDKs, such as [DocumentClient] and [BrokeredMessage]. Instead, bindings rely on strings, arrays, and serializable types, such as plain old class objects (POCOs).
-For HTTP triggers, you must use [HttpRequestData] and [HttpResponseData] to access the request and response data. This is because you don't have access to the original HTTP request and response objects when running out-of-process.
+For HTTP triggers, you must use [HttpRequestData] and [HttpResponseData] to access the request and response data. This is because you don't have access to the original HTTP request and response objects when using .NET Functions isolated worker process.
-For a complete set of reference samples for using triggers and bindings when running out-of-process, see the [binding extensions reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/Extensions).
+For a complete set of reference samples for using triggers and bindings with isolated worker process functions, see the [binding extensions reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/Extensions).
### Input bindings
An [ILogger] is also provided when using [dependency injection](#dependency-inje
## Debugging when targeting .NET Framework
-If your isolated project targets .NET Framework 4.8, the current preview scope requires manual steps to enable debugging. These steps are not required if using another target framework.
+If your isolated project targets .NET Framework 4.8, the current preview scope requires manual steps to enable debugging. These steps aren't required if using another target framework.
Your app should start with a call to `FunctionsDebugger.Enable();` as its first operation. This occurs in the `Main()` method before initializing a HostBuilder. Your `Program.cs` file should look similar to the following:
namespace MyDotnetFrameworkProject
} ```
-Next, you need to manually attach to the process using a .NET Framework debugger. Visual Studio doesn't do this automatically for isolated process .NET Framework apps yet, and the "Start Debugging" operation should be avoided.
+Next, you need to manually attach to the process using a .NET Framework debugger. Visual Studio doesn't do this automatically for isolated worker process .NET Framework apps yet, and the "Start Debugging" operation should be avoided.
In your project directory (or its build output directory), run:
Azure Functions .NET Worker (PID: <process id>) initialized in debug mode. Waiti
Where `<process id>` is the ID for your worker process. You can now use Visual Studio to manually attach to the process. For instructions on this operation, see [How to attach to a running process](/visualstudio/debugger/attach-to-running-processes-with-the-visual-studio-debugger#BKMK_Attach_to_a_running_process).
-Once the debugger is attached, the process execution will resume and you will be able to debug.
-
-## Differences with .NET class library functions
-
-This section describes the current state of the functional and behavioral differences running on out-of-process compared to .NET class library functions running in-process:
-
-| Feature/behavior | In-process | Out-of-process |
-| - | - | - |
-| .NET versions | .NET Core 3.1<br/>.NET 6.0 | .NET 6.0<br/>.NET 7.0 (Preview)<br/>.NET Framework 4.8 (GA) |
-| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
-| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
-| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported (public preview)](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions) |
-| Model types exposed by bindings | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient]<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations |
-| HTTP trigger model types| [HttpRequest]/[ObjectResult] | [HttpRequestData]/[HttpResponseData] |
-| Output binding interaction | Return values (single output only)<br/>`out` parameters<br/>`IAsyncCollector` | Return values (expanded model with single or [multiple outputs](#multiple-output-bindings)) |
-| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported |
-| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](#dependency-injection) |
-| Middleware | Not supported | [Supported](#middleware) |
-| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via dependency injection | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext] or via [dependency injection](#dependency-injection)|
-| Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported (public preview)](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights) |
-| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](#cancellation-tokens) |
-| Cold start times<sup>2</sup> | (Baseline) | Additionally includes process launch |
-| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | [Supported](#readytorun) |
-
-<sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models.
-
-<sup>2</sup> Cold start times may be additionally impacted on Windows when using some preview versions of .NET due to just-in-time loading of preview frameworks. This applies to both the in-process and out-of-process models but may be particularly noticeable if comparing across different versions. This delay for preview versions is not present on Linux plans.
+After the debugger is attached, the process execution resumes, and you'll be able to debug.
## Remote Debugging using Visual Studio
-Because your isolated process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see [Remote Debugging](functions-develop-vs.md?tabs=isolated-process#remote-debugging).
+Because your isolated worker process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see [Remote Debugging](functions-develop-vs.md?tabs=isolated-process#remote-debugging).
## Next steps + [Learn more about triggers and bindings](functions-triggers-bindings.md)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The following major runtime version values are supported:
## FUNCTIONS\_V2\_COMPATIBILITY\_MODE
-This setting enables your function app to run in a version 2.x compatible mode on the version 3.x runtime. Use this setting only if encountering issues when [upgrading your function app from version 2.x to 3.x of the runtime](functions-versions.md#migrating-from-2x-to-3x).
+This setting enables your function app to run in a version 2.x compatible mode on the version 3.x runtime. Use this setting only if you encounter issues after upgrading your function app from version 2.x to 3.x of the runtime.
>[!IMPORTANT] > This setting is intended only as a short-term workaround while you update your app to run correctly on version 3.x. This setting is supported as long as the [2.x runtime is supported](functions-versions.md). If you encounter issues that prevent your app from running on version 3.x without using this setting, please [report your issue](https://github.com/Azure/azure-functions-host/issues/new?template=Bug_report.md).
Valid values:
| Value | Language | ||| | `dotnet` | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# (script)](functions-reference-csharp.md) |
-| `dotnet-isolated` | [C# (isolated process)](dotnet-isolated-process-guide.md) |
+| `dotnet-isolated` | [C# (isolated worker process)](dotnet-isolated-process-guide.md) |
| `java` | [Java](functions-reference-java.md) | | `node` | [JavaScript](functions-reference-node.md)<br/>[TypeScript](functions-reference-node.md#typescript) | | `powershell` | [PowerShell](functions-reference-powershell.md) |
Sets the version of Node.js to use when running your function app on Windows. Yo
## WEBSITE\_OVERRIDE\_STICKY\_EXTENSION\_VERSIONS
-By default, the version settings for function apps are specific to each slot. This setting is used when upgrading functions by using [deployment slots](functions-deployment-slots.md). This prevents unanticipated behavior due to changing versions after a swap. Set to `0` in production and in the slot to make sure that all version settings are also swapped. For more information, see [Migrate using slots](functions-versions.md#migrate-using-slots).
+By default, the version settings for function apps are specific to each slot. This setting is used when upgrading functions by using [deployment slots](functions-deployment-slots.md). This prevents unanticipated behavior due to changing versions after a swap. Set to `0` in production and in the slot to make sure that all version settings are also swapped. For more information, see [Upgrade using slots](migrate-version-3-version-4.md#upgrade-using-slots).
|Key|Sample value| |||
For more information, see [Create a function on Linux using a custom container](
### netFrameworkVersion
-Sets the specific version of .NET for C# functions. For more information, see [Migrating from 3.x to 4.x](functions-versions.md#migrating-from-3x-to-4x).
+Sets the specific version of .NET for C# functions. For more information, see [Upgrade your function app in Azure](migrate-version-3-version-4.md?pivots=programming-language-csharp#upgrade-your-function-app-in-azure).
### powerShellVersion
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In t
# [Isolated process](#tab/isolated-process)
-Isolated process isn't currently supported.
+Isolated worker process isn't currently supported.
<!-- Uncomment to support C# script examples. # [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
namespace AzureSQLSamples
# [Isolated process](#tab/isolated-process)
-Isolated process isn't currently supported.
+Isolated worker process isn't currently supported.
<!-- Uncomment to support C# script examples. # [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Add the extension to your project by installing this [NuGet package](https://www
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
> [!NOTE]
-> In the current preview, Azure SQL bindings aren't supported when your function app runs in an isolated process.
+> In the current preview, Azure SQL bindings aren't supported when your function app runs in an isolated worker process.
<!-- Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SignalRService/).
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
Here's the binding data in the *function.json* file:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [Functions 2.x+](#tab/functionsv2/in-process)
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [Functions 2.x+](#tab/functionsv2/in-process)
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon
# [Isolated process](#tab/isolated-process/fixed-delay)
-Retry policies aren't yet supported when running in an isolated process.
+Retry policies aren't yet supported when running in an isolated worker process.
# [C# Script](#tab/csharp-script/fixed-delay)
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon
# [Isolated process](#tab/isolated-process/exponential-backoff)
-Retry policies aren't yet supported when running in an isolated process.
+Retry policies aren't yet supported when running in an isolated worker process.
# [C# Script](#tab/csharp-script/exponential-backoff)
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
For information on setup and configuration details, see [How to work with Event
The type of the output parameter used with an Event Grid output binding depends on the Functions runtime version, the binding extension version, and the modality of the C# function. The C# function can be created using one of the following C# modes: * [In-process class library](functions-dotnet-class-library.md): compiled C# function that runs in the same process as the Functions runtime.
-* [Isolated process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+* [Isolated worker process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a worker process isolated from the runtime.
* [C# script](functions-reference-csharp.md): used primarily when creating C# functions in the Azure portal. # [In-process](#tab/in-process)
def main(eventGridEvent: func.EventGridEvent,
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to configure the binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file.
The attribute's constructor takes the name of an application setting that contains the name of the custom topic, and the name of an application setting that contains the topic key.
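For the in-process model, that usage looks roughly like the following sketch. The app setting names (`MyEventGridTopicUriSetting`, `MyEventGridTopicKeySetting`) and the event payload are illustrative:

```csharp
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class EventGridOutputExample
{
    [FunctionName("EventGridOutputExample")]
    [return: EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting = "MyEventGridTopicKeySetting")]
    public static EventGridEvent Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        ILogger log)
    {
        log.LogInformation("Publishing an Event Grid event.");

        // Subject, event type, data version, and data are illustrative values.
        return new EventGridEvent("subject", "Sample.EventType", "1.0", new { message = "hello" });
    }
}
```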
Requires you to define a custom type, or use a string. See the [Example section]
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support isolated worker process.
# [Extension v3.x](#tab/extensionv3/csharp-script)
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
namespace Company.Function
``` # [Isolated process](#tab/isolated-process)
-When running your C# function in an isolated process, you need to define a custom type for event properties. The following example defines a `MyEventType` class.
+When running your C# function in an isolated worker process, you need to define a custom type for event properties. The following example defines a `MyEventType` class.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventGrid/EventGridFunction.cs" range="35-49":::
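The custom type in the referenced sample is similar to the following sketch; the exact property set is illustrative and should match the event schema you expect:

```csharp
using System;
using System.Collections.Generic;

public class MyEventType
{
    public string? Id { get; set; }
    public string? Topic { get; set; }
    public string? Subject { get; set; }
    public string? EventType { get; set; }
    public DateTime EventTime { get; set; }
    public IDictionary<string, object>? Data { get; set; }
}
```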
def main(event: func.EventGridEvent):
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
Requires you to define a custom type, or use a string. See the [Example section]
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Extension v3.x](#tab/extensionv3/csharp-script)
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
The Event Grid output binding is only available for Functions 2.x and higher.
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to configure the binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Http Webhook Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-output.md
The default return value for an HTTP-triggered function is:
::: zone pivot="programming-language-csharp" ## Attribute
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
def main(req: func.HttpRequest) -> func.HttpResponse:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
In [in-process functions](functions-dotnet-class-library.md), the `HttpTriggerAt
# [Isolated process](#tab/isolated-process)
-In [isolated process](dotnet-isolated-process-guide.md) function apps, the `HttpTriggerAttribute` supports the following parameters:
+In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `HttpTriggerAttribute` supports the following parameters:
| Parameters | Description| ||-|
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions v1.x](#tab/functionsv1/isolated-process)
-Functions 1.x doesn't support running in an isolated process.
+Functions 1.x doesn't support running in an isolated worker process.
# [Functions v2.x+](#tab/functionsv2/csharp-script)
azure-functions Functions Bindings Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-output.md
An [in-process class library](functions-dotnet-class-library.md) is a compiled C
# [Isolated process](#tab/isolated-process)
-An [isolated process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime.
For a complete set of working Java examples for Confluent, see the [Kafka extens
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `Kafka` attribute to define the function trigger.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `Kafka` attribute to define the output binding.
The following table explains the properties you can set using this attribute:
azure-functions Functions Bindings Kafka Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md
An [in-process class library](functions-dotnet-class-library.md) is a compiled C
# [Isolated process](#tab/isolated-process)
-An [isolated process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime.
For a complete set of working Java examples for Event Hubs, see the [Kafka exten
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `KafkaTriggerAttribute` to define the function trigger.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `KafkaTriggerAttribute` to define the function trigger.
The following table explains the properties you can set using this trigger attribute:
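To show how those properties appear in attribute form, here's a minimal in-process sketch; the broker app setting, topic, and consumer group values are placeholders, and a local broker without authentication is assumed:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Kafka;
using Microsoft.Extensions.Logging;

public static class KafkaTriggerSketch
{
    [FunctionName("KafkaTriggerSketch")]
    public static void Run(
        // "BrokerList" names an app setting holding the broker addresses;
        // the topic and consumer group values are placeholders.
        [KafkaTrigger("BrokerList", "mytopic", ConsumerGroup = "$Default")] string kafkaEvent,
        ILogger log)
    {
        log.LogInformation($"Kafka message: {kafkaEvent}");
    }
}
```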
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
Add the extension to your project by installing this [NuGet package](https://www
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Kafka).
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
def main(req: func.HttpRequest, outputMessage: func.Out[str]) -> func.HttpRespon
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
The attribute's constructor takes the following parameters:
ILogger log)
In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
-Here's a `RabbitMQTrigger` attribute in a method signature for an isolated process library:
+Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/RabbitMQ/RabbitMQFunction.cs" range="12-16":::
When working with C# functions:
# [Isolated process](#tab/isolated-process)
-The RabbitMQ bindings currently support only string and serializable object types when running in an isolated process.
+The RabbitMQ bindings currently support only string and serializable object types when running in an isolated worker process.
# [C# script](#tab/csharp-script)
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
def main(myQueueItem) -> None:
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `RabbitMQTrigger` attribute to define the function. C# script instead uses a function.json configuration file.
The attribute's constructor takes the following parameters:
public static void RabbitMQTest([RabbitMQTrigger("queue")] string message, ILogg
In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
-Here's a `RabbitMQTrigger` attribute in a method signature for an isolated process library:
+Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/RabbitMQ/RabbitMQFunction.cs" range="12-16":::
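For comparison, a minimal in-process sketch using the same attribute might look like this; the queue name and connection setting name are placeholders:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.RabbitMQ;
using Microsoft.Extensions.Logging;

public static class RabbitMQTriggerSketch
{
    [FunctionName("RabbitMQTriggerSketch")]
    public static void Run(
        // The queue name and the connection app setting name are placeholders.
        [RabbitMQTrigger("myqueue", ConnectionStringSetting = "RabbitMQConnection")] string message,
        ILogger log)
    {
        log.LogInformation($"RabbitMQ message: {message}");
    }
}
```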
azure-functions Functions Bindings Rabbitmq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq.md
Add the extension to your project by installing this [NuGet package](https://www
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Rabbitmq).
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
The following table lists the currently available versions of the default *Micro
## Explicitly install extensions
-For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings).
+For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples, see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings).
For non-.NET languages and C# script, when you can't use extension bundles, you need to manually install the required binding extensions in your local project. The easiest way is to use Azure Functions Core Tools. To learn more, see [Install extensions](functions-run-local.md#install-extensions).
azure-functions Functions Bindings Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions v1.x](#tab/functionsv1/isolated-process)
-Functions 1.x doesn't support running in an isolated process.
+Functions 1.x doesn't support running in an isolated worker process.
# [Functions v2.x+](#tab/functionsv2/csharp-script)
You can omit setting the attribute's `ApiKey` property if you have your API key
# [Isolated process](#tab/isolated-process)
-We don't currently have an example for using the SendGrid binding in a function app running in an isolated process.
+We don't currently have an example for using the SendGrid binding in a function app running in an isolated worker process.
# [C# Script](#tab/csharp-script)
public class HttpTriggerSendGrid {
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
In [in-process](functions-dotnet-class-library.md) function apps, use the [SendG
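A minimal in-process sketch of the output binding might look like the following; the queue name, app setting name, and email addresses are placeholders:

```csharp
using Microsoft.Azure.WebJobs;
using SendGrid.Helpers.Mail;

public static class SendGridOutputSketch
{
    [FunctionName("SendGridOutputSketch")]
    public static void Run(
        // The queue name and the "SendGridApiKey" app setting name are placeholders.
        [QueueTrigger("email-queue")] string emailBody,
        [SendGrid(ApiKey = "SendGridApiKey")] out SendGridMessage message)
    {
        message = new SendGridMessage();
        message.SetFrom(new EmailAddress("sender@example.com"));
        message.AddTo("recipient@example.com");
        message.SetSubject("Message from queue");
        message.AddContent("text/plain", emailBody);
    }
}
```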
# [Isolated process](#tab/isolated-process)
-In [isolated process](dotnet-isolated-process-guide.md) function apps, the `SendGridOutputAttribute` supports the following parameters:
+In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `SendGridOutputAttribute` supports the following parameters:
| Attribute/annotation property | Description |
| ----------------------------- | ----------- |
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
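As an illustrative sketch, an in-process function can bind the output to its return value; the queue name and connection setting name below are placeholders:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ServiceBusOutputSketch
{
    // "outputqueue" and "ServiceBusConnection" are placeholder names.
    [FunctionName("ServiceBusOutputSketch")]
    [return: ServiceBus("outputqueue", Connection = "ServiceBusConnection")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Writing a message to the Service Bus queue.");
        return "queue message payload";
    }
}
```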
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
def main(msg: func.ServiceBusMessage):
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
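A minimal in-process sketch of this trigger attribute, with placeholder queue and connection setting names:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ServiceBusTriggerSketch
{
    [FunctionName("ServiceBusTriggerSketch")]
    public static void Run(
        // The queue name and connection app setting name are placeholders.
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string myQueueItem,
        ILogger log)
    {
        log.LogInformation($"Service Bus queue message: {myQueueItem}");
    }
}
```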
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
Add the extension to your project installing this [NuGet package](https://www.nu
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.servicebus).
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Extension 5.x+](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
public static SignalRConnectionInfo Negotiate(
# [Isolated process](#tab/isolated-process)
-Sample code not available for isolated process.
+Sample code not available for the isolated worker process.
# [C# Script](#tab/csharp-script)
public SignalRConnectionInfo negotiate(
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
public SignalRGroupAction removeFromGroup(
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
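As a hedged sketch of the in-process pattern, the output binding can be an `IAsyncCollector<SignalRMessage>` parameter; the hub name and target are placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class SignalROutputSketch
{
    // The hub name ("chat") and target ("newMessage") are placeholders.
    [FunctionName("SignalROutputSketch")]
    public static Task Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalR(HubName = "chat")] IAsyncCollector<SignalRMessage> signalRMessages)
    {
        return signalRMessages.AddAsync(new SignalRMessage
        {
            Target = "newMessage",
            Arguments = new object[] { "hello" }
        });
    }
}
```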
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
def main(invocation) -> None:
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `SignalRTrigger` attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `SignalRTrigger` attribute to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
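A minimal in-process sketch, assuming placeholder hub, category, and event names:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;
using Microsoft.Extensions.Logging;

public static class SignalRTriggerSketch
{
    // Hub name ("chat"), category ("messages"), and event ("SendMessage") are placeholders.
    [FunctionName("SignalRTriggerSketch")]
    public static void Run(
        [SignalRTrigger("chat", "messages", "SendMessage")] InvocationContext invocationContext,
        ILogger logger)
    {
        logger.LogInformation($"Received {invocationContext.Event} from {invocationContext.ConnectionId}.");
    }
}
```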
You can follow the sample in GitHub to deploy a chat room on Function App with S
* [Azure Functions development and configuration with Azure SignalR Service](../azure-signalr/signalr-concept-serverless-development-config.md)
* [SignalR Service Trigger binding sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat)
-* [SignalR Service Trigger binding sample in isolated process](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/DotnetIsolated-BidirectionChat)
+* [SignalR Service Trigger binding sample in isolated worker process](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/DotnetIsolated-BidirectionChat)
azure-functions Functions Bindings Signalr Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service.md
Add the extension to your project by installing this [NuGet package].
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SignalRService/).
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
public static void Run(
# [Isolated process](#tab/isolated-process)
-The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="9-26":::
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
public static void Run(
# [Isolated process](#tab/isolated-process)
-Isolated process defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
+The isolated worker process defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
|Parameter | Description|
|----------|------------|
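A minimal sketch of the attribute in use, assuming placeholder queue and container names; the `{queueTrigger}` expression binds the blob path from the incoming queue message:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class BlobInputSketch
{
    // The queue name and container path are placeholders.
    [Function("BlobInputSketch")]
    public static void Run(
        [QueueTrigger("myqueue-items")] string blobName,
        [BlobInput("test-samples-input/{queueTrigger}")] string blobContent,
        FunctionContext context)
    {
        var logger = context.GetLogger("BlobInputSketch");
        logger.LogInformation($"Blob {blobName} content length: {blobContent.Length}");
    }
}
```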
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
public class ResizeImages
# [Isolated process](#tab/isolated-process)
-The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="4-26":::
def main(queuemsg: func.QueueMessage, inputblob: bytes, outputblob: func.Out[byt
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
For more information about the `BlobTrigger` attribute, see [Attributes](#attrib
# [Isolated process](#tab/isolated-process)
-The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="9-25":::
def main(myblob: func.InputStream):
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute to define the function. C# script instead uses a function.json configuration file.
The attribute's constructor takes the following parameters:
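In practice, a minimal in-process usage of the attribute might look like this sketch; the container name is a placeholder, and `{name}` captures the blob name:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobTriggerSketch
{
    // "samples-workitems" is a placeholder container name.
    [FunctionName("BlobTriggerSketch")]
    public static void Run(
        [BlobTrigger("samples-workitems/{name}")] Stream myBlob,
        string name,
        ILogger log)
    {
        log.LogInformation($"Blob trigger processed blob: {name}, {myBlob.Length} bytes");
    }
}
```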
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [Microsoft.Azure.Functions.W
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
You can use the `StorageAccount` attribute to specify the storage account at cla
# [Isolated process](#tab/isolated-process)
-When running in an isolated process, you use the [QueueOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Storage.Queues/src/QueueOutputAttribute.cs), which takes the name of the queue, as shown in the following example:
+When running in an isolated worker process, you use the [QueueOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Storage.Queues/src/QueueOutputAttribute.cs), which takes the name of the queue, as shown in the following example:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_trigger" :::
-Only returned variables are supported when running in an isolated process. Output parameters can't be used.
+Only returned variables are supported when running in an isolated worker process. Output parameters can't be used.
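A minimal sketch of the return-value pattern, with a placeholder queue name:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class QueueOutputSketch
{
    // "output-queue" is a placeholder; the return value becomes the queue message.
    [Function("QueueOutputSketch")]
    [QueueOutput("output-queue")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        return "message created by QueueOutputSketch";
    }
}
```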
# [C# script](#tab/csharp-script)
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
You can write multiple messages to the queue by using one of the following types
# [Extension 5.x+](#tab/extensionv5/isolated-process)
-Isolated process currently only supports binding to string parameters.
+The isolated worker process currently supports binding only to string parameters.
# [Extension 2.x+](#tab/extensionv2/isolated-process)
-Isolated process currently only supports binding to string parameters.
+The isolated worker process currently supports binding only to string parameters.
# [Extension 5.x+](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
def main(msg: func.QueueMessage):
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
An in-process class library is a compiled C# function runs in the same process a
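A minimal in-process sketch of the trigger attribute, with placeholder queue and connection setting names:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueTriggerSketch
{
    [FunctionName("QueueTriggerSketch")]
    public static void Run(
        // The queue name and connection app setting name are placeholders.
        [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
        ILogger log)
    {
        log.LogInformation($"Queue trigger processed: {myQueueItem}");
    }
}
```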
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
When binding to an object, the Functions runtime tries to deserialize the JSON p
# [Extension 5.x+](#tab/extensionv5/isolated-process)
-Isolated process currently only supports binding to string parameters.
+The isolated worker process currently supports binding only to string parameters.
# [Extension 2.x+](#tab/extensionv2/isolated-process)
-Isolated process currently only supports binding to string parameters.
+The isolated worker process currently supports binding only to string parameters.
# [Extension 5.x+](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Extension 5.x+](#tab/extensionv5/csharp-script)
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
An [in-process class library](functions-dotnet-class-library.md) is a compiled C
# [Isolated process](#tab/isolated-process)
-An [isolated process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
The `Filter` and `Take` properties are used to limit the number of entities retu
# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Azure Cosmos DB for Table extension does not currently support isolated process. You will instead need to use the combined Azure Storage extension.
+The Azure Cosmos DB for Table extension does not currently support the isolated worker process. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
With this simple binding, you can't programmatically handle a case in which no r
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
An in-process class library is a compiled C# function that runs in the same proc
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
To return a specific entity by key, use a plain-old CLR object (POCO). The speci
# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
+The Azure Cosmos DB for Table extension does not currently support the isolated worker process. You will instead need to use the combined Azure Storage extension.
+The Azure Cosmos DB for Table extension does not currently support isolated worker process. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
# [C# script](#tab/csharp-script)
Return a plain-old CLR object (POCO) with properties that can be mapped to the t
# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Azure Cosmos DB for Table extension does not currently support isolated process. You will instead need to use the combined Azure Storage extension.
+The Azure Cosmos DB for Table extension does not currently support the isolated worker process. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Tables are included in a combined package for Azure Storage. Install the [Micros
# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Azure Cosmos DB for Table extension does not currently support isolated process. You will instead need to use the [Storage extension](#storage-extension).
+The Azure Cosmos DB for Table extension does not currently support the isolated worker process. You will instead need to use the [Storage extension](#storage-extension).
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Functions version 1.x doesn't support isolated process.
+Functions version 1.x doesn't support the isolated worker process.
# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
def main(mytimer: func.TimerRequest) -> None:
::: zone pivot="programming-language-csharp" ## Attributes
-[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [Isolated process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function.
+[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function.
C# script instead uses a function.json configuration file.
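For illustration, a minimal in-process sketch follows; the NCRONTAB expression (every five minutes) is a placeholder schedule:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TimerTriggerSketch
{
    // "0 */5 * * * *" runs the function every five minutes.
    [FunctionName("TimerTriggerSketch")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
    {
        log.LogInformation($"Timer trigger fired at: {DateTime.Now}");
    }
}
```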
azure-functions Functions Bindings Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md
Functions execute in the same process as the Functions host. To learn more, see
# [Isolated process](#tab/isolated-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
# [C# script](#tab/csharp-script)
Add the extension to your project by installing the [NuGet package](https://www.
# [Functions v2.x+](#tab/functionsv2/isolated-process)
-There is currently no support for Twilio for an isolated process app.
+There is currently no support for Twilio for an isolated worker process app.
# [Functions v1.x](#tab/functionsv1/isolated-process)
-Functions 1.x doesn't support running in an isolated process.
+Functions 1.x doesn't support running in an isolated worker process.
# [Functions v2.x+](#tab/functionsv2/csharp-script)
This example uses the `TwilioSms` attribute with the method return value. An alt
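A minimal sketch of that return-value pattern, assuming placeholder setting names and phone numbers:

```csharp
using Microsoft.Azure.WebJobs;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

public static class TwilioOutputSketch
{
    // The app setting names and phone numbers are placeholders.
    [FunctionName("TwilioOutputSketch")]
    [return: TwilioSms(AccountSidSetting = "TwilioAccountSid", AuthTokenSetting = "TwilioAuthToken", From = "+15551234567")]
    public static CreateMessageOptions Run([QueueTrigger("sms-queue")] string messageText)
    {
        return new CreateMessageOptions(new PhoneNumber("+15557654321"))
        {
            Body = messageText
        };
    }
}
```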
# [Isolated process](#tab/isolated-process)
-The Twilio binding isn't currently supported for a function app running in an isolated process.
+The Twilio binding isn't currently supported for a function app running in an isolated worker process.
# [C# Script](#tab/csharp-script)
public class TwilioOutput {
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
# [In-process](#tab/in-process)
In [in-process](functions-dotnet-class-library.md) function apps, use the [Twili
# [Isolated process](#tab/isolated-process)
-The Twilio binding isn't currently supported for a function app running in an isolated process.
+The Twilio binding isn't currently supported for a function app running in an isolated worker process.
# [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
def main(warmupContext: func.Context) -> None:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTrigger` attribute to define the function. C# script instead uses a *function.json* configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTrigger` attribute to define the function. C# script instead uses a *function.json* configuration file.
# [In-process](#tab/in-process)
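A minimal in-process sketch of the warmup trigger, assuming the standard `WarmupContext` parameter:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class WarmupSketch
{
    // Runs while a new instance of the function app is being prepared.
    [FunctionName("Warmup")]
    public static void Run([WarmupTrigger()] WarmupContext context, ILogger log)
    {
        log.LogInformation("Function app instance is warming up.");
    }
}
```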
azure-functions Functions Create Your First Function Visual Studio Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio-uiex.md
Title: "Quickstart: Create your first function in Azure using Visual Studio"
description: In this quickstart, you learn how to create and publish an HTTP trigger Azure Function by using Visual Studio. ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 09/30/2020 Last updated : 11/8/2022 ms.devlang: csharp
The `FunctionName` method attribute sets the name of the function, which by defa
+ **Select** <abbr title="When you publish your project to a function app that runs in a Consumption plan, you pay only for executions of your functions app. Other hosting plans incur higher costs.">Consumption</abbr> in the Plan Type drop-down. (For more information, see [Consumption plan](consumption-plan.md).)
- + **Select** an <abbr title="A geographical reference to a specific Azure datacenter in which resources are allocated.See [regions](https://azure.microsoft.com/regions/) for a list of available regions.">location</abbr> from the drop-down.
+ + **Select** a <abbr title="A geographical reference to a specific Azure datacenter in which resources are allocated. See [regions](https://azure.microsoft.com/regions/) for a list of available regions.">location</abbr> from the drop-down.
+ **Select** an <abbr title="An Azure Storage account is required by the Functions runtime. Select New to configure a general-purpose storage account. You can also choose an existing account that meets the storage account requirements.">Azure Storage</abbr> account from the drop-down.
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Title: "Quickstart: Create your first C# function in Azure using Visual Studio"
description: "In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions." ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 09/08/2022 Last updated : 11/08/2022 ms.devlang: csharp adobe-target: true
adobe-target-content: ./functions-create-your-first-function-visual-studio-uiex
Azure Functions lets you use Visual Studio to create local C# function projects and then easily publish this project to run in a scalable serverless environment in Azure. If you prefer to develop your C# apps locally using Visual Studio Code, you should instead consider the [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
-By default, this article shows you how to create C# functions that run [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET. To create C# functions [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](functions-create-your-first-function-visual-studio.md?tabs=isolated-process). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) before getting started.
+By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions, such as .NET 6. When creating your project, you can choose to instead create a function that runs on .NET 6 in an [isolated worker process](dotnet-isolated-process-guide.md). [Isolated worker process](dotnet-isolated-process-guide.md) supports both LTS and Standard Term Support (STS) versions of .NET. For more information, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions) in the .NET Functions isolated worker process guide.
In this article, you learn how to:
Completing this quickstart incurs a small cost of a few USD cents or less in you
+ [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure to select the **Azure development** workload during installation.
-+ [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't already have an account [create a free one](https://azure.microsoft.com/free/dotnet/) before you begin.
++ [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't already have an account, [create a free one](https://azure.microsoft.com/free/dotnet/) before you begin.

## Create a function app project
The Azure Functions project template in Visual Studio creates a C# class library
1. For the **Additional information** settings, use the values in the following table:
- # [.NET 6](#tab/in-process)
+ # [In-process](#tab/in-process)
| Setting | Value | Description |
| ------- | ----- | ----------- |
The Azure Functions project template in Visual Studio creates a C# class library
:::image type="content" source="../../includes/media/functions-vs-tools-create/functions-project-settings-v4.png" alt-text="Screenshot of Azure Functions project settings.":::
- # [.NET 6 Isolated](#tab/isolated-process)
+ # [Isolated process](#tab/isolated-process)
| Setting | Value | Description |
| ------- | ----- | ----------- |
- | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated process when you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
+ | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated worker process when you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
| **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. |
| **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. |
| **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
The `FunctionName` method attribute sets the name of the function, which by defa
Your function definition should now look like the following code:
-# [.NET 6](#tab/in-process)
+# [In-process](#tab/in-process)
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-template/HttpExample.cs" range="15-18":::
-# [.NET 6 Isolated](#tab/isolated-process)
+# [Isolated process](#tab/isolated-process)
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-isolated/HttpExample.cs" range="11-13":::
After you've verified that the function runs correctly on your local computer, i
## Publish the project to Azure
-Visual Studio can publish your local project to Azure. Before you can publish your project, you must have a function app in your Azure subscription. If you don't already have a function app in Azure, Visual Studio publishing creates one for you the first time you publish your project. In this article you create a function app and related Azure resources.
+Visual Studio can publish your local project to Azure. Before you can publish your project, you must have a function app in your Azure subscription. If you don't already have a function app in Azure, Visual Studio publishing creates one for you the first time you publish your project. In this article, you create a function app and related Azure resources.
[!INCLUDE [Publish the project to Azure](../../includes/functions-vstools-publish.md)]
Visual Studio can publish your local project to Azure. Before you can publish yo
*Resources* in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into *resource groups*, and you can delete everything in a group by deleting the group.
-You created Azure resources to complete this quickstart. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts, tutorials, or with any of the services you have created in this quickstart, don't clean up the resources.
+You created Azure resources to complete this quickstart. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts, tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
[!INCLUDE [functions-vstools-cleanup](../../includes/functions-vstools-cleanup.md)]
You created Azure resources to complete this quickstart. You may be billed for t
In this quickstart, you used Visual Studio to create and publish a C# function app in Azure with a simple HTTP trigger function.
-# [.NET 6](#tab/in-process)
+# [In-process](#tab/in-process)
To learn more about working with C# functions that run in-process with the Functions host, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
Advance to the next article to learn how to add an Azure Storage queue binding t
> [!div class="nextstepaction"]
> [Add an Azure Storage queue binding to your function](functions-add-output-binding-storage-queue-vs.md?tabs=in-process)
-# [.NET 6 Isolated](#tab/isolated-process)
+# [Isolated process](#tab/isolated-process)
-To learn more about working with C# functions that run in an isolated process, see the [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) to see other versions of supported .NET versions in an isolated process .
+To learn more about working with C# functions that run in an isolated worker process, see the [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) to see the other .NET versions supported in an isolated worker process.
Advance to the next article to learn how to add an Azure Storage queue binding to your function:

> [!div class="nextstepaction"]
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
There are a number of advantages to using deployment slots. The following scenar
- **Different environments for different purposes**: Using different slots gives you the opportunity to differentiate app instances before swapping to production or a staging slot.
- **Prewarming**: Deploying to a slot instead of directly to production allows the app to warm up before going live. Additionally, using slots reduces latency for HTTP-triggered workloads. Instances are warmed up before deployment, which reduces the cold start for newly deployed functions.
- **Easy fallbacks**: After a swap with production, the slot with a previously staged app now has the previous production app. If the changes swapped into the production slot aren't as you expect, you can immediately reverse the swap to get your "last known good instance" back.
-- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into production with a prewarmed instance. This is the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](functions-versions.md#minimum-downtime-upgrade).
+- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into production with a prewarmed instance. This is the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](migrate-version-3-version-4.md#minimum-downtime-upgrade).
## Swap operations
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
The way in which you develop functions on your local computer depends on your [l
|Environment |Languages |Description|
|--|--|--|
-|[Visual Studio Code](functions-develop-vs-code.md)| [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
-| [Command prompt or terminal](functions-run-local.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
-| [Visual Studio](functions-develop-vs.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated process)](dotnet-isolated-process-guide.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio](https://www.visualstudio.com/vs/), starting with Visual Studio 2019. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
+|[Visual Studio Code](functions-develop-vs-code.md)| [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated worker process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
+| [Command prompt or terminal](functions-run-local.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated worker process)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
+| [Visual Studio](functions-develop-vs.md) | [C# (in-process)](functions-dotnet-class-library.md)<br/>[C# (isolated worker process)](dotnet-isolated-process-guide.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio](https://www.visualstudio.com/vs/), starting with Visual Studio 2019. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Maven archetype supports Core Tools to enable development of Java functions. Version 2.x supports development on Linux, macOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md). |

[!INCLUDE [Don't mix development environments](../../includes/functions-mixed-dev-environments.md)]
When you develop your functions locally, you need to take trigger and binding be
## Local storage emulator
-During local development, you can use the local [Azurite emulator](/azure/storage/common/storage-use-azurite.md) when testing functions with Azure Storage bindings (Queue Storage, Blob Storage, and Table Storage), without having to connect to remote storage services. Azurite integrates with Visual Studio Code and Visual Studio, and you can also run it from the command prompt using npm. For more information, see [Use the Azurite emulator for local Azure Storage development](/storage/common/storage-use-azurite.md).
+During local development, you can use the local [Azurite emulator](../storage/common/storage-use-azurite.md) when testing functions with Azure Storage bindings (Queue Storage, Blob Storage, and Table Storage), without having to connect to remote storage services. Azurite integrates with Visual Studio Code and Visual Studio, and you can also run it from the command prompt using npm. For more information, see [Use the Azurite emulator for local Azure Storage development](../storage/common/storage-use-azurite.md).
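For example, you can install and launch Azurite from a terminal by using npm. This is a minimal sketch that assumes Node.js and npm are installed; the data folder path is an arbitrary choice:

```console
npm install -g azurite
azurite --location ./azurite-data
```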
The following setting in the `Values` collection of the local.settings.json file tells the local Functions host to use Azurite for the default `AzureWebJobsStorage` connection:
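The following is a minimal sketch of that file, using the standard development storage shortcut value for Azurite:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}
```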
With this setting in place, any Azure Storage trigger or binding that uses `Azur
## Next steps
-+ To learn more about local development of compiled C# functions (both in-process and isolated process) using Visual Studio, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md).
++ To learn more about local development of compiled C# functions (both in-process and isolated worker process) using Visual Studio, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md).
+ To learn more about local development of functions using VS Code on a Mac, Linux, or Windows computer, see the Visual Studio Code getting started article for your preferred language:
  + [C# (in-process)](create-first-function-vs-code-csharp.md)
- + [C# (isolated process)](create-first-function-vs-code-csharp.md?tabs=isolated-process)
+ + [C# (isolated worker process)](create-first-function-vs-code-csharp.md?tabs=isolated-process)
  + [Java](create-first-function-vs-code-java.md)
  + [JavaScript](create-first-function-vs-code-node.md)
  + [PowerShell](create-first-function-vs-code-powershell.md)
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
This section assumes you've already published to your function app using a relea
### Attach the debugger
-The way you attach the debugger depends on your execution mode. When debugging an isolated process app, you currently need to attach the remote debugger to a separate .NET process, and several other configuration steps are required.
+The way you attach the debugger depends on your execution mode. When debugging an isolated worker process app, you currently need to attach the remote debugger to a separate .NET process, and several other configuration steps are required.
When you're done, you should [disable remote debugging](#disable-remote-debugging).
Visual Studio connects to your function app and enables remote debugging, if it'
To attach a remote debugger to a function app running in a process separate from the Functions host:
-1. From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Download publish profile**. This action downloads a copy of the publish profile and opens the download location. You need this file, which contains the credentials used to attach to your isolated process running in Azure.
+1. From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Download publish profile**. This action downloads a copy of the publish profile and opens the download location. You need this file, which contains the credentials used to attach to your isolated worker process running in Azure.
> [!CAUTION]
> The .publishsettings file contains your credentials (unencoded) that are used to administer your function app. The security best practice for this file is to store it temporarily outside your source directories (for example in the Libraries\Documents folder), and then delete it after it's no longer needed. A malicious user who gains access to the .publishsettings file can edit, create, and delete your function app.
To attach a remote debugger to a function app running in a process separate from
![Visual Studio enter credential](./media/functions-develop-vs/creds-dialog.png)
-1. Check **Show process from all users** and then choose **dotnet.exe** and select **Attach**. When the operation completes, you're attached to your C# class library code running in an isolated process. At this point, you can debug your function app as normal.
+1. Check **Show process from all users** and then choose **dotnet.exe** and select **Attach**. When the operation completes, you're attached to your C# class library code running in an isolated worker process. At this point, you can debug your function app as normal.
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
Last updated 10/12/2022
This article is an introduction to developing Azure Functions by using C# in .NET class libraries.

>[!IMPORTANT]
->This article supports .NET class library functions that run in-process with the runtime. Your C# functions can also run out-of-process and isolated from the Functions runtime. The isolated model is the only way to run .NET 5.x and the preview of .NET Framework 4.8 using recent versions of the Functions runtime. To learn more, see [.NET isolated process functions](dotnet-isolated-process-guide.md).
+>This article supports .NET class library functions that run in-process with the runtime. Your C# functions can also run out-of-process and isolated from the Functions runtime. The isolated worker process model is the only way to run non-LTS versions of .NET and .NET Framework apps in current versions of the Functions runtime. To learn more, see [.NET isolated worker process functions](dotnet-isolated-process-guide.md).
+>For a comprehensive comparison between isolated worker process and in-process .NET Functions, see [Differences between in-process and isolated worker process .NET Azure Functions](dotnet-isolated-in-process-differences.md).
As a C# developer, you may also be interested in one of the following articles:
azure-functions Functions Dotnet Dependency Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-dependency-injection.md
Azure Functions supports the dependency injection (DI) software design pattern,
- Dependency injection patterns differ depending on whether your C# functions run [in-process](functions-dotnet-class-library.md) or [out-of-process](dotnet-isolated-process-guide.md).

> [!IMPORTANT]
-> The guidance in this article applies only to [C# class library functions](functions-dotnet-class-library.md), which run in-process with the runtime. This custom dependency injection model doesn't apply to [.NET isolated functions](dotnet-isolated-process-guide.md), which lets you run .NET 5.0 functions out-of-process. The .NET isolated process model relies on regular ASP.NET Core dependency injection patterns. To learn more, see [Dependency injection](dotnet-isolated-process-guide.md#dependency-injection) in the .NET isolated process guide.
+> The guidance in this article applies only to [C# class library functions](functions-dotnet-class-library.md), which run in-process with the runtime. This custom dependency injection model doesn't apply to [.NET isolated functions](dotnet-isolated-process-guide.md), which lets you run .NET functions out-of-process. The .NET isolated worker process model relies on regular ASP.NET Core dependency injection patterns. To learn more, see [Dependency injection](dotnet-isolated-process-guide.md#dependency-injection) in the .NET isolated worker process guide.
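As a brief illustration of that ASP.NET Core-style pattern, the following is a minimal sketch of a `Program.cs` for an isolated worker process app. `IMyService` and `MyService` are hypothetical placeholder types:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Build the Functions worker host and register services by using standard
// ASP.NET Core dependency injection. (Sketch; IMyService/MyService are placeholders.)
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        services.AddSingleton<IMyService, MyService>();
    })
    .Build();

host.Run();

public interface IMyService { }
public class MyService : IMyService { }
```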
## Prerequisites
azure-functions Functions Event Grid Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-grid-blob-trigger.md
When you use Visual Studio Code to create a Blob Storage triggered function, you
|Prompt|Selection|
|--|--|
|**Select a language**|Choose `C#`.|
- |**Select a .NET runtime**| Choose `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated process. |
+ |**Select a .NET runtime**| Choose `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated worker process. |
|**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
|**Provide a function name**|Type `BlobTriggerEventGrid`.|
|**Provide a namespace** | Type `My.Functions`. |
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitoring.md
In [C#](functions-dotnet-class-library.md#log-custom-telemetry-in-c-functions),
### Dependencies
-Starting with version 2.x of Functions, Application Insights automatically collects data on dependencies for bindings that use certain client SDKs. Application Insights distributed tracing and dependency tracking aren't currently supported for C# apps running in an [isolated process](dotnet-isolated-process-guide.md). Application Insights collects data on the following dependencies:
+Starting with version 2.x of Functions, Application Insights automatically collects data on dependencies for bindings that use certain client SDKs. Application Insights distributed tracing and dependency tracking aren't currently supported for C# apps running in an [isolated worker process](dotnet-isolated-process-guide.md). Application Insights collects data on the following dependencies:
+ Azure Cosmos DB
+ Azure Event Hubs
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
Azure Functions lets you develop functions using C# in one of the following ways
| --- | --- | --- | --- | --- |
| C# script | in-process | .csx | [Portal](functions-create-function-app-portal.md)<br/>[Core Tools](functions-run-local.md) | This article |
| C# class library | in-process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md)| [In-process C# class library functions](functions-dotnet-class-library.md) |
-| C# class library (isolated process)| in an isolated process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md) | [.NET isolated process functions](dotnet-isolated-process-guide.md) |
+| C# class library (isolated worker process)| in an isolated worker process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md) | [.NET isolated worker process functions](dotnet-isolated-process-guide.md) |
This article assumes that you've already read the [Azure Functions developers guide](functions-reference.md).
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
Certain languages may have additional considerations:
# [C\#](#tab/csharp)
-+ By default, version 2.x and later versions of the Core Tools create function app projects for the .NET runtime as [C# class projects](functions-dotnet-class-library.md) (.csproj). Version 3.x also supports creating functions that [run on .NET 5.0 in an isolated process](dotnet-isolated-process-guide.md). These C# projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure.
++ Core Tools lets you create function app projects for the .NET runtime as both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# class library projects (.csproj). These projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure. (A sketch of the corresponding `func init` commands appears after this list.)
+ Use the `--csx` parameter if you want to work locally with C# script (.csx) files. These are the same files you get when you create functions in the Azure portal and when using version 1.x of Core Tools. To learn more, see the [func init reference](functions-core-tools-reference.md#func-init).
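As a minimal sketch, the following commands create each project type with Core Tools; the project names are hypothetical placeholders:

```console
func init MyIsolatedProject --worker-runtime dotnet-isolated
func init MyScriptProject --worker-runtime dotnet --csx
```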
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you. Previously updated : 10/04/2022 Last updated : 10/22/2022 zone_pivot_groups: programming-languages-set-functions
zone_pivot_groups: programming-languages-set-functions
| 1.x | GA | Recommended only for C# apps that must use .NET Framework and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. This version is in maintenance mode, with enhancements provided only in later versions. |

> [!IMPORTANT]
-> Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime can no longer be supported. Before that time, please test, verify, and migrate your function apps to version 4.x of the Functions runtime. For more information, see [Migrating from 3.x to 4.x](#migrating-from-3x-to-4x). After the deadline, function apps can be created and deployed, and existing apps continue to run. However, your apps won't be eligible for new features, security patches, performance optimizations, and support until you upgrade them to version 4.x.
+> Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime can no longer be supported. Before that time, please test, verify, and migrate your function apps to version 4.x of the Functions runtime. For more information, see [Migrate apps from Azure Functions version 3.x to version 4.x](migrate-version-3-version-4.md). After the deadline, function apps can be created and deployed, and existing apps continue to run. However, your apps won't be eligible for new features, security patches, performance optimizations, and support until you upgrade them to version 4.x.
>
>End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all Azure Functions runtime languages.
>Functions version 1.x is still supported for C# function apps that require the .NET Framework. Preview support is now available in Functions 4.x to [run C# functions on .NET Framework 4.8](dotnet-isolated-process-guide.md#supported-versions).
The following table indicates which programming languages are currently supporte
## <a name="creating-1x-apps"></a>Run on a specific version
-By default, function apps created in the Azure portal and by the Azure CLI are set to version 4.x. You can modify this version if needed. You can only downgrade the runtime version to 1.x after you create your function app but before you add any functions. Moving to a later version is allowed even with apps that have existing functions. When your app has existing functions, be aware of any breaking changes between versions before moving to a later runtime version. The following sections detail breaking changes between versions, including language-specific breaking changes.
+The version of the Functions runtime used by published apps in Azure is dictated by the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) application setting. In some cases and for certain languages, other settings may apply.
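To check which runtime version a published app currently targets, you can query that app setting. The following Azure CLI sketch uses placeholder resource names:

```azurecli
az functionapp config appsettings list -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --query "[?name=='FUNCTIONS_EXTENSION_VERSION']"
```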
-+ [Between 3.x and 4.x](#breaking-changes-between-3x-and-4x)
-+ [Between 2.x and 3.x](#breaking-changes-between-2x-and-3x)
-+ [Between 1.x and later versions](#migrating-from-1x-to-later-versions)
+By default, function apps created in the Azure portal, by the Azure CLI, or from Visual Studio tools are set to version 4.x. You can modify this version if needed. You can only downgrade the runtime version to 1.x after you create your function app but before you add any functions. Moving to a later version is allowed even with apps that have existing functions.
-If you don't see your programming language, go select it from the [top of the page](#top).
+### Migrating existing function apps
-Before making a change to the major version of the runtime, you should first test your existing code on the new runtime version. You can verify your app runs correctly after the upgrade by deploying to another function app running on the latest major version. You can also verify your code locally by using the runtime-specific version of the [Azure Functions Core Tools](functions-run-local.md), which includes the Functions runtime.
+When your app has existing functions, you must take precautions before moving to a later runtime version. The following articles detail breaking changes between versions, including language-specific breaking changes. They also provide you with step-by-step instructions for a successful migration of your existing function app.
-Downgrades to v2.x aren't supported. When possible, you should always run your apps on the latest supported version of the Functions runtime.
++ [Migrate from runtime version 3.x to version 4.x](./migrate-version-3-version-4.md)
++ [Migrate from runtime version 1.x to version 4.x](./migrate-version-1-version-4.md)

### Changing version of apps in Azure
-The version of the Functions runtime used by published apps in Azure is dictated by the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) application setting. The following major runtime version values are supported:
+The following major runtime version values are supported:
| Value | Runtime target |
| --- | --- |
| `~4` | 4.x |
| `~3` | 3.x |
-| `~2` | 2.x |
| `~1` | 1.x |

>[!IMPORTANT]
-> Don't arbitrarily change this app setting, because other app setting changes and changes to your function code may be required. You should instead change this setting in the **Function runtime settings** tab of the function app **Configuration** in the Azure portal when you are ready to make a major version upgrade.
-
-To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
+> Don't arbitrarily change this app setting, because other app setting changes and changes to your function code may be required. You should instead change this setting in the **Function runtime settings** tab of the function app **Configuration** in the Azure portal when you are ready to make a major version upgrade. For existing function apps, [follow the migration instructions](#migrating-existing-function-apps).
### Pinning to a specific minor version
If you receive a warning about your extension bundle version not meeting a minim
To learn more about extension bundles, see [Extension bundles](functions-bindings-register.md#extension-bundles). ::: zone-end
-## <a name="migrating-from-3x-to-4x"></a>Migrating from 3.x to 4.x
-
-Azure Functions version 4.x is highly backwards compatible to version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. An upgrade is initiated when you set the `FUNCTIONS_EXTENSION_VERSION` app setting to a value of `~4`. For function apps running on Windows, you also need to set the `netFrameworkVersion` site setting to target .NET 6.
-
-Before you upgrade your app to version 4.x of the Functions runtime, you should do the following tasks:
-
-* Review the list of [breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x).
-* [Run the pre-upgrade validator](#run-the-pre-upgrade-validator).
-* When possible, [upgrade your local project environment to version 4.x](#upgrade-your-local-project). Fully test your app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md). When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio upgrade the function app to version 4.x during deployment. This upgrade uses the same process defined in [Migrate without slots](#migrate-without-slots).
-* Consider using a [staging slot](functions-deployment-slots.md) to test and verify your app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#migrate-using-slots).
-
-### Run the pre-upgrade validator
-
-Azure Functions provides a pre-upgrade validator to help you identify potential issues when migrating your function app to 4.x. To run the pre-upgrade validator:
-
-1. In the [Azure portal](https://portal.azure.com), navigate to your function app.
-
-1. Open the **Diagnose and solve problems** page.
-
-1. In **Function App Diagnostics**, start typing `Functions 4.x Pre-Upgrade Validator` and then choose it from the list.
-
-1. After validation completes, review the recommendations and address any issues in your app. If you need to make changes to your app, make sure to validate the changes against version 4.x of the Functions runtime, either [locally using Azure Functions Core Tools v4](#upgrade-your-local-project) or by [using a staging slot](#migrate-using-slots).
-
-### Migrate without slots
-
-The simplest way to upgrade to v4.x is to set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` on your function app in Azure. You must follow a [different procedure](#migrate-using-slots) on a site with slots.
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Update-AzFunctionAppSetting -AppSetting @{FUNCTIONS_EXTENSION_VERSION = "~4"} -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -Force
-```
---
-# [Windows](#tab/windows/azure-cli)
-
-When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
-
-```azurecli
-az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
-```
-
-.NET 6 is required for function apps in any language running on Windows.
-
-# [Windows](#tab/windows/azure-powershell)
-
-When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
-
-```azurepowershell
-Set-AzWebApp -NetFrameworkVersion v6.0 -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME>
-```
-
-.NET 6 is required for function apps in any language running on Windows.
-
-# [Linux](#tab/linux/azure-cli)
-
-When running .NET apps on Linux, you also need to update the `linuxFxVersion` site setting for .NET 6.0.
-
-```azurecli
-az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
-```
-
-# [Linux](#tab/linux/azure-powershell)
-
-When running .NET apps on Linux, you also need to update the `linuxFxVersion` site setting. Unfortunately, Azure PowerShell can't be used to set the `linuxFxVersion` at this time. Use the Azure CLI instead.
---
-In this example, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
-
-### Migrate using slots
-
-Using [deployment slots](functions-deployment-slots.md) is a good way to migrate your function app to the v4.x runtime from a previous version. By using a staging slot, you can run your app on the new runtime version in the staging slot and switch to production after verification. Slots also provide a way to minimize downtime during upgrade. If you need to minimize downtime, follow the steps in [Minimum downtime upgrade](#minimum-downtime-upgrade).
-
-After you've verified your app in the upgraded slot, you can swap the app and new version settings into production. This swap requires setting [`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0`](functions-app-settings.md#website_override_sticky_extension_versions) in the production slot. How you add this setting affects the amount of downtime required for the upgrade.
-
-#### Standard upgrade
-
-If your slot-enabled function app can handle the downtime of a full restart, you can update the `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` setting directly in the production slot. Because changing this setting directly in the production slot causes a restart that impacts availability, consider doing this change at a time of reduced traffic. You can then swap in the upgraded version from the staging slot.
-
-The [`Update-AzFunctionAppSetting`](/powershell/module/az.functions/update-azfunctionappsetting) PowerShell cmdlet doesn't currently support slots. You must use Azure CLI or the Azure portal.
-
-1. Use the following command to set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` in the production slot:
-
- ```azurecli
- az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
- ```
- This command causes the app running in the production slot to restart.
-
-1. Use the following command to also set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` in the staging slot:
-
- ```azurecli
- az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
-1. Use the following command to change `FUNCTIONS_EXTENSION_VERSION` and upgrade the staging slot to the new runtime version:
-
- ```azurecli
- az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
-1. Version 4.x of the Functions runtime requires .NET 6 in Windows. On Linux, .NET apps must also upgrade to .NET 6. Use the following command so that the runtime can run on .NET 6:
-
- # [Windows](#tab/windows)
-
- When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
-
- ```azurecli
- az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
- ```
-
- .NET 6 is required for function apps in any language running on Windows.
-
- # [Linux](#tab/linux/azure-cli)
-
- When running .NET functions on Linux, you also need to update the `linuxFxVersion` site setting for .NET 6.0.
-
- ```azurecli
- az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
- ```
-
-
-
- In this example, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
-
-1. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot now.
-
-1. Confirm that your function app runs correctly in the upgraded staging environment before swapping.
-
-1. Use the following command to swap the upgraded staging slot to production:
-
- ```azurecli
- az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
- ```
-
-#### Minimum downtime upgrade
-
-To minimize the downtime in your production app, you can swap the `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` setting from the staging slot into production. After that, you can swap in the upgraded version from a prewarmed staging slot.
-
-1. Use the following command to set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` in the staging slot:
-
- ```azurecli
- az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-1. Use the following commands to swap the slot with the new setting into production, and at the same time restore the version setting in the staging slot.
-
- ```azurecli
- az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
- az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~3 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
-   You may see errors from the staging slot during the time between the swap and the runtime version being restored on staging. This can happen because having `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` only in staging during a swap removes the `FUNCTIONS_EXTENSION_VERSION` setting in staging. Without the version setting, your slot is in a bad state. Updating the version in the staging slot right after the swap should put the slot back into a good state, and you can roll back your changes if needed. However, any rollback of the swap also requires you to directly remove `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` from production before the swap back to prevent the same errors in production seen in staging. This change in the production setting would then cause a restart.
-
-1. Use the following command to again set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` in the staging slot:
-
- ```azurecli
- az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
- At this point, both slots have `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` set.
-
-1. Use the following command to change `FUNCTIONS_EXTENSION_VERSION` and upgrade the staging slot to the new runtime version:
-
- ```azurecli
- az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
- ```
-
-1. Version 4.x of the Functions runtime requires .NET 6 in Windows. On Linux, .NET apps must also upgrade to .NET 6. Use the following command so that the runtime can run on .NET 6:
-
- # [Windows](#tab/windows)
-
- When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
-
- ```azurecli
- az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
- ```
-
- .NET 6 is required for function apps in any language running on Windows.
-
- # [Linux](#tab/linux/azure-cli)
-
- When running .NET functions on Linux, you also need to update the `linuxFxVersion` site setting for .NET 6.0.
-
- ```azurecli
- az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
- ```
-
-
-
- In this example, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
-
-1. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot now.
-
-1. Confirm that your function app runs correctly in the upgraded staging environment before swapping.
-
-1. Use the following command to swap the upgraded and prewarmed staging slot to production:
-
- ```azurecli
- az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
- ```
-
-### Upgrade your local project
-
-Upgrading instructions are language dependent. If you don't see your language, choose it from the switcher at the [top of the article](#top).
-
-To update a C# class library project to .NET 6 and Azure Functions 4.x:
-
-1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.
-
-1. Update the `TargetFramework` and `AzureFunctionsVersion`, as follows:
-
- ```xml
- <TargetFramework>net6.0</TargetFramework>
- <AzureFunctionsVersion>v4</AzureFunctionsVersion>
- ```
-
-1. Update the NuGet packages referenced by your app to the latest versions. For more information, see [breaking changes](#breaking-changes-between-3x-and-4x).
- Specific packages depend on whether your functions run in-process or out-of-process.
-
- # [In-process](#tab/in-process)
-
- * [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) 4.0.0 or later
-
- # [Isolated process](#tab/isolated-process)
-
- * [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/) 1.5.2 or later
- * [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/) 1.2.0 or later
-
-
-To update your project to Azure Functions 4.x:
-
-1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.x.
-
-1. Update your app's [Azure Functions extensions bundle](functions-bindings-register.md#extension-bundles) to 2.x or above. For more information, see [breaking changes](#breaking-changes-between-3x-and-4x).
-
-1. If you're using Node.js version 10 or 12, move to one of the [supported version](functions-reference-node.md#node-version).
-1. If you're using PowerShell Core 6, move to one of the [supported versions](functions-reference-powershell.md#powershell-versions).
-1. If you're using Python 3.6, move to one of the [supported versions](functions-reference-python.md#python-version).
-
-### Breaking changes between 3.x and 4.x
-
-The following are key breaking changes to be aware of before upgrading a 3.x app to 4.x, including language-specific breaking changes. For a full list, see Azure Functions GitHub issues labeled [*Breaking Change: Approved*](https://github.com/Azure/azure-functions/issues?q=is%3Aissue+label%3A%22Breaking+Change%3A+Approved%22+is%3A%22closed+OR+open%22). More changes are expected during the preview period. Subscribe to [App Service Announcements](https://github.com/Azure/app-service-announcements/issues) for updates.
-
-If you don't see your programming language, go select it from the [top of the page](#top).
-
-#### Runtime
-- Azure Functions proxies is a legacy feature for versions 1.x through 3.x of the Azure Functions runtime. Support for Functions proxies is being returned in version 4.x so that you can successfully upgrade your function apps to the latest runtime version. As soon as possible, you should instead switch to integrating your function apps with Azure API Management. API Management lets you take advantage of a more complete set of features for defining, securing, managing, and monetizing your Functions-based APIs. For more information, see [API Management integration](functions-proxies.md#api-management-integration). For information about the pending return of proxies in version 4.x, [Monitor the App Service announcements page](https://github.com/Azure/app-service-announcements/issues).
-
-- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You should instead use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
-
-- Azure Functions 4.x now enforces [minimum version requirements for extensions](#minimum-extension-versions). Upgrade to the latest version of affected extensions. For non-.NET languages, [upgrade](./functions-bindings-register.md#extension-bundles) to extension bundle version 2.x or later. ([#1987](https://github.com/Azure/Azure-Functions/issues/1987))
-
-- Default and maximum timeouts are now enforced in 4.x for function apps running on Linux in a Consumption plan. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
-
-- Azure Functions 4.x uses `Azure.Identity` and `Azure.Security.KeyVault.Secrets` for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. For more information about how to configure function app settings, see the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories). ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
-
-- Function apps that share storage accounts now fail to start when their host IDs are the same. For more information, see [Host ID considerations](storage-considerations.md#host-id-considerations). ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))
-
-- Azure Functions 4.x supports .NET 6 in-process and isolated apps.
-
-- `InvalidHostServicesException` is now a fatal error. ([#2045](https://github.com/Azure/Azure-Functions/issues/2045))
-
-- `EnableEnhancedScopes` is enabled by default. ([#1954](https://github.com/Azure/Azure-Functions/issues/1954))
-
-- Remove `HttpClient` as a registered service. ([#1911](https://github.com/Azure/Azure-Functions/issues/1911))
-
-- Use single class loader in Java 11. ([#1997](https://github.com/Azure/Azure-Functions/issues/1997))
-
-- Stop loading worker jars in Java 8. ([#1991](https://github.com/Azure/Azure-Functions/issues/1991))
-
-- Node.js versions 10 and 12 aren't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
-
-- Output serialization in Node.js apps was updated to address previous inconsistencies. ([#2007](https://github.com/Azure/Azure-Functions/issues/2007))
-
-- PowerShell 6 isn't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
-
-- Default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
-
-- Python 3.6 isn't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
-
-- Shared memory transfer is enabled by default. ([#1973](https://github.com/Azure/Azure-Functions/issues/1973))
-
-- Default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
-
-## Migrating from 2.x to 3.x
-
-Azure Functions version 3.x is highly backwards compatible to version 2.x. Many apps can safely upgrade to 3.x without any code changes. While moving to 3.x is encouraged, run extensive tests before changing the major version in production apps.
-
-### Breaking changes between 2.x and 3.x
-
-The following are the language-specific changes to be aware of before upgrading a 2.x app to 3.x. If you don't see your programming language, go select it from the [top of the page](#top).
-
-The main differences between versions when running .NET class library functions is the .NET Core runtime. Functions version 2.x is designed to run on .NET Core 2.2 and version 3.x is designed to run on .NET Core 3.1.
-
-* [Synchronous server operations are disabled by default](/dotnet/core/compatibility/2.2-3.0#http-synchronous-io-disabled-in-all-servers).
-
-* Breaking changes introduced by .NET Core in [version 3.1](/dotnet/core/compatibility/3.1) and [version 3.0](/dotnet/core/compatibility/3.0), which aren't specific to Functions but might still affect your app.
-
->[!NOTE]
->Due to support issues with .NET Core 2.2, function apps pinned to version 2 (`~2`) are essentially running on .NET Core 3.1. To learn more, see [Functions v2.x compatibility mode](functions-dotnet-class-library.md#functions-v2x-considerations).
--
-* Output bindings assigned through 1.x `context.done` or return values now behave the same as setting in 2.x+ `context.bindings`.
-
-* Timer trigger object is camelCase instead of PascalCase
-
-* Event hub triggered functions with `dataType` binary will receive an array of `binary` instead of `string`.
-
-* The HTTP request payload can no longer be accessed via `context.bindingData.req`. It can still be accessed as an input parameter, `context.req`, and in `context.bindings`.
-
-* Node.js 8 is no longer supported and won't execute in 3.x functions.
-
-## Migrating from 1.x to later versions
-
-You may choose to migrate an existing app written to use the version 1.x runtime to instead use a newer version. Most of the changes you need to make are related to changes in the language runtime, such as C# API changes between .NET Framework 4.8 and .NET Core. You'll also need to make sure your code and libraries are compatible with the language runtime you choose. Finally, be sure to note any changes in trigger, bindings, and features highlighted below. For the best migration results, you should create a new function app in a new version and port your existing version 1.x function code to the new app.
-
-While it's possible to do an "in-place" upgrade by manually updating the app configuration, going from 1.x to a higher version includes some breaking changes. For example, in C#, the debugging object is changed from `TraceWriter` to `ILogger`. By creating a new version 3.x project, you start off with updated functions based on the latest version 3.x templates.
-
-### Changes in triggers and bindings after version 1.x
-
-Starting with version 2.x, you must install the extensions for specific triggers and bindings used by the functions in your app. The only exceptions to this are HTTP and timer triggers, which don't require an extension. For more information, see [Register and install binding extensions](./functions-bindings-register.md).
-
-There are also a few changes in the *function.json* or attributes of the function between versions. For example, the Event Hubs `path` property is now `eventHubName`. See the [existing binding table](#bindings) for links to documentation for each binding.
-
-### Changes in features and functionality after version 1.x
-
-A few features were removed, updated, or replaced after version 1.x. This section details the changes you see in later versions after having used version 1.x.
-
-In version 2.x, the following changes were made:
-
-* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure Files by default. When you upgrade an app from version 1.x to version 2.x, existing secrets that are in Azure Files are reset.
-
-* The version 2.x runtime doesn't include built-in support for webhook providers. This change was made to improve performance. You can still use HTTP triggers as endpoints for webhooks.
-
-* The host configuration file (host.json) should be empty or have the string `"version": "2.0"`.
-
-* To improve monitoring, the WebJobs dashboard in the portal, which used the [`AzureWebJobsDashboard`](functions-app-settings.md#azurewebjobsdashboard) setting is replaced with Azure Application Insights, which uses the [`APPINSIGHTS_INSTRUMENTATIONKEY`](functions-app-settings.md#appinsights_instrumentationkey) setting. For more information, see [Monitor Azure Functions](functions-monitoring.md).
-
-* All functions in a function app must share the same language. When you create a function app, you must choose a runtime stack for the app. The runtime stack is specified by the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) value in application settings. This requirement was added to improve footprint and startup time. When developing locally, you must also include this setting in the [local.settings.json file](functions-develop-local.md#local-settings-file).
-
-* The default timeout for functions in an App Service plan is changed to 30 minutes. You can manually change the timeout back to unlimited by using the [functionTimeout](functions-host-json.md#functiontimeout) setting in host.json.
-
-* HTTP concurrency throttles are implemented by default for Consumption plan functions, with a default of 100 concurrent requests per instance. You can change this behavior in the [`maxConcurrentRequests`](functions-host-json.md#http) setting in the host.json file.
-
-* Because of [.NET Core limitations](https://github.com/Azure/azure-functions-host/issues/3414), support for F# script (`.fsx` files) functions has been removed. Compiled F# functions (.fs) are still supported.
-
-* The URL format of Event Grid trigger webhooks has been changed to follow this pattern: `https://{app}/runtime/webhooks/{triggerName}`.
-
-### Locally developed application versions
+## Locally developed application versions
You can make the following updates to function apps to locally change the targeted versions.
-#### Visual Studio runtime versions
+### Visual Studio runtime versions
In Visual Studio, you select the runtime version when you create a project. Azure Functions tools for Visual Studio supports the three major runtime versions. The correct version is used when debugging and publishing based on project settings. The version settings are defined in the `.csproj` file in the following properties:
In Visual Studio, you select the runtime version when you create a project. Azur
<AzureFunctionsVersion>v4</AzureFunctionsVersion>
```
-You can also choose `net6.0`, `net7.0`, or `net48` as the target framework if you are using [.NET isolated process functions](dotnet-isolated-process-guide.md). Support for `net7.0` and `net48` is currently in preview.
+You can also choose `net6.0`, `net7.0`, or `net48` as the target framework if you are using [.NET isolated worker process functions](dotnet-isolated-process-guide.md). Support for `net7.0` and `net48` is currently in preview.
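As an illustration, a minimal sketch of the `PropertyGroup` for an isolated worker process project targeting .NET 7 might look like the following:

```xml
<PropertyGroup>
  <TargetFramework>net7.0</TargetFramework>
  <AzureFunctionsVersion>v4</AzureFunctionsVersion>
  <OutputType>Exe</OutputType>
</PropertyGroup>
```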
> [!NOTE]
> Azure Functions 4.x requires the `Microsoft.NET.Sdk.Functions` extension to be at least `4.0.0`.
You can also choose `net6.0`, `net7.0`, or `net48` as the target framework if yo
<AzureFunctionsVersion>v3</AzureFunctionsVersion>
```
-You can also choose `net5.0` as the target framework if you're using [.NET isolated process functions](dotnet-isolated-process-guide.md).
+You can also choose `net5.0` as the target framework if you're using [.NET isolated worker process functions](dotnet-isolated-process-guide.md).
> [!NOTE]
> Azure Functions 3.x and .NET require the `Microsoft.NET.Sdk.Functions` extension to be at least `3.0.0`.
You can also choose `net5.0` as the target framework if you're using [.NET isola
```
-###### Updating 2.x apps to 3.x in Visual Studio
-
-You can open an existing function targeting 2.x and move to 3.x by editing the `.csproj` file and updating the values above. Visual Studio manages runtime versions automatically for you based on project metadata. However, it's possible if you've never created a 3.x app before that Visual Studio doesn't yet have the templates and runtime for 3.x on your machine. This issue may present itself with an error like "no Functions runtime available that matches the version specified in the project." To fetch the latest templates and runtime, go through the experience to create a new function project. When you get to the version and template select screen, wait for Visual Studio to complete fetching the latest templates. After the latest .NET Core 3 templates are available and displayed, you can run and debug any project configured for version 3.x.
-
-> [!IMPORTANT]
-> Version 3.x functions can only be developed in Visual Studio if using Visual Studio version 16.4 or newer.
-
-#### VS Code and Azure Functions Core Tools
+### VS Code and Azure Functions Core Tools
-[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. To develop against version 3.x, install version 3.x of the Core Tools. Version 2.x development requires version 2.x of the Core Tools, and so on. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
+[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. To develop against version 4.x, install version 4.x of the Core Tools. Version 3.x development requires version 3.x of the Core Tools, and so on. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
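For example, one way to install version 4.x of the Core Tools is through npm; other installers are also available, as described in the linked article:

```console
npm install -g azure-functions-core-tools@4 --unsafe-perm true
```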
For Visual Studio Code development, you may also need to update the `azureFunctions.projectRuntime` user setting to match the version of the tools installed. This setting also updates the templates and languages used during function app creation. To create apps in `~3`, you update the `azureFunctions.projectRuntime` user setting to `~3`.

![Azure Functions extension runtime setting](./media/functions-versions/vs-code-version-runtime.png)
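In your VS Code `settings.json`, that user setting is a single entry; the following sketch targets version 4.x:

```json
{
  "azureFunctions.projectRuntime": "~4"
}
```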
-#### Maven and Java apps
-
-You can migrate Java apps from version 2.x to 3.x by [installing the 3.x version of the core tools](functions-run-local.md#install-the-azure-functions-core-tools) required to run locally. After verifying that your app works correctly running locally on version 3.x, update the app's `POM.xml` file to modify the `FUNCTIONS_EXTENSION_VERSION` setting to `~3`, as in the following example:
-
-```xml
-<configuration>
- <resourceGroup>${functionResourceGroup}</resourceGroup>
- <appName>${functionAppName}</appName>
- <region>${functionAppRegion}</region>
- <appSettings>
- <property>
- <name>WEBSITE_RUN_FROM_PACKAGE</name>
- <value>1</value>
- </property>
- <property>
- <name>FUNCTIONS_EXTENSION_VERSION</name>
- <value>~3</value>
- </property>
- </appSettings>
-</configuration>
-```
-
## Bindings

Starting with version 2.x, the runtime uses a new [binding extensibility model](https://github.com/Azure/azure-webjobs-sdk-extensions/wiki/Binding-Extensions-Overview) that offers these advantages:
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
+
+ Title: Migrate apps from Azure Functions version 1.x to 4.x
+description: This article shows you how to upgrade your existing function apps running on version 1.x of the Azure Functions runtime to be able to run on version 4.x of the runtime.
++ Last updated : 11/05/2022+
+zone_pivot_groups: programming-languages-set-functions
++
+# Migrate apps from Azure Functions version 1.x to version 4.x
+
+> [!IMPORTANT]
+> Java isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Java app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+> [!IMPORTANT]
+> TypeScript isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your TypeScript app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+> [!IMPORTANT]
+> PowerShell isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your PowerShell app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+> [!IMPORTANT]
+> Python isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Python app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+If you're running on version 1.x of the Azure Functions runtime, it's likely because your C# app requires .NET Framework. Version 4.x of the runtime now lets you run .NET Framework 4.8 apps. At this point, you should consider migrating your version 1.x function apps to run on version 4.x. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md).
+
+Migrating a C# function app from version 1.x to version 4.x of the Functions runtime requires you to make changes to your project code. Many of these changes are a result of changes in the C# language and .NET APIs. JavaScript apps generally don't require code changes to migrate.
+
+You can upgrade your C# project to one of the following versions of .NET, all of which can run on Functions version 4.x:
+
+| .NET version | Process model<sup>*</sup> |
+| --- | --- |
+| .NET 7 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+| .NET 6 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+| .NET 6 | [In-process](./functions-dotnet-class-library.md) |
+| .NET&nbsp;Framework&nbsp;4.8 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+
+<sup>*</sup> [In-process execution](./functions-dotnet-class-library.md) is only supported for Long Term Support (LTS) releases of .NET. Non-LTS releases and .NET Framework require you to run in an [isolated worker process](./dotnet-isolated-process-guide.md). For a feature and functionality comparison between the two process models, see [Differences between in-process and isolated worker process .NET Azure Functions](./dotnet-isolated-in-process-differences.md).
+This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime.
+
+## Prepare for migration
+
+Before you upgrade your app to version 4.x of the Functions runtime, you should do the following tasks:
+
+* Review the list of [behavior changes after version 1.x](#behavior-changes-after-version-1x). Migrating from version 1.x to version 4.x can also affect bindings.
+* Review [Update your project files](#update-your-project-files) and decide which version of .NET you want to migrate to. Complete the steps to migrate your local project to your chosen version of .NET.
+* Complete the steps in [Update your project files](#update-your-project-files) to migrate your local project to run locally on version 4.x and a supported version of Node.js.
+* After migrating your local project, fully test the app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md), as sketched in the commands after this list.
+
+* Upgrade your function app in Azure to the new version. If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your migrated app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#upgrade-using-slots).
+* Republish your migrated project to the upgraded function app. When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio upgrade the function app to version 4.x during deployment. This upgrade uses the same process defined in [Migrate without slots](#upgrade-without-slots).
+## Update your project files
+
+The following sections describe the updates you must make to your C# project files to be able to run on one of the supported versions of .NET in Functions version 4.x. The updates shown are common to most projects. Your project code may require updates not mentioned in this article, especially when using custom NuGet packages.
+
+Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).
+
+### .csproj file
+
+The following example is a .csproj project file that runs on version 1.x:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net48</TargetFramework>
+ <AzureFunctionsVersion>v1</AzureFunctionsVersion>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.24" />
+ </ItemGroup>
+ <ItemGroup>
+ <Reference Include="Microsoft.CSharp" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+</Project>
+```
+
+Use one of the following procedures to update this XML file to run in Functions version 4.x:
+
+# [.NET Framework 4.8](#tab/v4)
+
+The following changes are required in the .csproj XML project file:
+
+1. Change the value of `PropertyGroup`.`AzureFunctionsVersion` to `v4`.
+
+1. Add the following `OutputType` element to the `PropertyGroup`:
+
+ :::code language="xml" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/Company.FunctionApp.csproj" range="5-5":::
+
+1. Replace the existing `ItemGroup`.`PackageReference` with the following `ItemGroup`:
+
+ :::code language="xml" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/Company.FunctionApp.csproj" range="12-15":::
+
+1. Add the following new `ItemGroup`:
+
+ :::code language="xml" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/Company.FunctionApp.csproj" range="31-33":::
+
+After you make these changes, your updated project should look like the following example:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net48</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ <RootNamespace>My.Namespace</RootNamespace>
+ <OutputType>Exe</OutputType>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.8.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.7.0" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+ <ItemGroup>
+ <Folder Include="Properties\" />
+ </ItemGroup>
+</Project>
+```
+
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 7](#tab/net7)
++++
+### program.cs file
+
+In most cases, migrating requires you to add the following program.cs file to your project:
+
+# [.NET Framework 4.8](#tab/v4)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+A program.cs file isn't required when running in-process.
+
+# [.NET 7](#tab/net7)
++++
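+The tab contents above are collapsed in this view. As a rough sketch for the isolated worker process model, assuming the `Microsoft.Azure.Functions.Worker` packages referenced earlier and a placeholder namespace, a minimal program.cs might look like the following:
+
+```csharp
+using Microsoft.Extensions.Hosting;
+
+// Namespace is a placeholder for illustration.
+namespace Company.FunctionApp
+{
+    public class Program
+    {
+        public static void Main()
+        {
+            // Build and run the host that executes your functions
+            // in the isolated worker process.
+            var host = new HostBuilder()
+                .ConfigureFunctionsWorkerDefaults()
+                .Build();
+
+            host.Run();
+        }
+    }
+}
+```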
+### host.json file
+
+Settings in the host.json file apply at the function app level, both locally and in Azure. In version 1.x, your host.json file is either empty or it contains some settings that apply to all functions in the function app. For more information, see [Host.json v1](./functions-host-json-v1.md). If your host.json file has setting values, review the [host.json v2 format](./functions-host-json.md) for any changes.
+
+To run on version 4.x, you must add `"version": "2.0"` to the host.json file. You should also consider adding `logging` to your configuration, as in the following examples:
+
+# [.NET Framework 4.8](#tab/v4)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 7](#tab/net7)
++++
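+The tab contents are collapsed in this view. As a sketch of the v2 format (the logging values shown are illustrative, not required), an upgraded host.json might look like this:
+
+```json
+{
+  "version": "2.0",
+  "logging": {
+    "applicationInsights": {
+      "samplingSettings": {
+        "isEnabled": true,
+        "excludedTypes": "Request"
+      }
+    }
+  }
+}
+```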
+### local.settings.json file
+
+The local.settings.json file is only used when running locally. For information, see [Local settings file](functions-develop-local.md#local-settings-file). In version 1.x, the local.settings.json file has only two required values:
++
+When you upgrade to version 4.x, make sure that your local.settings.json file has at least the following elements:
+
+# [.NET Framework 4.8](#tab/v4)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 7](#tab/net7)
++++
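+As an illustrative sketch, a minimal local.settings.json for a migrated C# app might look like the following; use `dotnet` for the in-process model or `dotnet-isolated` for the isolated worker process model:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
+  }
+}
+```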
+### Namespace changes
+
+C# functions that run in an isolated worker process use libraries in a different namespace than the libraries used in version 1.x. In-process functions use libraries in the same namespace.
+
+Version 1.x and in-process libraries are generally in the namespace `Microsoft.Azure.WebJobs.*`. Isolated worker process function apps use libraries in the namespace `Microsoft.Azure.Functions.Worker.*`. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) that follow.
+
+### Class name changes
+
+Some key classes changed names between version 1.x and version 4.x. These changes are a result either of changes in .NET APIs or in differences between in-process and isolated worker process. The following table indicates these key .NET classes used by Azure Functions that changed after version 1.x:
+
+# [.NET Framework 4.8](#tab/v4)
+
+| Version 1.x | .NET Framework 4.8 |
+| | |
+| `FunctionName` (attribute) | `Function` (attribute) |
+| `TraceWriter` | `ILogger` |
+| `HttpRequestMessage` | `HttpRequestData` |
+| `HttpResponseMessage` | `HttpResponseData` |
+
+# [.NET 6 (isolated)](#tab/net6-isolated)
+
+| Version 1.x | .NET 6 (isolated) |
+| | |
+| `FunctionName` (attribute) | `Function` (attribute) |
+| `TraceWriter` | `ILogger` |
+| `HttpRequestMessage` | `HttpRequestData` |
+| `HttpResponseMessage` | `HttpResponseData` |
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+| Version 1.x | .NET 6 (in-process) |
+| | |
+| `FunctionName` (attribute) | `FunctionName` (attribute) |
+| `TraceWriter` | `ILogger` |
+| `HttpRequestMessage` | `HttpRequest` |
+| `HttpResponseMessage` | `OkObjectResult` |
+
+# [.NET 7](#tab/net7)
+
+| Version 1.x | .NET 7 |
+| | |
+| `FunctionName` (attribute) | `Function` (attribute) |
+| `TraceWriter` | `ILogger` |
+| `HttpRequestMessage` | `HttpRequestData` |
+| `HttpResponseMessage` | `HttpResponseData` |
+++
+There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
+
+### HTTP trigger template
+
+Most of the code changes between version 1.x and version 4.x can be seen in HTTP triggered functions. The HTTP trigger template for version 1.x looks like the following example:
+
+```csharp
+using System.Linq;
+using System.Net;
+using System.Net.Http;
+using System.Threading.Tasks;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Host;
+
+namespace Company.Function
+{
+ public static class HttpTriggerCSharp
+ {
+ [FunctionName("HttpTriggerCSharp")]
+ public static async Task<HttpResponseMessage>
+            Run([HttpTrigger(AuthorizationLevel.Function, "get", "post",
+ Route = null)]HttpRequestMessage req, TraceWriter log)
+ {
+ log.Info("C# HTTP trigger function processed a request.");
+
+ // parse query parameter
+ string name = req.GetQueryNameValuePairs()
+ .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
+ .Value;
+
+ if (name == null)
+ {
+ // Get request body
+ dynamic data = await req.Content.ReadAsAsync<object>();
+ name = data?.name;
+ }
+
+ return name == null
+ ? req.CreateResponse(HttpStatusCode.BadRequest,
+ "Please pass a name on the query string or in the request body")
+ : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
+ }
+ }
+}
+```
+
+In version 4.x, the HTTP trigger template looks like the following example:
+
+# [.NET Framework 4.8](#tab/v4)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 7](#tab/net7)
+++
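+The tab contents are collapsed in this view. As a sketch of the isolated worker process variant (class and function names carried over from the version 1.x example; the response text is illustrative), the migrated trigger might look like this:
+
+```csharp
+using System.Net;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Extensions.Logging;
+
+namespace Company.Function
+{
+    public class HttpTriggerCSharp
+    {
+        private readonly ILogger _logger;
+
+        public HttpTriggerCSharp(ILoggerFactory loggerFactory)
+        {
+            _logger = loggerFactory.CreateLogger<HttpTriggerCSharp>();
+        }
+
+        // The Function attribute and HttpRequestData/HttpResponseData types
+        // replace FunctionName, HttpRequestMessage, and HttpResponseMessage.
+        [Function("HttpTriggerCSharp")]
+        public HttpResponseData Run(
+            [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req)
+        {
+            _logger.LogInformation("C# HTTP trigger function processed a request.");
+
+            var response = req.CreateResponse(HttpStatusCode.OK);
+            response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
+            response.WriteString("Welcome to Azure Functions!");
+
+            return response;
+        }
+    }
+}
+```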
+## Update your project files
+
+To update your project to Azure Functions 4.x:
+
+1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.x.
+
+1. Move to one of the [Node.js versions supported on version 4.x](functions-reference-node.md#node-version).
+
+1. Add both `version` and `extensionBundle` elements to the host.json, so that it looks like the following example:
+
+ [!INCLUDE [functions-extension-bundles-json-v3](../../includes/functions-extension-bundles-json-v3.md)]
+
+    The `extensionBundle` element is required because after version 1.x, bindings are maintained as external packages. For more information, see [Extension bundles](functions-bindings-register.md#extension-bundles).
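+
+    The include above is collapsed in this view. As a sketch, a host.json that references a version 3.x extension bundle typically looks like the following (the exact version range is an assumption and may differ from the include):
+
+    ```json
+    {
+      "version": "2.0",
+      "extensionBundle": {
+        "id": "Microsoft.Azure.Functions.ExtensionBundle",
+        "version": "[3.*, 4.0.0)"
+      }
+    }
+    ```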
+
+1. Update your local.settings.json file so that it has at least the following elements:
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ "FUNCTIONS_WORKER_RUNTIME": "node"
+ }
+ }
+ ```
+
+    The `AzureWebJobsStorage` setting can be either the Azurite storage emulator or an actual Azure storage account. For more information, see [Local storage emulator](functions-develop-local.md#local-storage-emulator).
+
+## Behavior changes after version 1.x
+
+This section details changes made after version 1.x in both trigger and binding behaviors as well as in core Functions features and behaviors.
+
+### Changes in triggers and bindings
+
+Starting with version 2.x, you must install the extensions for specific triggers and bindings used by the functions in your app. The only exceptions are HTTP and timer triggers, which don't require an extension. For more information, see [Register and install binding extensions](./functions-bindings-register.md).
+
+There are also a few changes in the *function.json* or attributes of the function between versions. For example, the Event Hubs `path` property is now `eventHubName`. See the [existing binding table](functions-versions.md#bindings) for links to documentation for each binding.
+
+### Changes in features and functionality
+
+A few features were removed, updated, or replaced after version 1.x. This section details the changes you see in later versions after having used version 1.x.
+
+In version 2.x, the following changes were made:
+
+* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure Files by default. When you upgrade an app from version 1.x to version 2.x, existing secrets that are in Azure Files are reset.
+
+* The version 2.x runtime doesn't include built-in support for webhook providers. This change was made to improve performance. You can still use HTTP triggers as endpoints for webhooks.
+
+* To improve monitoring, the WebJobs dashboard in the portal, which used the [`AzureWebJobsDashboard`](functions-app-settings.md#azurewebjobsdashboard) setting, is replaced with Azure Application Insights, which uses the [`APPINSIGHTS_INSTRUMENTATIONKEY`](functions-app-settings.md#appinsights_instrumentationkey) setting. For more information, see [Monitor Azure Functions](functions-monitoring.md).
+
+* All functions in a function app must share the same language. When you create a function app, you must choose a runtime stack for the app. The runtime stack is specified by the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) value in application settings. This requirement was added to improve footprint and startup time. When developing locally, you must also include this setting in the [local.settings.json file](functions-develop-local.md#local-settings-file).
+
+* The default timeout for functions in an App Service plan is changed to 30 minutes. You can manually change the timeout back to unlimited by using the [functionTimeout](functions-host-json.md#functiontimeout) setting in host.json.
+
+* HTTP concurrency throttles are implemented by default for Consumption plan functions, with a default of 100 concurrent requests per instance. You can change this behavior in the [`maxConcurrentRequests`](functions-host-json.md#http) setting in the host.json file.
+
+* Because of [.NET Core limitations](https://github.com/Azure/azure-functions-host/issues/3414), support for F# script (`.fsx` files) functions has been removed. Compiled F# functions (.fs) are still supported.
+
+* The URL format of Event Grid trigger webhooks has been changed to follow this pattern: `https://{app}/runtime/webhooks/{triggerName}`.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Functions versions](functions-versions.md)
++
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
+
+ Title: Migrate apps from Azure Functions version 3.x to 4.x
+description: This article shows you how to upgrade your existing function apps running on version 3.x of the Azure Functions runtime to be able to run on version 4.x of the runtime.
++ Last updated : 11/05/2022
+zone_pivot_groups: programming-languages-set-functions
++
+# <a name="top"></a>Migrate apps from Azure Functions version 3.x to version 4.x
+
+Azure Functions version 4.x is highly backwards compatible with version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md).
+
+This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
+
+## Choose your target .NET
+
+On version 3.x of the Functions runtime, your C# function app targets .NET Core 3.1. When you migrate your function app to version 4.x, you have the opportunity to choose the target version of .NET. You can upgrade your C# project to one of the following versions of .NET, all of which can run on Functions version 4.x:
+
+| .NET version | Process model<sup>*</sup> |
+| --- | --- |
+| .NET 7 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+| .NET 6 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+| .NET 6 | [In-process](./functions-dotnet-class-library.md) |
+
+<sup>*</sup> [In-process execution](./functions-dotnet-class-library.md) is only supported for Long Term Support (LTS) releases of .NET. Non-LTS releases and .NET Framework require you to run in an [isolated worker process](./dotnet-isolated-process-guide.md).
+
+Upgrading from .NET Core 3.1 to .NET 6 running in-process requires minimal updates to your project and virtually no updates to code. Switching to the isolated worker process model requires you to make changes to your code, but provides the flexibility of being able to easily run on any future version of .NET. For a feature and functionality comparison between the two process models, see [Differences between in-process and isolated worker process .NET Azure Functions](./dotnet-isolated-in-process-differences.md).
+
+## Prepare for migration
+
+Before you upgrade your app to version 4.x of the Functions runtime, you should do the following tasks:
+
+* Review the list of [breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x).
+* [Run the pre-upgrade validator](#run-the-pre-upgrade-validator).
+* When possible, [upgrade your local project environment to version 4.x](#upgrade-your-local-project). Fully test your app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md).
+* Upgrade your function app in Azure to the new version. If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your migrated app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#upgrade-using-slots).
+* Republish your migrated project to the upgraded function app. When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio upgrade the function app to version 4.x during deployment. This upgrade uses the same process defined in [Migrate without slots](#upgrade-without-slots).
+
+## Run the pre-upgrade validator
+
+Azure Functions provides a pre-upgrade validator to help you identify potential issues when migrating your function app to 4.x. To run the pre-upgrade validator:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your function app.
+
+1. Open the **Diagnose and solve problems** page.
+
+1. In **Function App Diagnostics**, start typing `Functions 4.x Pre-Upgrade Validator` and then choose it from the list.
+
+1. After validation completes, review the recommendations and address any issues in your app. If you need to make changes to your app, make sure to validate the changes against version 4.x of the Functions runtime, either [locally using Azure Functions Core Tools v4](#upgrade-your-local-project) or by [using a staging slot](#upgrade-using-slots).
+
+## Upgrade your local project
+
+Upgrading instructions are language dependent. If you don't see your language, choose it from the selector at the [top of the article](#top).
++
+Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).
+
+### .csproj file
+
+The following example is a .csproj project file that uses .NET Core 3.1 on version 3.x:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>netcoreapp3.1</TargetFramework>
+ <AzureFunctionsVersion>v3</AzureFunctionsVersion>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.13" />
+ </ItemGroup>
+ <ItemGroup>
+ <Reference Include="Microsoft.CSharp" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+</Project>
+```
+
+Use one of the following procedures to update this XML file to run in Functions version 4.x:
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 7](#tab/net7)
++++
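+The tab contents are collapsed in this view. As a sketch of the smallest change (the .NET 6 in-process path; the SDK package version shown is illustrative), the updated file mainly changes the target framework, the Functions version, and the SDK package:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+  <PropertyGroup>
+    <TargetFramework>net6.0</TargetFramework>
+    <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+  </PropertyGroup>
+  <ItemGroup>
+    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.0.1" />
+  </ItemGroup>
+  <!-- The remaining ItemGroup elements for host.json and local.settings.json
+       are unchanged from the version 3.x file shown above. -->
+</Project>
+```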
+### program.cs file
+
+When migrating to run in an isolated worker process, you must add the following program.cs file to your project:
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+A program.cs file isn't required when running in-process.
+
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 7](#tab/net7)
++++
+### local.settings.json file
+
+The local.settings.json file is only used when running locally. For information, see [Local settings file](functions-develop-local.md#local-settings-file). When migrating from running in-process to running in an isolated worker process, you need to change the `FUNCTIONS_WORKER_RUNTIME` value, as in the following example:
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
++
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 7](#tab/net7)
++++
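+As an illustrative sketch, after switching to the isolated worker process model the file might look like this:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
+  }
+}
+```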
+### Namespace changes
+
+C# functions that run in an isolated worker process use libraries in a different namespace than the libraries used when running in-process. In-process libraries are generally in the namespace `Microsoft.Azure.WebJobs.*`. Isolated worker process function apps use libraries in the namespace `Microsoft.Azure.Functions.Worker.*`. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) that follow.
+
+### Class name changes
+
+Some key classes change names as a result of differences between in-process and isolated worker process APIs.
+
+The following table indicates key .NET classes used by Functions that could change when migrating from in-process:
+
+| .NET Core 3.1 | .NET 6 (in-process) | .NET 6 (isolated) | .NET 7 |
+| | | | |
+| `FunctionName` (attribute) | `FunctionName` (attribute) | `Function` (attribute) | `Function` (attribute) |
+| `HttpRequest` | `HttpRequest` | `HttpRequestData` | `HttpRequestData` |
+| `OkObjectResult` | `OkObjectResult` | `HttpResponseData` | `HttpResponseData` |
+
+There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
+
+### HTTP trigger template
+
+The differences between in-process and isolated worker process can be seen in HTTP triggered functions. The HTTP trigger template for version 3.x (in-process) looks like the following example:
++
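+The template content is collapsed in this view. As a sketch, the version 3.x in-process template generally resembles the following:
+
+```csharp
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Extensions.Logging;
+using Newtonsoft.Json;
+
+namespace Company.Function
+{
+    public static class HttpTriggerCSharp
+    {
+        [FunctionName("HttpTriggerCSharp")]
+        public static async Task<IActionResult> Run(
+            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
+            ILogger log)
+        {
+            log.LogInformation("C# HTTP trigger function processed a request.");
+
+            // Read the name from the query string or the request body.
+            string name = req.Query["name"];
+            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+            dynamic data = JsonConvert.DeserializeObject(requestBody);
+            name = name ?? data?.name;
+
+            return name != null
+                ? (ActionResult)new OkObjectResult($"Hello, {name}")
+                : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
+        }
+    }
+}
+```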
+The HTTP trigger template for the migrated version looks like the following example:
+
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+Same as version 3.x (in-process).
+
+# [.NET 6 (isolated)](#tab/net6-isolated)
++
+# [.NET 7](#tab/net7)
++++
+To update your project to Azure Functions 4.x:
+
+1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.x.
+
+1. Update your app's [Azure Functions extensions bundle](functions-bindings-register.md#extension-bundles) to 2.x or above. For more information, see [breaking changes](#breaking-changes-between-3x-and-4x).
+
+3. If needed, move to one of the [Java versions supported on version 4.x](./functions-reference-java.md#supported-versions).
+4. Update the app's `POM.xml` file to modify the `FUNCTIONS_EXTENSION_VERSION` setting to `~4`, as in the following example:
+
+ ```xml
+ <configuration>
+ <resourceGroup>${functionResourceGroup}</resourceGroup>
+ <appName>${functionAppName}</appName>
+ <region>${functionAppRegion}</region>
+ <appSettings>
+ <property>
+ <name>WEBSITE_RUN_FROM_PACKAGE</name>
+ <value>1</value>
+ </property>
+ <property>
+ <name>FUNCTIONS_EXTENSION_VERSION</name>
+ <value>~4</value>
+ </property>
+ </appSettings>
+ </configuration>
+ ```
+3. If needed, move to one of the [Node.js versions supported on version 4.x](functions-reference-node.md#node-version).
+3. Take this opportunity to upgrade to PowerShell 7.2, which is recommended. For more information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
+3. If you're using Python 3.6, move to one of the [supported versions](functions-reference-python.md#python-version).
++
+## Breaking changes between 3.x and 4.x
+
+The following are key breaking changes to be aware of before upgrading a 3.x app to 4.x, including language-specific breaking changes. For a full list, see Azure Functions GitHub issues labeled [*Breaking Change: Approved*](https://github.com/Azure/azure-functions/issues?q=is%3Aissue+label%3A%22Breaking+Change%3A+Approved%22+is%3A%22closed+OR+open%22).
+
+If you don't see your programming language, go select it from the [top of the page](#top).
+
+### Runtime
+
- Azure Functions proxies is a legacy feature for versions 1.x through 3.x of the Azure Functions runtime. Support for Functions proxies is being returned in version 4.x so that you can successfully upgrade your function apps to the latest runtime version. As soon as possible, you should instead switch to integrating your function apps with Azure API Management. API Management lets you take advantage of a more complete set of features for defining, securing, managing, and monetizing your Functions-based APIs. For more information, see [API Management integration](functions-proxies.md#api-management-integration). For information about the pending return of proxies in version 4.x, monitor the [App Service announcements page](https://github.com/Azure/app-service-announcements/issues).
+
+- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You should instead use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
+
+- Azure Functions 4.x now enforces [minimum version requirements for extensions](functions-versions.md#minimum-extension-versions). Upgrade to the latest version of affected extensions. For non-.NET languages, [upgrade](./functions-bindings-register.md#extension-bundles) to extension bundle version 2.x or later. ([#1987](https://github.com/Azure/Azure-Functions/issues/1987))
+
+- Default and maximum timeouts are now enforced in 4.x for function apps running on Linux in a Consumption plan. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
+
+- Azure Functions 4.x uses `Azure.Identity` and `Azure.Security.KeyVault.Secrets` for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. For more information about how to configure function app settings, see the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories). ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
+
+- Function apps that share storage accounts now fail to start when their host IDs are the same. For more information, see [Host ID considerations](storage-considerations.md#host-id-considerations). ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))
++
+- Azure Functions 4.x supports .NET 6 in-process and isolated apps.
+
+- `InvalidHostServicesException` is now a fatal error. ([#2045](https://github.com/Azure/Azure-Functions/issues/2045))
+
+- `EnableEnhancedScopes` is enabled by default. ([#1954](https://github.com/Azure/Azure-Functions/issues/1954))
+
+- Remove `HttpClient` as a registered service. ([#1911](https://github.com/Azure/Azure-Functions/issues/1911))
+- Use single class loader in Java 11. ([#1997](https://github.com/Azure/Azure-Functions/issues/1997))
+
+- Stop loading worker jars in Java 8. ([#1991](https://github.com/Azure/Azure-Functions/issues/1991))
+
+- Node.js versions 10 and 12 aren't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
+
+- Output serialization in Node.js apps was updated to address previous inconsistencies. ([#2007](https://github.com/Azure/Azure-Functions/issues/2007))
+- Default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
+- Python 3.6 isn't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
+
+- Shared memory transfer is enabled by default. ([#1973](https://github.com/Azure/Azure-Functions/issues/1973))
+
+- Default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Functions versions](functions-versions.md)
azure-functions Openapi Apim Integrate Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/openapi-apim-integrate-visual-studio.md
In this tutorial, you learn how to:
The serverless function you create provides an API that lets you determine whether an emergency repair on a wind turbine is cost-effective. Because both the function app and API Management instance you create use consumption plans, your cost for completing this tutorial is minimal.

> [!NOTE]
-> The OpenAPI and API Management integration featured in this article is currently in preview. This method for exposing a serverless API is only supported for [in-process](functions-dotnet-class-library.md) C# class library functions. [Isolated process](dotnet-isolated-process-guide.md) C# class library functions and all other language runtimes should instead [use Azure API Management integration from the portal](functions-openapi-definition.md).
+> The OpenAPI and API Management integration featured in this article is currently in preview. This method for exposing a serverless API is only supported for [in-process](functions-dotnet-class-library.md) C# class library functions. [Isolated worker process](dotnet-isolated-process-guide.md) C# class library functions and all other language runtimes should instead [use Azure API Management integration from the portal](functions-openapi-definition.md).
## Prerequisites
The Azure Functions project template in Visual Studio creates a project that you
| Setting | Value | Description |
| --- | --- | --- |
- | **Functions worker** | **.NET 6** | This value creates a function project that runs in-process on version 4.x of the Azure Functions runtime. OpenAPI file generation is only supported for versions 3.x and 4.x of the Functions runtime, and isolated process isn't supported. |
+ | **Functions worker** | **.NET 6** | This value creates a function project that runs in-process on version 4.x of the Azure Functions runtime. OpenAPI file generation is only supported for versions 3.x and 4.x of the Functions runtime, and isolated worker process isn't supported. |
| **Function template** | **HTTP trigger with OpenAPI** | This value creates a function triggered by an HTTP request, with the ability to generate an OpenAPI definition file. |
| **Use Azurite for runtime storage account (AzureWebJobsStorage)** | **Selected** | You can use the emulator for local development of HTTP trigger functions. Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. |
| **Authorization level** | **Function** | When running in Azure, clients must provide a key when accessing the endpoint. For more information about keys and authorization, see [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). |
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/set-runtime-version.md
Title: How to target Azure Functions runtime versions description: Azure Functions supports multiple versions of the runtime. Learn how to specify the runtime version of a function app hosted in Azure. Previously updated : 10/04/2022 Last updated : 10/22/2022
The following table shows the `FUNCTIONS_EXTENSION_VERSION` values for each majo
| Major version | `FUNCTIONS_EXTENSION_VERSION` value | Additional configuration | | - | -- | - |
-| 4.x | `~4` | [On Windows, enable .NET 6](./functions-versions.md#migrating-from-3x-to-4x) |
+| 4.x | `~4` | [On Windows, enable .NET 6](./migrate-version-3-version-4.md#upgrade-your-function-app-in-azure) |
| 3.x | `~3` | |
| 2.x | `~2` | |
| 1.x | `~1` | |
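
As a sketch (the app and resource group names are placeholders), you can set this value with the Azure CLI:

```azurecli
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings FUNCTIONS_EXTENSION_VERSION=~4
```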
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 09/29/2022 Last updated : 11/04/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: September 2022*
+*Last updated: November 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; |
+| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
| [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; |
| [Azure Cosmos DB](../../cosmos-db/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** |
| [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | &#x2705; | &#x2705; |
| [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; |
+| [VM Image Builder](../../virtual-machines/image-builder-overview.md) | &#x2705; | &#x2705; |
| [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; |
| [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; |
| [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | &#x2705; | &#x2705; |
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
This article describes the kinds of Azure Monitor alerts you can create, and hel
There are five types of alerts:

- [Metric alerts](#metric-alerts)
- [Log alerts](#log-alerts)
- [Activity log alerts](#activity-log-alerts)
- [Smart detection alerts](#smart-detection-alerts)
+- [Prometheus alerts](#prometheus-alerts-preview) (preview)

## Choosing the right alert type

This table can help you decide when to use what type of alert. For more detailed information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
Prometheus alerts are based on metric values stored in [Azure Monitor managed se
- Get an [overview of alerts](alerts-overview.md). - [Create an alert rule](alerts-log.md). - Learn more about [Smart Detection](proactive-failure-diagnostics.md).+
azure-monitor Prometheus Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/prometheus-alerts.md
Last updated 09/15/2022
-# Prometheus metric alerts in Azure Monitor
+# Prometheus alerts in Azure Monitor
Prometheus alert rules allow you to define alert conditions, using queries written in Prometheus Query Language (PromQL) that are applied to Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for these metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it is fired and would trigger your actions or notifications of choice, as defined in the Azure Action Groups configured in your alert rule.

> [!NOTE]
Prometheus alert rules allow you to define alert conditions, using queries which
## Create Prometheus alert rule Prometheus alert rules are created as part of a Prometheus rule group which is stored in [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details.
-## View Prometheus metric alerts
-View fired and resolved Prometheus alerts in the Azure portal with other alert types. Use the following steps to filter on only Prometheus metric alerts.
-
+## View Prometheus alerts
+View fired and resolved Prometheus alerts in the Azure portal with other alert types. Use the following steps to filter on only Prometheus alerts.
+1. From the **Monitor** menu in the Azure portal, select **Alerts**.
+2. If **Monitoring Service** isn't displayed as a filter option, then select **Add Filter** and add it.
+3. Set the filter **Monitoring Service** to **Prometheus** to see Prometheus alerts.
+4. Click the alert name to view the details of a specific fired/resolved alert.
+
+## Next steps
+
+- [Create a Prometheus rule group](../essentials/prometheus-rule-groups.md).
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
# Application Monitoring for Azure App Service and ASP.NET
-Enabling monitoring on your ASP.NET based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+Enabling monitoring on your ASP.NET based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
> [!NOTE] > Manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the auto-instrumentation instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
The following table shows the recommended [aggregation types](../essentials/metr
||| | Counter | Sum | | Asynchronous Counter | Sum |
-| Histogram | Average, Sum, Count (Max, Min for Python and Node.js only) |
+| Histogram | Min, Max, Average, Sum, and Count |
| Asynchronous Gauge | Average |
-| UpDownCounter (Python and Node.js only) | Sum |
-| Asynchronous UpDownCounter (Python and Node.js only) | Sum |
+| UpDownCounter | Sum |
+| Asynchronous UpDownCounter | Sum |
> [!CAUTION]
> Aggregation types beyond what's shown in the table typically aren't meaningful.
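
As a brief sketch of how these instruments map to the table in .NET (the meter and instrument names are illustrative, and the exporter configuration from this article is assumed to be in place):

```csharp
using System.Diagnostics.Metrics;

// Create a meter and two instruments; the configured exporter aggregates
// them using the types shown in the table above.
var meter = new Meter("Company.Telemetry");

Counter<long> requestCount = meter.CreateCounter<long>("requests");
Histogram<double> requestDuration = meter.CreateHistogram<double>("requestDuration", unit: "ms");

requestCount.Add(1);            // Counter: aggregated as Sum.
requestDuration.Record(123.4);  // Histogram: aggregated as Min, Max, Average, Sum, and Count.
```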
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
By default, all tables in your Log Analytics workspace are Analytics tables, and
| Table | Details| |:|:| | Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) |
-| [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. |
-| [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | Freeform Application Insights traces. |
-| [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | Logs generated by Azure Container Apps, within a Container Apps environment. |
| [ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary) | Communication Services recording summary logs. |
-| [ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | Communication Services rooms operations incoming requests logs. |
+| [ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | Communication Services Rooms incoming requests operations. |
+| [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | Application Insights Freeform traces. |
+| [AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests) | Azure Media Services HTTP request details for key, or license acquisition. |
+| [AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth) | Azure Media Account Health Status. |
+| [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | Azure Container Apps logs, generated within a Container Apps environment. |
+| [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. |
> [!NOTE] > Tables created with the [Data Collector API](data-collector-api.md) don't support Basic Logs.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 11/03/2022 Last updated : 11/07/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* France Central * Germany West Central * Japan East
+* Japan West
* Korea Central * North Central US * North Europe
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 09/07/2022 Last updated : 11/07/2022 # Resource limits for Azure NetApp Files
Size: 4096 Blocks: 8 IO Block: 65536 directory
## `Maxfiles` limits <a name="maxfiles"></a>
-Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 20 million files per TiB of provisioned volume size.
+Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 21,251,126 files per TiB of provisioned volume size.
-The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 20 million. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
+The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 21,251,126. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
| Volume size (quota) | Automatic readjustment of the `maxfiles` limit | |-|-|
-| <= 1 TiB | 20 million |
-| > 1 TiB but <= 2 TiB | 40 million |
-| > 2 TiB but <= 3 TiB | 60 million |
-| > 3 TiB but <= 4 TiB | 80 million |
-| > 4 TiB | 100 million |
+| <= 1 TiB | 21,251,126 |
+| > 1 TiB but <= 2 TiB | 42,502,252 |
+| > 2 TiB but <= 3 TiB | 63,753,378 |
+| > 3 TiB but <= 4 TiB | 85,004,504 |
+| > 4 TiB | 106,255,630 |
>[!IMPORTANT] > If your volume has a quota of at least 4 TiB and you want to increase the quota, you must initiate [a support request](#request-limit-increase).
-For volumes with at least 4 TiB of quota, you can increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
+For volumes with at least 4 TiB of quota, you can increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
You can increase the `maxfiles` limit to 500 million if your volume quota is at least 20 TiB.
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
na Previously updated : 11/01/2022 Last updated : 11/07/2022 # Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
Ensure that you meet the following requirements about network topology and confi
* Ensure that a [supported network topology for Azure NetApp Files](azure-netapp-files-network-topologies.md) is used. * Ensure that AD DS domain controllers have network connectivity from the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes.
+ * Peered virtual network topologies with AD DS domain controllers must have peering configured correctly to support Azure NetApp Files to AD DS domain controller network connectivity.
* Network Security Groups (NSGs) and AD DS domain controller firewalls must have appropriately configured rules to support Azure NetApp Files connectivity to AD DS and DNS. * Ensure that the latency is less than 10ms RTT between Azure NetApp Files and AD DS domain controllers.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 11/03/2022 Last updated : 11/07/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
## November 2022
+* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now generally available (GA) with expanded regional coverage.
+* [Encrypted SMB connections to Domain Controller](create-active-directory-connections.md#encrypted-smb-dc) (Preview): With the encrypted SMB connections to Active Directory Domain Controller capability, you can now specify whether encryption should be used for communication between SMB server and domain controller in Active Directory connections. When enabled, only SMB3 will be used for encrypted domain controller connections.
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following limits apply to [Azure role-based access control (Azure RBAC)](../
[!INCLUDE [signalr-service-limits](../../../includes/signalr-service-limits.md)]
+## Azure Spring Apps limits
+
+To learn more about the limits for Azure Spring Apps, see [Quotas and service plans for Azure Spring Apps](../../spring-apps/quotas.md).
+ ## Azure Virtual Desktop Service limits [!INCLUDE [azure-virtual-desktop-service-limits](../../../includes/azure-virtual-desktop-limits.md)]
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 10/18/2022 Last updated : 11/07/2022
-# Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)
+# Attach Azure NetApp Files datastores to Azure VMware Solution hosts
[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an enterprise-class, high-performance, metered file storage service. The service supports the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. For more information on Azure NetApp Files, see [Azure NetApp Files](../azure-netapp-files/index.yml) documentation. [Azure VMware Solution](./introduction.md) supports attaching Network File System (NFS) datastores as a persistent storage option. You can create NFS datastores with Azure NetApp Files volumes and attach them to clusters of your choice. You can also create virtual machines (VMs) for optimal cost and performance.
-> [!IMPORTANT]
-> Azure NetApp Files datastores for Azure VMware Solution hosts is currently in public preview. This version is provided without a service-level agreement and is not recommended for production workloads. Some features may not be supported or may have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- By using NFS datastores backed by Azure NetApp Files, you can expand your storage instead of scaling the clusters. You can also use Azure NetApp Files volumes to replicate data from on-premises or primary VMware environments for the secondary site. Create your Azure VMware Solution and create Azure NetApp Files NFS volumes in the virtual network connected to it using an ExpressRoute. Ensure there's connectivity from the private cloud to the NFS volumes created. Use those volumes to create NFS datastores and attach the datastores to clusters of your choice in a private cloud. As a native integration, no other permissions configured via vSphere are needed.
Before you begin the prerequisites, review the [Performance best practices](#per
1. [Deploy Azure VMware Solution](./deploy-azure-vmware-solution.md) private cloud and a dedicated virtual network connected via ExpressRoute gateway. The virtual network gateway should be configured with the Ultra performance SKU and have FastPath enabled. For more information, see [Configure networking for your VMware private cloud](tutorial-configure-networking.md) and [Network planning checklist](tutorial-network-checklist.md). 1. Create an [NFSv3 volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md) in the same virtual network created in the previous step. 1. Verify connectivity from the private cloud to Azure NetApp Files volume by pinging the attached target IP.
- 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
-
+ 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
+ `az feature register --name "ANFAvsDataStore" --namespace "Microsoft.NetApp"`
+
+ `az feature show --name "ANFAvsDataStore" --namespace "Microsoft.NetApp" --query properties.state`
Before you begin the prerequisites, review the [Performance best practices](#per
1. Under the **Protocol** section, select **Azure VMware Solution Datastore** to indicate the volume is created to use as a datastore for Azure VMware Solution private cloud. 1. If you're using [export policies](../azure-netapp-files/azure-netapp-files-configure-export-policy.md) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced so if the IP isn't enabled, connectivity to datastore will be impacted.
+>[!NOTE]
+>Azure NetApp Files datastores for Azure VMware Solution are generally available. You must register Azure NetApp Files datastores for Azure VMware Solution before using it.
+ ## Supported regions Azure VMware Solution currently supports the following regions:
Azure VMware Solution currently supports the following regions:
**Brazil** : Brazil South.
-**Europe** : France Central, Germany West Central, North Europe, Switzerland West, UK South, UK West, West Europe
-
-**North America** : Canada Central, Canada East, Central US, East US, East US 2, North Central US, South Central US, West US.
+**Europe** : France Central, Germany West Central, North Europe, Sweden Central, Sweden North, Switzerland West, UK South, UK West, West Europe
-The list of supported regions will expand as the preview progresses.
+**North America** : Canada Central, Canada East, Central US, East US, East US 2, North Central US, South Central US, West US, West US 2.
## Performance best practices
To attach an Azure NetApp Files volume to your private cloud using Portal, follo
1. Under **Settings**, select **Preview features**. 1. Verify you're registered for both the `CloudSanExperience` and `AnfDatstoreExperience` features. 1. Navigate to your Azure VMware Solution.
-Under **Manage**, select **Storage (preview)**.
+Under **Manage**, select **Storage**.
1. Select **Connect Azure NetApp Files volume**. 1. In **Connect Azure NetApp Files volume**, select the **Subscription**, **NetApp account**, **Capacity pool**, and **Volume** to be attached as a datastore.
Under **Manage**, select **Storage (preview)**.
1. Verify the protocol is NFS. You'll need to verify the virtual network and subnet to ensure connectivity to the Azure VMware Solution private cloud. 1. Under **Associated cluster**, select the **Client cluster** to associate the NFS volume as a datastore 1. Under **Data store**, create a personalized name for your **Datastore name**.
- 1. When the datastore is created, you should see all of your datastores in the **Storage (preview)**.
+ 1. When the datastore is created, you should see all of your datastores in **Storage**.
2. You'll also notice that the NFS datastores are added in vCenter.
To attach an Azure NetApp Files volume to your private cloud using Azure CLI, fo
`az feature register --name " AnfDatastoreExperience" --namespace "Microsoft.AVS"` `az feature show --name "AnfDatastoreExperience" --namespace "Microsoft.AVS" --query properties.state`+ 1. Verify the VMware extension is installed. If the extension is already installed, verify you're using the latest version of the Azure CLI extension. If an older version is installed, update the extension. `az extension show --name vmware`
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **What are my options for backup and recovery?**
- Azure NetApp Files (ANF) supports [snapshots](../azure-netapp-files/azure-netapp-files-manage-snapshots.md) of datastores for quick checkpoints for near term recovery or quick clones. ANF backup lets you offload your ANF snapshots to Azure storage. This feature is available in public preview. Only for this technology are copies and stores-changed blocks relative to previously offloaded snapshots in an efficient format. This ability decreases Recovery Point Objective (RPO) and Recovery Time Objective (RTO) while lowering backup data transfer burden on the Azure VMware Solution service.
+ Azure NetApp Files supports [snapshots](../azure-netapp-files/azure-netapp-files-manage-snapshots.md) of datastores for quick checkpoints for near term recovery or quick clones. Azure NetApp Files backup lets you offload your Azure NetApp Files snapshots to Azure storage. It copies and stores only changed blocks relative to previously offloaded snapshots in an efficient format. This ability decreases Recovery Point Objective (RPO) and Recovery Time Objective (RTO) while lowering backup data transfer burden on the Azure VMware Solution service.
- **How do I monitor Storage Usage?**
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
The diagram shows the adjacency between private clouds and VNets in Azure, Azure
## AV36P and AV52 node sizes generally available in Azure VMware Solution
- The new node sizes in will increase memory and storage options to optimize your workloads. These gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of these new nodes allows large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
+ The new node sizes increase memory and storage options to optimize your workloads. These gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of these new nodes allows large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
**AV36P key highlights for Memory and Storage optimized Workloads:**
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
Title: Troubleshoot backup errors with Azure VMs
description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. Previously updated : 10/20/2022 Last updated : 11/07/2022
To resolve this issue:
**Resolution**: Use the same subscription for Restore of Trusted Launch Azure VMs.
+### UserErrorCrossSubscriptionRestoreInvalidTargetSubscription
+
+**Error code**: UserErrorCrossSubscriptionRestoreInvalidTargetSubscription
+
+**Error message**: Operation failed as the target subscription specified for restore is not registered to the Azure Recovery Services Resource Provider.
+
+**Recommended action**: Ensure that the target subscription is registered to the Recovery Services resource provider before you attempt a cross-subscription restore. Creating a vault in the target subscription typically registers the subscription with the Recovery Services resource provider.
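For example, you can register the provider on the target subscription with Azure PowerShell (the subscription ID is a placeholder):

```azurepowershell-interactive
# Switch to the target subscription for the cross-subscription restore.
Set-AzContext -SubscriptionId "<target-subscription-id>"

# Register the Recovery Services resource provider and check its state.
Register-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"
Get-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices" |
    Select-Object ProviderNamespace, RegistrationState
```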
+
## Backup or restore takes time

If your backup takes more than 12 hours, or restore takes more than 6 hours, review [best practices](backup-azure-vms-introduction.md#best-practices), and
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md
Title: Configure Multi-user authorization using Resource Guard
description: This article explains how to configure Multi-user authorization using Resource Guard. zone_pivot_groups: backup-vaults-recovery-services-vault-backup-vault Previously updated : 09/15/2022 Last updated : 11/08/2022
Learn about various [MUA usage scenarios](./multi-user-authorization-concept.md?
The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** from the vault. However, it should be in the **same region** as the vault. The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it.
-For the following example, create the Resource Guard in a tenant different from the vault tenant.
+**Choose a client**
+
+# [Azure portal](#tab/azure-portal)
+
+To create the Resource Guard in a tenant different from the vault tenant, follow these steps:
+ 1. In the Azure portal, go to the directory under which you want to create the Resource Guard. :::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings.":::
For the following example, create the Resource Guard in a tenant different from
Follow notifications for status and successful creation of the Resource Guard.
+# [PowerShell](#tab/powershell)
+
+Use the following command to create a resource guard:
+
+ ```azurepowershell-interactive
+ New-AzDataProtectionResourceGuard -Location "Location" -Name "ResourceGuardName" -ResourceGroupName "rgName"
+ ```
+++ ### Select operations to protect using Resource Guard
-Choose the operations you want to protect using the Resource Guard out of all supported critical operations. By default, all supported critical operations are enabled. However, you can exempt certain operations from falling under the purview of MUA using Resource Guard. The security admin can perform the following steps:
+Choose the operations you want to protect using the Resource Guard out of all supported critical operations. By default, all supported critical operations are enabled. However, you (as the security admin) can exempt certain operations from falling under the purview of MUA using Resource Guard.
+
+**Choose a client**
+
+# [Azure portal](#tab/azure-portal)
+
+To exempt operations, follow these steps:
1. In the Resource Guard created above, go to **Properties**.
2. Select **Disable** for operations that you want to exclude from being authorized using the Resource Guard.
Choose the operations you want to protect using the Resource Guard out of all su
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-properties.png" alt-text="Screenshot showing demo resource guard properties.":::
+# [PowerShell](#tab/powershell)
+
+Use the following commands to update the operations. These exclude operations from protection by the resource guard.
+
+ ```azurepowershell-interactive
+ $resourceGuard = Get-AzDataProtectionResourceGuard -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -ResourceGroupName "rgName" -Name "resGuardName"
+ $criticalOperations = $resourceGuard.ResourceGuardOperation.VaultCriticalOperation
+ $operationsToBeExcluded = $criticalOperations | Where-Object { $_ -match "backupSecurityPIN/action" -or $_ -match "backupInstances/delete" }
++
+ Update-AzDataProtectionResourceGuard -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -ResourceGroupName "rgName" -Name $resourceGuard.Name -CriticalOperationExclusionList $operationsToBeExcluded
+ ```
+
+- The first command fetches the resource guard that needs to be updated.
+- The second command fetches all critical operations, and the third selects the operations you want to exclude.
+- The fourth command excludes some critical operations from the resource guard.
+++++

## Assign permissions to the Backup admin on the Resource Guard to enable MUA

To enable MUA on a vault, the admin of the vault must have **Reader** role on the Resource Guard or subscription containing the Resource Guard. To assign the **Reader** role on the Resource Guard:
To enable MUA on a vault, the admin of the vault must have **Reader** role on th
## Enable MUA on a Recovery Services vault
-Now that the Backup admin has the Reader role on the Resource Guard, they can easily enable multi-user authorization on vaults managed by them. The following steps are performed by the **Backup admin**.
+After the Reader role assignment on the Resource Guard is complete, you (as the **Backup admin**) can enable multi-user authorization on the vaults that you manage.
+
+**Choose a client**
+
+# [Azure portal](#tab/azure-portal)
+
+To enable MUA on the vaults, follow these steps.
1. Go to the Recovery Services vault. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
Now that the Backup admin has the Reader role on the Resource Guard, they can ea
:::image type="content" source="./media/multi-user-authorization/testvault1-enable-mua.png" alt-text="Screenshot showing how to enable Multi-user authentication.":::
+# [PowerShell](#tab/powershell)
+
+Use the following command to enable MUA on a Recovery Services vault:
+
+ ```azurepowershell-interactive
+ $token = (Get-AzAccessToken -TenantId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx").Token
+ Set-AzRecoveryServicesResourceGuardMapping -VaultId "VaultArmId" -ResourceGuardId "ResourceGuardArmId" -Token $token
+ ```
+
+- The first command fetches the access token for the resource guard tenant where the resource guard is present.
+- The second command creates a mapping between the Recovery Services vault and the resource guard.
+
+>[!NOTE]
+>The token parameter is optional and is only needed to authenticate cross tenant protected operations.
++++

## Protected operations using MUA

Once you've enabled MUA, the operations in scope are restricted on the vault if the Backup admin tries to perform them without the required role (that is, the Contributor role) on the Resource Guard.
The following screenshot shows an example of disabling soft delete for an MUA-en
## Disable MUA on a Recovery Services vault
-Disabling MUA is a protected operation, and hence, is protected using MUA. This means that the Backup admin must have the required Contributor role in the Resource Guard. Details on obtaining this role are described here. Following is a summary of steps to disable MUA on a vault.
+Disabling MUA is a protected operation, so it's itself protected using MUA. If you (the Backup admin) want to disable MUA, you must have the required Contributor role in the Resource Guard.
+
+**Choose a client**
+
+# [Azure portal](#tab/azure-portal)
+
+To disable MUA on a vault, follow these steps:
+ 1. The Backup admin requests the **Contributor** role on the Resource Guard from the Security admin. They can request this role using the methods approved by the organization, such as JIT procedures like [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md), or other internal tools and procedures.
1. The Security admin approves the request (if they find it worthy of being approved) and informs the Backup admin. Now the Backup admin has the 'Contributor' role on the Resource Guard.
1. The Backup admin goes to the vault > **Properties** > **Multi-user Authorization**.
Disabling MUA is a protected operation, and hence, is protected using MUA. This
:::image type="content" source="./media/multi-user-authorization/disable-mua.png" alt-text="Screenshot showing to disable multi-user authentication.":::
+# [PowerShell](#tab/powershell)
+
+Use the following command to disable MUA on a Recovery Services vault:
+
+ ```azurepowershell-interactive
+ $token = (Get-AzAccessToken -TenantId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx").Token
+ Remove-AzRecoveryServicesResourceGuardMapping -VaultId "VaultArmId" -Token $token
+ ```
+
+- The first command fetches the access token for the resource guard tenant, where the resource guard is present.
+- The second command deletes the mapping between the Recovery Services vault and the resource guard.
+
+>[!NOTE]
+>The token parameter is optional and is only needed to authenticate the cross tenant protected operations.
+++++++ ::: zone-end
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md
Title: 'About Azure Bastion configuration settings' description: Learn about the available configuration settings for Azure Bastion. + Previously updated : 08/03/2022-- Last updated : 08/15/2022 # About Bastion configuration settings
You can specify the port that you want to use to connect to your VMs. By default
Custom port values are supported for the Standard SKU only.
+## Shareable link (Preview)
+
+The Bastion **Shareable Link** feature lets users connect to a target resource using Azure Bastion without accessing the Azure portal.
+
+When a user without Azure credentials clicks a shareable link, a webpage will open that prompts the user to sign in to the target resource via RDP or SSH. Users authenticate using username and password or private key, depending on what you have configured in the Azure portal for that target resource. Users can connect to the same resources that you can currently connect to with Azure Bastion: VMs or virtual machine scale sets.
+
+| Method | Value | Links | Requires Standard SKU |
+| | | | |
+| Azure portal |Shareable Link | [Configure](shareable-link.md)| Yes |
+
## Next steps

For frequently asked questions, see the [Azure Bastion FAQ](bastion-faq.md).
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
+
+ Title: 'Create a shareable link for Azure Bastion'
+description: Learn how to create a shareable link to let a user connect to a target resource via Bastion without using the Azure portal.
+++ Last updated : 09/13/2022+++
+# Create a shareable link for Bastion - preview
+
+The Bastion **Shareable Link** feature lets users connect to a target resource (virtual machine or virtual machine scale set) using Azure Bastion without accessing the Azure portal. This article helps you use the Shareable Link feature to create a shareable link for an existing Azure Bastion deployment.
+
+When a user without Azure credentials clicks a shareable link, a webpage opens that prompts the user to sign in to the target resource via RDP or SSH. Users authenticate using username and password or private key, depending on what you have configured for the target resource. The shareable link does not contain any credentials - the admin must provide sign-in credentials to the user.
+
+By default, users in your org will have only read access to shared links. If a user has read access, they'll only be able to use and view shared links, but can't create or delete a shareable link. For more information, see the [Permissions](#permissions) section of this article.
+
+## Considerations
+
+* The Shareable Link feature isn't currently supported for peered VNets.
+* The Shareable Link feature isn't supported for national clouds during the preview.
+* The Standard SKU is required for this feature.
+
+## Prerequisites
+
+* Azure Bastion is deployed to your VNet. See [Tutorial - Deploy Bastion using manual settings](tutorial-create-host-portal.md) for steps.
+
+* Bastion must be configured to use the **Standard** SKU for this feature. You can update the SKU from Basic to Standard when you configure the shareable links feature.
+
+* The VNet contains the VM resource to which you want to create a shareable link.
+
+## Enable Shareable Link feature
+
+Before you can create a shareable link to a VM, you must first enable the feature.
+
+1. In the Azure portal, go to your bastion resource.
+
+1. On your **Bastion** page, in the left pane, click **Configuration**.
+
+ :::image type="content" source="./media/shareable-link/configuration-settings.png" alt-text="Screenshot of Configuration settings with shareable link selected." lightbox="./media/shareable-link/configuration-settings.png":::
+
+1. On the **Configuration** page, for **Tier**, select **Standard** if it isn't already selected. This feature requires the **Standard SKU**.
+
+1. Select **Shareable Link** from the listed features to enable the Shareable Link feature.
+
+1. Verify that you've selected the settings that you want, then click **Apply**.
+
+1. Bastion will immediately begin updating the settings for your bastion host. Updates will take about 10 minutes.
+
+## Create shareable links
+
+In this section, you specify each resource for which you want to create a shareable link.
+
+1. In the Azure portal, go to your bastion resource.
+
+1. On your bastion page, in the left pane, click **Shareable links**. Click **+ Add** to open the **Create shareable link** page.
+
+ :::image type="content" source="./media/shareable-link/add.png" alt-text="Screenshot shareable links page with + add." lightbox="./media/shareable-link/add.png":::
+
+1. On the **Create shareable link** page, select the resources for which you want to create a shareable link. You can select specific resources, or you can select all. A separate shareable link will be created for each selected resource. Click **Apply** to create links.
+
+ :::image type="content" source="./media/shareable-link/select-vm.png" alt-text="Screenshot of shareable links page to create a shareable link." lightbox="./media/shareable-link/select-vm.png":::
+
+1. Once the links are created, you can view them on the **Shareable links** page. The following example shows links for multiple resources. You can see that each resource has a separate link and the link status is **Active**. To share a link, copy it, then send it to the user. The link doesn't contain authentication credentials.
+
+ :::image type="content" source="./media/shareable-link/copy-link.png" alt-text="Screenshot of shareable links page to show all available resource links." lightbox="./media/shareable-link/copy-link.png":::
+
+## Connect to a VM
+
+1. After receiving the link, the user opens the link in their browser.
+
+1. In the left corner, the user can select whether to see text and images copied to the clipboard. The user inputs the required information, then clicks **Login** to connect. A shared link doesn't contain authentication credentials. The admin must provide sign-in credentials to the user. Custom port and protocols are supported.
+
+ :::image type="content" source="./media/shareable-link/login.png" alt-text="Screenshot of Sign-in to bastion using the shareable link in the browser." lightbox="./media/shareable-link/login.png":::
+
+> [!NOTE]
+> If a link can no longer be opened, someone in your organization has deleted the target resource. While you'll still see the shared link in your list, it will no longer connect to the target resource and will lead to a connection error. You can delete the shared link from your list, or keep it for auditing purposes.
+>
+
+## Delete a shareable link
+
+1. In the Azure portal, go to your **Bastion resource -> Shareable Links**.
+
+1. On the **Shareable Links** page, select the resource link that you want to delete, then click **Delete**.
+
+ :::image type="content" source="./media/shareable-link/delete.png" alt-text="Screenshot of selecting link to delete." lightbox="./media/shareable-link/delete.png":::
+
+## Permissions
+
+Permissions to the Shareable Link feature are configured using Access control (IAM). By default, users in your org will have only read access to shared links. If a user has read access, they'll only be able to use and view shared links, but can't create or delete a shared link.
+
+To give someone permissions to create or delete a shared link, use the following steps:
+
+1. In the Azure portal, go to the Bastion host.
+1. Go to the **Access control (IAM)** page.
+1. In the Microsoft.Network/bastionHosts section, configure the following permissions:
+
+ * Other: Creates shareable urls for the VMs under a bastion and returns the URLs.
+ * Other: Deletes shareable urls for the provided VMs under a bastion.
+ * Other: Deletes shareable urls for the provided tokens under a bastion.
+
+ These correspond to the following resource provider actions:
+
+ * Microsoft.Network/bastionHosts/createShareableLinks/action
+ * Microsoft.Network/bastionHosts/deleteShareableLinks/action
+ * Microsoft.Network/bastionHosts/deleteShareableLinksByToken/action
+ * Microsoft.Network/bastionHosts/getShareableLinks/action - If this isn't enabled, the user won't be able to see a shareable link.
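If you prefer to script the grant, a minimal sketch that wraps these actions in a custom Azure role with PowerShell follows; the role name and assignable scope are illustrative assumptions:

```azurepowershell-interactive
# Clone an existing role definition to use as a template.
$role = Get-AzRoleDefinition -Name "Reader"
$role.Id = $null
$role.Name = "Bastion Shareable Link Operator"   # illustrative name
$role.Description = "Can create, view, and delete Bastion shareable links."

# Replace the template's actions with the shareable link actions.
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Network/bastionHosts/createShareableLinks/action")
$role.Actions.Add("Microsoft.Network/bastionHosts/deleteShareableLinks/action")
$role.Actions.Add("Microsoft.Network/bastionHosts/deleteShareableLinksByToken/action")
$role.Actions.Add("Microsoft.Network/bastionHosts/getShareableLinks/action")

$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")   # placeholder scope
New-AzRoleDefinition -Role $role
```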
+
+## Next steps
+
+* For additional features, see [Bastion features and configuration settings](configuration-settings.md).
+* For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
cdn Cdn Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-billing.md
If you are using Azure Blob storage as the origin for your content, you also inc
> [!NOTE] > Starting October 2019, if you are using Azure CDN from Microsoft, the cost of data transfer from Origins hosted in Azure to CDN PoPs is free of charge. Azure CDN from Verizon and Azure CDN from Akamai are subject to the rates described below.
-For more information about Azure Storage billing, see [Understanding Azure Storage Billing ΓÇô Bandwidth, Transactions, and Capacity](https://blogs.msdn.microsoft.com/windowsazurestorage/2010/07/08/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity/).
+For more information about Azure Storage billing, see [Plan and manage costs for Azure Storage](../storage/common/storage-plan-manage-costs.md).
If you are using *hosted service delivery*, you will incur charges as follows:
If you use one of the following Azure services as your CDN origin, you will not
- Azure Cache for Redis

## How do I manage my costs most effectively?
-Set the longest TTL possible on your content.
+Set the longest TTL possible on your content.
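One common way to apply a long TTL is for the origin to send an explicit caching header, for example (the max-age value is illustrative):

```
Cache-Control: public, max-age=31536000
```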
cognitive-services Streaming Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/streaming-inference.md
A sample request:
{ "variables": [ {
- "variableName": "Variable_1",
+ "variable": "Variable_1",
"timestamps": [ "2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z",
A sample request:
] }, {
- "variableName": "Variable_2",
+ "variable": "Variable_2",
"timestamps": [ "2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z",
A sample request:
] }, {
- "variableName": "Variable_3",
+ "variable": "Variable_3",
"timestamps": [ "2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z",
cognitive-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-detection.md
Try out the capabilities of face detection quickly and easily using Vision Studi
## Face ID
-The face ID is a unique identifier string for each detected face in an image. You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+The face ID is a unique identifier string for each detected face in an image. Note that Face ID requires limited access approval by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
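As a sketch, a detection call that returns face IDs might look like the following; the resource name, key, and image URL are placeholders, and the call succeeds only after your limited access request is approved:

```console
curl -X POST "https://<your-resource-name>.cognitiveservices.azure.com/face/v1.0/detect?returnFaceId=true" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/photo.jpg"}'
```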
## Face landmarks
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Learn what's new in the service. These items may be release notes, videos, blog
### Computer Vision Image Analysis 4.0 public preview
-Version 4.0 of Computer Vision has been released in public preview. The new API includes image captioning, image tagging, object detection people detection, and Read OCR functionality, available in the same Analyze Image operation. The OCR is optimized for general, non-document images in a performance-enhanced synchronous API that makes it easier to embed OCR-powered experiences in your workflows.
+Image Analysis 4.0 has been released in public preview. The new API includes image captioning, image tagging, object detection, smart crops, people detection, and Read OCR functionality, all available through one Analyze Image operation. The OCR is optimized for general, non-document images in a performance-enhanced synchronous API that makes it easier to embed OCR-powered experiences in your workflows.
## September 2022
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/quickstart.md
Publishing your model makes it available for use with the Translator API. A proj
1. Developers should use the `Category ID` when making translation requests with Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl). More information about the Translator Text API can be found on the [API Reference](../reference/v3-0-reference.md) webpage.
-1. Business users may want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslator/releases/tag/V2.9.4).
+1. Business users may want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
## Next steps
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
The following table lists the base URLs for Azure sovereign cloud endpoints:
|--|--| |Azure portal | <ul><li>[Azure Government Portal](https://portal.azure.us/)</li></ul>| | Available regions</br></br>The region-identifier is a required header when using Translator for the government cloud. | <ul><li>`usgovarizona` </li><li> `usgovvirginia`</li></ul>|
-|Available pricing tiers|<ul><li>Free (F0) and Standard (S0). See [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/)</li></ul>|
-|Supported Features | <ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li><li>[Document Translation](document-translation/overview.md)</li><li>[Custom Translator](custom-translator/overview.md)</li></ul>|
+|Available pricing tiers|<ul><li>Free (F0) and Standard (S1). See [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/)</li></ul>|
+|Supported Features | <ul><li>[Text Translation](reference/v3-0-reference.md)</li><li>[Document Translation](document-translation/overview.md)</li><li>[Custom Translator](custom-translator/overview.md)</li></ul>|
|Supported Languages| <ul><li>[Translator language support](language-support.md)</li></ul>| <!-- markdownlint-disable MD036 -->
curl -X POST "https://api.cognitive.microsofttranslator.us/translate?api-version
``` > [!div class="nextstepaction"]
-> [Azure Government: Translator text reference](reference/rest-api-guide.md)
+> [Azure Government: Translator text reference](/azure/azure-government/documentation-government-cognitiveservices#translator)
### [Azure China 21 Vianet](#tab/china)
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/use-containers.md
The following table describes the minimum and recommended specifications for the
| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS| |||-|--|--|
-| **1 document/request** | 4 core, 10GB memory | 6 core, 12GB memory |15 | 30|
+| **1 document/request** | 4 core, 12GB memory | 6 core, 12GB memory |15 | 30|
| **10 documents/request** | 6 core, 16GB memory | 8 core, 20GB memory |15 | 30|

CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
json
{ "taskName": "analyze 1", "kind": "Healthcare",
+ "parameters":
+ {
+ "modelVersion": "2022-08-15-preview"
+ }
} ] }
json
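For context, a complete request body that pins this preview model version might look like the following sketch (the document text and IDs are illustrative):

```json
{
  "analysisInput": {
    "documents": [
      { "id": "1", "language": "en", "text": "Patient was prescribed 100 mg of ibuprofen." }
    ]
  },
  "tasks": [
    {
      "taskName": "analyze 1",
      "kind": "Healthcare",
      "parameters": { "modelVersion": "2022-08-15-preview" }
    }
  ]
}
```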
## Docker container
-The docker container supports English language, model version 03-01-2022.
+The docker container supports the English language, model version 2022-03-01.
Additional languages are also supported when using a docker container to deploy the API: Spanish, French, German, Italian, Portuguese, and Hebrew. This functionality is currently in preview, with model version 2022-08-15-preview. Full details for deploying the service in a container can be found [here](../text-analytics-for-health/how-to/use-containers.md).
In order to download the new container images from the Microsoft public containe
For English, Spanish, Italian, French, German and Portuguese: ```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/latin
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latin
``` For Hebrew: ```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/semitic
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:semitic
```
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
Previously updated : 6/30/2021 Last updated : 11/07/2022 recommendations: false keywords:
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
#### Example request ```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-06-01-preview\
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-06-01-preview\
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d "{
curl -X DELETE https://example_resource_name.openai.azure.com/openai/deployments
## Next steps
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/what-are-cognitive-services.md
You can create a Cognitive Services resource with hands-on quickstarts using any
* [Azure portal](cognitive-services-apis-create-account.md?tabs=multiservice%2Cwindows "Azure portal") * [Azure CLI](cognitive-services-apis-create-account-cli.md?tabs=windows "Azure CLI")
-* [Azure SDK client libraries](cognitive-services-apis-create-account-cli.md?tabs=windows "cognitive-services-apis-create-account-client-library?pivots=programming-language-csharp")
+* [Azure SDK client libraries](cognitive-services-apis-create-account-client-library.md?tabs=windows "cognitive-services-apis-create-account-client-library?pivots=programming-language-csharp")
* [Azure Resource Manager (ARM template)](./create-account-resource-manager-template.md?tabs=portal "Azure Resource Manager (ARM template)") ## Use Cognitive Services in different development environments
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Note: USA mixed rates to `+1-425` is $0.013. Refer to the following link for det
## Call Recording
-Azure Communication Services allows customers to record PSTN, WebRTC, Conference, SIP Interface calls. Currently Call Recording supports mixed audio+video MP4 and mixed audio-only MP3/WAV output formats. Call Recording SDKs are available for Java and C#. Refer to [this page to learn more](../quickstarts/voice-video-calling/call-recording-sample.md).
+Azure Communication Services allows developers to record PSTN, WebRTC, Conference, or SIP calls. Call Recording supports mixed video MP4, mixed audio MP3/WAV, and unmixed audio WAV output formats. Call Recording SDKs are available for Java and C#. To learn more, view the Call Recording [concepts](./voice-video-calling/call-recording.md) and [quickstart](../quickstarts/voice-video-calling/get-started-call-recording.md).
### Price
-You're charged $0.01/min for mixed audio+video format and $0.002/min for mixed audio-only.
+- Mixed video (audio+video): $0.01/min
+- Mixed audio: $0.002/min
+- Unmixed audio: $0.0012/participant/min
-### Pricing example: Record a call in a mixed audio+video format
+
+### Pricing example: Record a video call
Alice made a group call with her colleagues, Bob and Charlie. -- The call lasts a total of 60 minutes. And recording was active during 60 minutes.
+- The call lasts a total of 60 minutes and recording was active during 60 minutes.
- Bob stayed in a call for 30 minutes and Alice and Charlie for 60 minutes. **Cost calculations**-- You'll be charged the length of the meeting. (Length of the meeting is the timeline between user starts a recording and either explicitly stops or when there's no one left in a meeting).
+- You'll be charged for the length of the meeting. (The length of the meeting is the time between when a user starts the recording and when it's explicitly stopped, or when no one is left in the meeting.)
- 60 minutes x $0.01 per recording per minute = $0.6
-### Pricing example: Record a call in a mixed audio+only format
+### Pricing example: Record an audio call in a mixed format
Alice starts a call with Jane. - The call lasts a total of 60 minutes. The recording lasted for 45 minutes. **Cost calculations**-- You'll be charged the length of the recording.
+- You'll be charged for the length of the recording.
- 45 minutes x $0.002 per recording per minute = $0.09
+### Pricing example: Record an audio call in an unmixed format
+
+Bob starts a call with his financial advisor, Charlie.
+
+- The call lasts a total of 60 minutes. The recording lasted for 50 minutes.
+
+**Cost calculations**
+- You'll be charged for the length of the recording per participant.
+- 50 minutes x $0.0012 x 2 per recording per participant per minute = $0.12
+
## Chat

With Communication Services you can enhance your application with the ability to send and receive chat messages between two or more users. Chat SDKs are available for JavaScript, .NET, Python, and Java. Refer to [this page to learn about SDKs](./sdk-options.md)
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. > Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows for voice and PSTN channels. The SDKs, available for .NET and Java, uses an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, etc.) to steer and control calls based on your business logic.
+Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows, and call recording for voice and PSTN channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, etc.) to steer and control calls based on your business logic.
> [!NOTE] > Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user, or adding them to a call using Call Automation, aren't supported.
Some of the common use cases that can be built using Call Automation include:
- Integrate your communication applications with Contact Centers and your private telephony networks using Direct Routing. - Protect your customer's identity by building number masking services to connect buyers to sellers or users to partner vendors on your platform. - Increase engagement by building automated customer outreach programs for marketing and customer service.
+- Analyze your unmixed audio recordings in a post-call process for quality assurance purposes.
ACS Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture below. You can answer inbound calls or make outbound calls. Execute actions like playing a welcome message, connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
The following list presents the set of features that are currently available in
| Query scenarios | Get the call state | ✔️ | ✔️ |
| | Get a participant in a call | ✔️ | ✔️ |
| | List all participants in a call | ✔️ | ✔️ |
+| Call Recording | Start/pause/resume/stop recording | ✔️ | ✔️ |
*Transfer of VoIP call to a phone number is currently not supported.
These actions can be performed on the calls that are answered or placed using Ca
**Transfer** - When your application answers a call or places an outbound call to an endpoint, that endpoint can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application's ability to control the call using the Call Automation SDKs.
+**Record** - You decide when to start/pause/resume/stop recording based on your application business logic, or you can grant control to the end user to trigger those actions. To learn more, view our [concepts](./call-recording.md) and [quickstart](../../quickstarts/voice-video-calling/get-started-call-recording.md).
+ **Hang-up** ΓÇô When your application has answered a one-to-one call, the hang-up action will remove the call leg and terminate the call with the other endpoint. If there are more than two participants in the call (group call), performing a ΓÇÿhang-upΓÇÖ action will remove your applicationΓÇÖs endpoint from the group call. **Terminate** ΓÇô Whether your application has answered a one-to-one or group call, or placed an outbound call with one or more participants, this action will remove all participants and end the call. This operation is triggered by setting `forEveryOne` property to true in Hang-Up call action.
The Call Automation events are sent to the web hook callback URI specified when
## Next Steps > [!div class="nextstepaction"]
-> [Get started with Call Automation](./../../quickstarts/voice-video-calling/Callflows-for-customer-interactions.md)
+> [Get started with Call Automation](./../../quickstarts/voice-video-calling/Callflows-for-customer-interactions.md)
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
"documentId": string, // Document id for retrieving from storage "index": int, // Index providing ordering for this chunk in the entire recording "endReason": string, // Reason for chunk ending: "SessionEnded",ΓÇ»"ChunkMaximumSizeExceededΓÇ¥, etc.
- "metadataLocation": <string>, // url of the metadata for this chunk
- "contentLocation": <string> // url of the mp4, mp3, or wav for this chunk
+ "metadataLocation": <string>, // url of the metadata for this chunk
+ "contentLocation": <string>, // url of the mp4, mp3, or wav for this chunk
+ "deleteLocation": <string> // url of the mp4, mp3, or wav to delete this chunk
} ] },
confidential-computing Guest Attestation Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/guest-attestation-example.md
The [*guest attestation*](guest-attestation-confidential-vms.md) feature helps you to confirm that a confidential VM runs on a hardware-based trusted execution environment (TEE) with security features enabled for isolation and integrity.
-Sample applications for use with the guest attestation APIs are [available on GitHub](https://github.com/Azure/confidential-computing-cvm-guest-attestation) for [Linux](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-linux-app) and [Windows](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-windows-app).
+Sample applications for use with the guest attestation APIs are [available on GitHub](https://github.com/Azure/confidential-computing-cvm-guest-attestation).
Depending on your [type of scenario](guest-attestation-confidential-vms.md#scenarios), you can reuse the sample code in your client program or workload code.
To use a sample application in C++ for use with the guest attestation APIs, foll
1. Sign in to your VM.
-1. Clone the [sample Linux application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-linux-app).
+1. Clone the sample Linux application.
1. Install the `build-essential` package. This package installs everything required for compiling the sample application.
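On an Ubuntu VM, those two steps might look like this (assuming a Debian-based package manager):

```
sudo apt-get update
sudo apt-get install -y build-essential
git clone https://github.com/Azure/confidential-computing-cvm-guest-attestation.git
```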
To use a sample application in C++ for use with the guest attestation APIs, foll
#### [Windows](#tab/windows) 1. Install Visual Studio with the [**Desktop development with C++** workload](/cpp/build/vscpp-step-0-installation).
-1. Clone the [sample Windows application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-windows-app).
+1. Clone the sample Windows application.
1. Build your project. From the **Build** menu, select **Build Solution**.
1. After the build succeeds, go to the `Release` build folder.
1. Run the application by running `AttestationClientApp.exe`.
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
These features include:
|[Azure Monitor alerts](alerts.md) | Create and manage alerts to notify you of events and conditions based on metric and log data.|

>[!NOTE]
-> While not a built-in feature, [Azure Monitor's Application Insights](../azure-monitor/app/app-insights-overview.md) is a powerful tool to monitor your web and background applications. Although Container Apps doesn't support the Application Insights auto-instrumentation agent, you can instrument your application code using Application Insights SDKs.
+> While not a built-in feature, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) is a powerful tool to monitor your web and background applications. Although Container Apps doesn't support the Application Insights auto-instrumentation agent, you can instrument your application code using Application Insights SDKs.
## Application lifecycle observability
cosmos-db Access Secrets From Keyvault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-secrets-from-keyvault.md
Title: Use Key Vault to store and access Azure Cosmos DB keys
-description: Use Azure Key Vault to store and access Azure Cosmos DB connection string, keys, endpoints.
+ Title: |
+ Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault
+description: |
+ Use Azure Key Vault to store and access Azure Cosmos DB connection string, keys, and endpoints.
+ ms.devlang: csharp Previously updated : 06/01/2022- Last updated : 11/07/2022
-# Secure Azure Cosmos DB credentials using Azure Key Vault
+# Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault
+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
->[!IMPORTANT]
-> The recommended solution to access Azure Cosmos DB is to use a [system-assigned managed identity](managed-identity-based-authentication.md). If your service cannot take advantage of managed identities then use the [cert based solution](certificate-based-authentication.md). If both the managed identity solution and cert based solution do not meet your needs, please use the key vault solution below.
+> [!IMPORTANT]
+> The recommended way to access Azure Cosmos DB is to use a [system-assigned managed identity](managed-identity-based-authentication.md). If your service can't take advantage of managed identities, use [certificate-based authentication](certificate-based-authentication.md). If neither the managed identity solution nor the certificate-based solution meets your needs, use the Azure Key Vault solution in this article.
+
+If you're using Azure Cosmos DB as your database, you connect to databases, containers, and items by using an SDK, the API endpoint, and either the primary or secondary key.
+
+It's not a good practice to store the endpoint URI and sensitive read-write keys directly within application code or a configuration file. Ideally, this data is read from environment variables within the host. In Azure App Service, [app settings](/azure/app-service/configure-common#configure-app-settings) allow you to inject runtime credentials for your Azure Cosmos DB account without the need for developers to store these credentials in an insecure clear-text manner.
+
+Azure Key Vault takes this best practice further by allowing you to store these credentials securely while giving services like Azure App Service managed access to the credentials. Azure App Service will securely read your credentials from Azure Key Vault and inject those credentials into your running application.
+
+With this best practice, developers can store the credentials for tools like the [Azure Cosmos DB emulator](local-emulator.md) or [Try Azure Cosmos DB free](try-free.md) during development. Then, the operations team can ensure that the correct production settings are injected at runtime.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Create an Azure Key Vault instance
+> - Add Azure Cosmos DB credentials as secrets to the key vault
+> - Create and register an Azure App Service resource and grant "read key" permissions
+> - Inject key vault secrets into the App Service resource
+>
+
+> [!NOTE]
+> This tutorial and the sample application use an Azure Cosmos DB for NoSQL account. You can perform many of the same steps using other APIs.
+
+## Prerequisites
+
+- An existing Azure Cosmos DB for NoSQL account.
+ - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
+- GitHub account.
+
+## Before you begin: Get Azure Cosmos DB credentials
+
+Before you start, you'll get the credentials for your existing account.
+
+1. Navigate to the [Azure portal](https://portal.azure.com/) page for the existing Azure Cosmos DB for NoSQL account.
+
+1. From the Azure Cosmos DB for NoSQL account page, select the **Keys** navigation menu option.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/cosmos-keys-option.png" lightbox="media/access-secrets-from-keyvault/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos DB SQL API account page. The Keys option is highlighted in the navigation menu.":::
+
+1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values later in this tutorial.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/cosmos-endpoint-key-credentials.png" lightbox="media/access-secrets-from-keyvault/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos DB SQL API account.":::
+
+## Create an Azure Key Vault resource
+
+First, create a new key vault to store your API for NoSQL credentials.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Select **Create a resource > Security > Key Vault**.
+
+1. On the **Create key vault** page, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Subscription** | Select the Azure subscription that you wish to use for this key vault. |
+ | **Resource group** | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
+ | **Key vault name** | Enter a globally unique name for your key vault. |
+ | **Region** | Select a geographic location to host your key vault. Use the location that is closest to your users to give them the fastest access to the data. |
+ | **Pricing tier** | Select *Standard*. |
+
+1. Leave the remaining settings to their default values.
+
+1. Select **Review + create**.
+
+1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the key vault. Wait for the portal page to display **Your deployment is complete** before moving on.
+
+## Add Azure Cosmos DB access keys to the Key Vault
+
+Now, store your Azure Cosmos DB credentials as secrets in the key vault.
+
+1. Select **Go to resource** to go to the Azure Key Vault resource page.
+
+1. From the Azure Key Vault resource page, select the **Secrets** navigation menu option.
+
+1. Select **Generate/Import** from the menu.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/create-new-secret.png" alt-text="Screenshot of the Generate/Import option in a key vault menu.":::
+
+1. On the **Create a secret** page, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Upload options** | *Manual* |
+ | **Name** | *cosmos-endpoint* |
+ | **Secret value** | Enter the **URI** you copied earlier in this tutorial. |
+
+ :::image type="content" source="media/access-secrets-from-keyvault/create-endpoint-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal with details for an URI secret.":::
+
+1. Select **Create** to create the new **cosmos-endpoint** secret.
+
+1. Select **Generate/Import** from the menu again. On the **Create a secret** page, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Upload options** | *Manual* |
+ | **Name** | *cosmos-readwrite-key* |
+ | **Secret value** | Enter the **PRIMARY KEY** you copied earlier in this tutorial. |
+
+ :::image type="content" source="media/access-secrets-from-keyvault/create-key-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal with details for a PRIMARY KEY secret.":::
+
+1. Select **Create** to create the new **cosmos-readwrite-key** secret.
+
+1. After the secrets are created, view them in the list of secrets within the **Secrets** page.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/view-secrets-list.png" alt-text="Screenshot of the list of secrets for a key vault.":::
+
+1. Select each key, select the latest version, and then copy the **Secret Identifier**. You'll use the identifier for the **cosmos-endpoint** and **cosmos-readwrite-key** secrets later in this tutorial.
+
+ > [!TIP]
+ > The secret identifier will be in this format `https://<key-vault-name>.vault.azure.net/secrets/<secret-name>/<version-id>`. For example, if the name of the key vault is **msdocs-key-vault**, the name of the key is **cosmos-readwrite-key**, and the version is **83b995e363d947999ac6cf487ae0e12e**, then the secret identifier would be `https://msdocs-key-vault.vault.azure.net/secrets/cosmos-readwrite-key/83b995e363d947999ac6cf487ae0e12e`.
+ >
+ > :::image type="content" source="media/access-secrets-from-keyvault/view-secret-identifier.png" alt-text="Screenshot of a secret identifier for a key vault secret named cosmos-readwrite-key.":::
+ >
+
+## Create and register an Azure Web App with Azure Key Vault
+
+In this section, create a new Azure Web App, deploy a sample application, and then register the Web App's managed identity with Azure Key Vault.
+
+1. Create a new GitHub repository using the [cosmos-db-nosql-dotnet-sample-web-environment-variables template](https://github.com/azure-samples/cosmos-db-nosql-dotnet-sample-web-environment-variables/generate).
+
+1. In the Azure portal, select **Create a resource > Web > Web App**.
+
+1. On the **Create Web App** page and **Basics** tab, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Subscription** | Select the Azure subscription that you wish to use for this web app. |
+ | **Resource group** | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
+ | **Name** | Enter a globally unique name for your web app. |
+ | **Publish** | Select *Code*. |
+ | **Runtime stack** | Select *.NET 6 (LTS)*. |
+ | **Operating System** | Select *Windows*. |
+ | **Region** | Select a geographic location to host your web app. Use the location that is closest to your users to give them the fastest access to the app. |
+
+1. Leave the remaining settings to their default values.
+
+1. Select **Next: Deployment**.
+
+1. On the **Deployment** tab, enter the following information:
+
+ | Setting | Description |
+ | | |
+ | **Continuous deployment** | Select *Enable*. |
+ | **GitHub account** | Select *Authorize*. Follow the GitHub account authorization prompts to grant Azure permission to read your newly created GitHub repository. |
+ | **Organization** | Select the organization for your new GitHub repository. |
+ | **Repository** | Select the name of your new GitHub repository. |
+ | **Branch** | Select *main*. |
+
+1. Select **Review + create**.
+
+1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the web app. Wait for the portal page to display **Your deployment is complete** before moving on.
+
+1. You may need to wait a few extra minutes for the web application to be initially deployed to the web app. From the Azure Web App resource page, select **Browse** to see the default state of the app.
+
+ :::image type="content" source="media/access-secrets-from-keyvault/sample-web-app-empty.png" lightbox="media/access-secrets-from-keyvault/sample-web-app-empty.png" alt-text="Screenshot of the web application in its default state without credentials.":::
+
+1. Select the **Identity** navigation menu option.
-When using Azure Cosmos DB, you can access the database, collections, documents by using the endpoint and the key within the app's configuration file. However, it's not safe to put keys and URL directly in the application code because they're available in clear text format to all the users. You want to make sure that the endpoint and keys are available but through a secured mechanism. This scenario is where Azure Key Vault can help you to securely store and manage application secrets.
+1. On the **Identity** page, select **On** for **System-assigned** managed identity, and then select **Save**.
-The following steps are required to store and read Azure Cosmos DB access keys from Key Vault:
+ :::image type="content" source="media/access-secrets-from-keyvault/enable-managed-identity.png" alt-text="Screenshot of system-assigned managed identity being enabled from the Identity page.":::
-* Create a Key Vault
-* Add Azure Cosmos DB access keys to the Key Vault
-* Create an Azure web application
-* Register the application & grant permissions to read the Key Vault
+## Inject Azure Key Vault secrets as Azure Web App app settings
+Finally, inject the secrets stored in your key vault as app settings within the web app. The app settings will, in turn, inject the credentials into the application at runtime without storing the credentials in clear text.
-## Create a Key Vault
+1. Return to the key vault page in the Azure portal. Select **Access policies** from the navigation menu.
-1. Sign in to [Azure portal](https://portal.azure.com/).
-2. Select **Create a resource > Security > Key Vault**.
-3. On the **Create key vault** section provide the following information:
- * **Name:** Provide a unique name for your Key Vault.
- * **Subscription:** Choose the subscription that you'll use.
- * Within **Resource Group**, choose **Create new** and enter a resource group name.
- * In the Location pull-down menu, choose a location.
- * Leave other options to their defaults.
-4. After providing the information above, select **Create**.
+1. On the **Access policies** page, select **Create** from the menu.
-## Add Azure Cosmos DB access keys to the Key Vault.
-1. Navigate to the Key Vault you created in the previous step, open the **Secrets** tab.
-2. Select **+Generate/Import**,
+ :::image type="content" source="media/access-secrets-from-keyvault/create-access-policy.png" alt-text="Screenshot of the Create option in the Access policies menu.":::
- * Select **Manual** for **Upload options**.
- * Provide a **Name** for your secret
- * Provide the connection string of your Azure Cosmos DB account into the **Value** field. And then select **Create**.
+1. On the **Permissions** tab of the **Create an access policy** page, select the **Get** option in the **Secret permissions** section. Select **Next**.
- :::image type="content" source="./media/access-secrets-from-keyvault/create-a-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal.":::
+ :::image type="content" source="media/access-secrets-from-keyvault/get-secrets-permission.png" alt-text="Screenshot of the Get permission enabled for Secret permissions.":::
-4. After the secret is created, open it and copy the **Secret Identifier that is in the following format. You'll use this identifier in the next section.
+1. On the **Principal** tab, select the name of the web app you created earlier in this tutorial. Select **Next**.
- `https://<Key_Vault_Name>.vault.azure.net/secrets/<Secret _Name>/<ID>`
+ :::image type="content" source="media/access-secrets-from-keyvault/assign-principal.png" alt-text="Screenshot of a web app managed identity assigned to a permission.":::
-## Create an Azure web application
+ > [!NOTE]
+ > In this example screenshot, the web app is named **msdocs-dotnet-web**.
-1. Create an Azure web application or you can download the app from the [GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/Demo/keyvaultdemo). It's a simple MVC application.
+1. Select **Next** again to skip the **Application** tab. On the **Review + create** tab, review the settings you provided, and then select **Create**.
-2. Unzip the downloaded application and open the **HomeController.cs** file. Update the secret ID in the following line:
+1. Return to the web app page in the Azure portal. Select **Configuration** from the navigation menu.
- `var secret = await keyVaultClient.GetSecretAsync("<Your Key Vault's secret identifier>")`
+1. On the **Configuration** page, select **New application setting**. In the **Add/Edit application setting** dialog, enter the following information:
-3. **Save** the file, **Build** the solution.
-4. Next deploy the application to Azure. Open the context menu for the project and choose **publish**. Create a new app service profile (you can name the app WebAppKeyVault1) and select **Publish**.
+ | Setting | Description |
+ | | |
+ | **Name** | `CREDENTIALS__ENDPOINT` |
+ | **Value** | Get the **secret identifier** for the **cosmos-endpoint** secret in your key vault that you created earlier in this tutorial. Enter the identifier in the following format: `@Microsoft.KeyVault(SecretUri=<secret-identifier>)`. |
-5. Once the application is deployed from the Azure portal, navigate to web app that you deployed, and turn on the **Managed service identity** of this application.
+ > [!TIP]
+ > Ensure that the environment variable name uses a double underscore (`__`) rather than a single underscore. The double underscore is a key delimiter supported by .NET on all platforms. For more information, see [environment variables configuration](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).
- :::image type="content" source="./media/access-secrets-from-keyvault/turn-on-managed-service-identity.png" alt-text="Screenshot of the Managed service identity page in the Azure portal.":::
+ > [!NOTE]
+ > For example, if the secret identifier is `https://msdocs-key-vault.vault.azure.net/secrets/cosmos-endpoint/69621c59ef5b4b7294b5def118921b07`, then the reference would be `@Microsoft.KeyVault(SecretUri=https://msdocs-key-vault.vault.azure.net/secrets/cosmos-endpoint/69621c59ef5b4b7294b5def118921b07)`.
+ >
+ > :::image type="content" source="media/access-secrets-from-keyvault/create-app-setting.png" alt-text="Screenshot of the Add/Edit application setting dialog with a new app setting referencing a key vault secret.":::
+ >
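At runtime, App Service resolves each Key Vault reference and exposes it to the application as an ordinary environment variable, and .NET configuration maps the double-underscore names onto hierarchical keys. A minimal sketch, assuming an ASP.NET Core minimal-hosting app and that the `Credentials` section name matches the `CREDENTIALS__*` settings above:

```csharp
var builder = WebApplication.CreateBuilder(args);

// CREDENTIALS__ENDPOINT and CREDENTIALS__KEY surface as the "Credentials"
// configuration section; "__" maps to the ":" hierarchy separator.
string? endpoint = builder.Configuration["Credentials:Endpoint"];
string? key = builder.Configuration["Credentials:Key"];
```

Because the references are resolved before the process starts, no Key Vault SDK calls are needed in the application code.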
-If you run the application now, you'll see the following error, as you have not given any permission to this application in Key Vault.
+1. Select **OK** to persist the new app setting.
+1. Select **New application setting** again. In the **Add/Edit application setting** dialog, enter the following information and then select **OK**:
-## Register the application & grant permissions to read the Key Vault
+ | Setting | Description |
+ | | |
+ | **Name** | `CREDENTIALS__KEY` |
+ | **Value** | Get the **secret identifier** for the **cosmos-readwrite-key** secret in your key vault that you created earlier in this tutorial. Enter the identifier in the following format: `@Microsoft.KeyVault(SecretUri=<secret-identifier>)`. |
-In this section, you register the application with Azure Active Directory and give permissions for the application to read the Key Vault.
+1. Back on the **Configuration** page, select **Save** to update the app settings for the web app.
-1. Navigate to the Azure portal, open the **Key Vault** you created in the previous section.
+ :::image type="content" source="media/access-secrets-from-keyvault/save-app-settings.png" alt-text="Screenshot of the Save option in the Configuration page's menu.":::
-2. Open **Access policies**, select **+Add New** find the web app you deployed, select permissions and select **OK**.
+1. Wait a few minutes for the web app to restart with the new app settings. At this point, the new app settings should indicate that they're a **Key vault Reference**.
- :::image type="content" source="./media/access-secrets-from-keyvault/add-access-policy.png" alt-text="Add access policy":::
+ :::image type="content" source="media/access-secrets-from-keyvault/app-settings-reference.png" lightbox="media/access-secrets-from-keyvault/app-settings-reference.png" alt-text="Screenshot of the Key vault Reference designation on two app settings in a web app.":::
-Now, if you run the application, you can read the secret from Key Vault.
+1. Select **Overview** from the navigation menu. Select **Browse** to see the app with populated credentials.
-
-Similarly, you can add a user to access the key Vault. You need to add yourself to the Key Vault by selecting **Access Policies** and then grant all the permissions you need to run the application from Visual studio. When this application is running from your desktop, it takes your identity.
+ :::image type="content" source="media/access-secrets-from-keyvault/sample-web-app-populated.png" lightbox="media/access-secrets-from-keyvault/sample-web-app-populated.png" alt-text="Screenshot of the web application with valid Azure Cosmos DB for NoSQL account credentials.":::
## Next steps
-* To configure a firewall for Azure Cosmos DB, see [firewall support](how-to-configure-firewall.md) article.
-* To configure virtual network service endpoint, see [secure access by using VNet service endpoint](how-to-configure-vnet-service-endpoint.md) article.
+- To configure a firewall for Azure Cosmos DB, see [firewall support](how-to-configure-firewall.md) article.
+- To configure virtual network service endpoint, see [secure access by using VNet service endpoint](how-to-configure-vnet-service-endpoint.md) article.
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/change-feed.md
Change feed functionality is surfaced as change stream in API for MongoDB and Qu
Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB for Apache Cassandra enhances the ability to query the changes with predicates via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB for Apache Cassandra](cassandr).
+## Measuring change feed request unit consumption
+
+Use Azure Monitor to measure the request unit (RU) consumption of the change feed. For more information, see [monitor throughput or request unit usage in Azure Cosmos DB](monitor-request-unit-usage.md).
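Request charges can also be observed from application code, since every change feed response reports the RUs it consumed. A minimal sketch using the .NET SDK's change feed pull model (version 3.25 or later), assuming an existing `Container` instance named `container`:

```csharp
using System.Net;
using Microsoft.Azure.Cosmos;

FeedIterator<dynamic> iterator = container.GetChangeFeedIterator<dynamic>(
    ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);

double totalCharge = 0;
while (iterator.HasMoreResults)
{
    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
    totalCharge += page.RequestCharge; // RUs consumed by this page

    // HTTP 304 means the feed is currently drained; stop polling for this sketch.
    if (page.StatusCode == HttpStatusCode.NotModified)
    {
        break;
    }
}
Console.WriteLine($"Change feed reads consumed {totalCharge} RUs.");
```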
+ ## Next steps You can now proceed to learn more about change feed in the following articles:
cosmos-db How To Setup Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-managed-identity.md
az cosmosdb identity remove \
## Next steps
+> [!div class="nextstepaction"]
+> [Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault](access-secrets-from-keyvault.md)
+ - Learn more about [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) - Learn more about [customer-managed keys on Azure Cosmos DB](how-to-setup-cmk.md)
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Title: Introduction to Azure Cosmos DB
-description: Learn about Azure Cosmos DB. This globally distributed multi-model database is built for low latency, elastic scalability, high availability, and offers native support for NoSQL data.
+description: Learn about Azure Cosmos DB. This globally distributed multi-model database is built for low latency, elastic scalability, high availability, and offers native support for NoSQL and relational data.
adobe-target: true
Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds.
-Azure Cosmos DB is a fully managed NoSQL database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
+Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
App development is faster and more productive thanks to:
You can [Try Azure Cosmos DB for Free](https://azure.microsoft.com/try/cosmosdb/
> [!TIP] > To learn more about Azure Cosmos DB, join us every Thursday at 1PM Pacific on Azure Cosmos DB Live TV. See the [Upcoming session schedule and past episodes](https://gotcosmos.com/tv). ## Key Benefits
Gain unparalleled [SLA-backed](https://azure.microsoft.com/support/legal/sla/cos
Build fast with open source APIs, multiple SDKs, schemaless data and no-ETL analytics over operational data. - Deeply integrated with key Azure services used in modern (cloud-native) app development including Azure Functions, IoT Hub, AKS (Azure Kubernetes Service), App Service, and more.-- Choose from multiple database APIs including the native API for NoSQL, API for MongoDB, Apache Cassandra, Apache Gremlin, and Table.
+- Choose from multiple database APIs including the native API for NoSQL, MongoDB, PostgreSQL, Apache Cassandra, Apache Gremlin, and Table.
- Build apps on API for NoSQL using the languages of your choice with SDKs for .NET, Java, Node.js and Python. Or your choice of drivers for any of the other database APIs. - Change feed makes it easy to track and manage changes to database containers and create triggered events with Azure Functions. - Azure Cosmos DB's schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing fast queries.
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-processor.md
The change feed processor is resilient to user code errors. That means that if y
> [!NOTE] > There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. On those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
-To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might be another Azure Cosmos DB container. The exact data store does not matter, simply that the unprocessed changes are persisted.
+To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store does not matter, simply that the unprocessed changes are persisted.
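For illustration, a minimal sketch of a delegate that follows this pattern; `ToDoItem`, `ProcessItemAsync`, and `erroredMessageContainer` are hypothetical names, not part of the SDK:

```csharp
async Task HandleChangesAsync(
    IReadOnlyCollection<ToDoItem> changes, CancellationToken cancellationToken)
{
    foreach (ToDoItem item in changes)
    {
        try
        {
            // Your business logic for a single change.
            await ProcessItemAsync(item, cancellationToken);
        }
        catch (Exception)
        {
            // Persist the unprocessed change so the processor can advance
            // past this batch instead of retrying it indefinitely.
            await erroredMessageContainer.CreateItemAsync(
                item, cancellationToken: cancellationToken);
        }
    }
}
```

Because the signature matches the handler expected by `GetChangeFeedProcessorBuilder<T>`, a delegate like this can be registered without further changes.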
In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed or use the [life cycle notifications](#life-cycle-notifications) to detect underlying failures.
The change feed processor is resilient to user code errors. That means that if y
> [!NOTE] > There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. On those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
-To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might be another Azure Cosmos DB container. The exact data store does not matter, simply that the unprocessed changes are persisted.
+To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store does not matter, simply that the unprocessed changes are persisted.
In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed.
cosmos-db Estimate Ru With Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/estimate-ru-with-capacity-planner.md
After you sign in, you can see more fields compared to the fields in basic mode.
|Indexing policy|By default, Azure Cosmos DB [indexes all properties](../index-policy.md) in all items for flexible and efficient queries (maps to the **Automatic** indexing policy). <br/><br/> If you choose **off**, none of the properties are indexed. This results in the lowest RU charge for writes. Select **off** policy if you expect to only do [point reads](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) (key value lookups) and/or writes, and no queries. <br/><br/> If you choose **Automatic**, Azure Cosmos DB automatically indexes all the items as they are written. <br/><br/> **Custom** indexing policy allows you to include or exclude specific properties from the index for lower write throughput and storage. To learn more, see [indexing policy](../index-overview.md) and [sample indexing policies](how-to-manage-indexing-policy.md#indexing-policy-examples) articles.| |Total data stored in transactional store |Total estimated data stored(GB) in the transactional store in a single region.| |Use analytical store| Choose **On** if you want to use analytical store. Enter the **Total data stored in analytical store**, it represents the estimated data stored(GB) in the analytical store in a single region. |
-|Workload mode|Select **Steady** option if your workload volume is constant. <br/><br/> Select **Variable** option if your workload volume changes over time. For example, during a specific day or a month. The following setting is available if you choose the variable workload option:<ul><li>Percentage of time at peak: Percentage of time in a month where your workload requires peak (highest) throughput. </li></ul> <br/><br/> For example, if you have a workload that has high activity during 9am – 6pm weekday business hours, then the percentage of time at peak is: 45 hours at peak / 730 hours / month = ~6%.<br/><br/>With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](../set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly.|
+|Workload mode|Select **Steady** option if your workload volume is constant. <br/><br/> Select **Variable** option if your workload volume changes over time. For example, during a specific day or a month. The following setting is available if you choose the variable workload option:<ul><li>Percentage of time at peak: Percentage of time in a month where your workload requires peak (highest) throughput. </li></ul> <br/><br/> For example, if you have a workload that has high activity during 9am – 6pm weekday business hours, then the percentage of time at peak is: `(9 hours per weekday at peak * 5 days per week at peak) / (24 hours per day * 7 days in a week) = 45 / 168 = ~27%`.<br/><br/>With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](../set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly.|
|Item size|The size of the data item (for example, document), ranging from 1 KB to 2 MB. You can add estimates for multiple sample items. <br/><br/>You can also **Upload sample (JSON)** document for a more accurate estimate.<br/><br/>If your workload has multiple types of items (with different JSON content) in the same container, you can upload multiple JSON documents and get the estimate. Use the **Add new item** button to add multiple sample JSON documents.| | Number of properties | The average number of properties per an item. | |Point reads/sec |Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. Point read operations are different from query read operations. To learn more about point reads, see the [options to read data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries) article. If your workload mode is **Variable**, you can provide the expected number of point read operations at peak and off peak. |
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
ms.devlang: csharp Previously updated : 11/03/2022 Last updated : 11/07/2022
Get started with the Azure Cosmos DB client library for .NET to create databases
## Prerequisites - An Azure account with an active subscription.
- - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit.
+ - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
- [.NET 6.0 or later](https://dotnet.microsoft.com/download) - [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
This section walks you through creating an Azure Cosmos DB account and setting u
### <a id="create-account"></a>Create an Azure Cosmos DB account > [!TIP]
-> Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit. If you create an account using the free trial, you can safely skip this section.
+> No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. If you create an account using the free trial, you can safely skip ahead to the [Create a new .NET app](#create-a-new-net-app) section.
[!INCLUDE [Create resource tabbed conceptual - ARM, Azure CLI, PowerShell, Portal](./includes/create-resources.md)]
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-dotnet.md
> * [.NET](samples-dotnet.md) >
-The [cosmos-db-sql-api-dotnet-samples](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for NoSQL resources.
+The [cosmos-db-nosql-dotnet-samples](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for NoSQL resources.
## Prerequisites
The sample projects are all self-contained and are designed to be run individually
| Task | API reference | | : | : |
-| [Create a client with endpoint and key](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/101-client-endpoint-key/Program.cs#L11-L14) |[``CosmosClient(string, string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with connection string](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/102-client-connection-string/Program.cs#L11-L13) |[``CosmosClient(string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with ``DefaultAzureCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/103-client-default-credential/Program.cs#L20-L23) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with custom ``TokenCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/104-client-secret-credential/Program.cs#L25-L28) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with endpoint and key](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/101-client-endpoint-key/Program.cs#L11-L14) |[``CosmosClient(string, string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with connection string](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/102-client-connection-string/Program.cs#L11-L13) |[``CosmosClient(string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with ``DefaultAzureCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/103-client-default-credential/Program.cs#L20-L23) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with custom ``TokenCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/104-client-secret-credential/Program.cs#L25-L28) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
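As a quick orientation to the samples linked above, the client constructors differ only in their second argument. A minimal sketch; the endpoint and key are placeholders, and `DefaultAzureCredential` comes from the Azure.Identity package:

```csharp
using Azure.Identity;
using Microsoft.Azure.Cosmos;

// Key-based authentication.
CosmosClient keyClient = new(
    accountEndpoint: "https://<account>.documents.azure.com:443/",
    authKeyOrResourceToken: "<account-key>");

// Azure AD token-based authentication.
CosmosClient aadClient = new(
    accountEndpoint: "https://<account>.documents.azure.com:443/",
    tokenCredential: new DefaultAzureCredential());
```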
### Databases | Task | API reference | | : | : |
-| [Create a database](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/200-create-database/Program.cs#L19-L21) |[``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
+| [Create a database](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/200-create-database/Program.cs#L19-L21) |[``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
### Containers | Task | API reference | | : | : |
-| [Create a container](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/225-create-container/Program.cs#L26-L30) |[``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
+| [Create a container](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/225-create-container/Program.cs#L26-L30) |[``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
### Items | Task | API reference | | : | : |
-| [Create an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/250-create-item/Program.cs#L35-L46) |[``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
-| [Point read an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/275-read-item/Program.cs#L51-L54) |[``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
-| [Query multiple items](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/300-query-items/Program.cs#L64-L80) |[``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
+| [Create an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/250-create-item/Program.cs#L35-L46) |[``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
+| [Point read an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/275-read-item/Program.cs#L51-L54) |[``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
+| [Query multiple items](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/main/300-query-items/Program.cs#L64-L80) |[``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
## Next steps
cosmos-db Tutorial Dotnet Console App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-dotnet-console-app.md
In this tutorial, you learn how to:
## Prerequisites - An existing Azure Cosmos DB for NoSQL account.
- - If you have an Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
- - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit.
+ - If you have an existing Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
+ - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
- [Visual Studio Code](https://code.visualstudio.com) - [.NET 6 (LTS) or later](https://dotnet.microsoft.com/download/dotnet/6.0) - Experience writing C# applications.
cosmos-db Tutorial Dotnet Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-dotnet-web-app.md
In this tutorial, you learn how to:
## Prerequisites - An existing Azure Cosmos DB for NoSQL account.
- - If you have an Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
- - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit.
+ - If you have an existing Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
+ - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
- [Visual Studio Code](https://code.visualstudio.com) - [.NET 6 (LTS) or later](https://dotnet.microsoft.com/download/dotnet/6.0) - Experience writing C# applications.
cosmos-db Partial Document Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update.md
A JSON Patch document:
{ "op": "add", "path": "/color", "value": "silver" }, { "op": "remove", "path": "/used" }, { "op": "set", "path": "/price", "value": 355.45 }
- { "op": "increment", "path": "/inventory/quantity", "value": 10 }
+ { "op": "incr", "path": "/inventory/quantity", "value": 10 }
] ```
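With the .NET SDK, the same operations can be expressed through `PatchItemAsync`; a minimal sketch, assuming an existing `Container` named `container` and placeholder item ID and partition key values:

```csharp
using Microsoft.Azure.Cosmos;

ItemResponse<dynamic> response = await container.PatchItemAsync<dynamic>(
    id: "<item-id>",
    partitionKey: new PartitionKey("<partition-key-value>"),
    patchOperations: new[]
    {
        PatchOperation.Add("/color", "silver"),
        PatchOperation.Remove("/used"),
        PatchOperation.Set("/price", 355.45),
        PatchOperation.Increment("/inventory/quantity", 10),
    });
```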
cosmos-db Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/reserved-capacity.md
You can cancel, exchange, or refund reservations with certain limitations. For m
## Exceeding reserved capacity
-When you reserve capacity for your Azure Cosmos DB resources, you are reserving [provisioned thorughput](set-throughput.md). If the provisioned throughput is exceeded, requests beyond that provisioning will be rate-limited. For more information, see [provisioned throughput types](how-to-choose-offer.md#overview-of-provisioned-throughput-types).
+When you reserve capacity for your Azure Cosmos DB resources, you are reserving [provisioned throughput](set-throughput.md). If the provisioned throughput is exceeded, requests beyond that provisioning will be billed using pay-as-you-go rates. For more information on reservations, see the [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) article. For more information on provisioned throughput, see [provisioned throughput types](how-to-choose-offer.md#overview-of-provisioned-throughput-types).
## Next steps
cosmos-db How To Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-nodejs.md
communicate with the Storage REST services.
Add the following code to the top of the **server.js** file in your application: ```javascript
-const { TableServiceClient } = require("@azure/data-tables");
+const { TableServiceClient, odata } = require("@azure/data-tables");
``` ## Connect to Azure Table service
For successful batch operations, `result` contains information for each operatio
To return a specific entity based on the **PartitionKey** and **RowKey**, use the **getEntity** method. ```javascript
-let result = await tableClient.getEntity("hometasks", "1");
- // result contains the entity
+let result = await tableClient.getEntity("hometasks", "1")
+ .catch((error) => {
+ // handle any errors
+ });
+ // result contains the entity
``` After this operation is complete, `result` contains the entity.
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
Title: Try Azure Cosmos DB free
description: Try Azure Cosmos DB free of charge. No sign-up or credit card required. It's easy to test your apps, deploy, and run small workloads free for 30 days. Upgrade your account at any time during your trial. -+
cost-management-billing Assign Access Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/assign-access-acm-data.md
To view cost data for Azure EA subscriptions, a user must have at least read acc
| **Scope** | **Defined at** | **Required access to view data** | **Prerequisite EA setting** | **Consolidates data to** | | | | | | |
-| Billing account<sup>1</sup> | [https://ea.azure.com](https://ea.azure.com/) | Enterprise Admin | None | All subscriptions from the enterprise agreement |
+| Billing account¹ | [https://ea.azure.com](https://ea.azure.com/) | Enterprise Admin | None | All subscriptions from the enterprise agreement |
| Department | [https://ea.azure.com](https://ea.azure.com/) | Department Admin | **DA view charges** enabled | All subscriptions belonging to an enrollment account that is linked to the department |
-| Enrollment account<sup>2</sup> | [https://ea.azure.com](https://ea.azure.com/) | Account Owner | **AO view charges** enabled | All subscriptions from the enrollment account |
+| Enrollment account² | [https://ea.azure.com](https://ea.azure.com/) | Account Owner | **AO view charges** enabled | All subscriptions from the enrollment account |
| Management group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All subscriptions below the management group | | Subscription | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All resources/resource groups in the subscription | | Resource group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All resources in the resource group |
-<sup>1</sup> The billing account is also referred to as the Enterprise Agreement or Enrollment.
+¹ The billing account is also referred to as the Enterprise Agreement or Enrollment.
-<sup>2</sup> The enrollment account is also referred to as the account owner.
+² The enrollment account is also referred to as the account owner.
Direct enterprise administrators can assign the billing account, department, and enrollment account scope the in the [Azure portal](https://portal.azure.com/). For more information, see [Azure portal administration for direct Enterprise Agreements](../manage/direct-ea-administration.md).
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 08/23/2022 Last updated : 11/07/2022
If you have a Microsoft Customer Agreement, Microsoft Partner Agreement, or Ente
If you don't have a Microsoft Customer Agreement, Microsoft Partner Agreement, or Enterprise Agreement, then you won't see the **File Partitioning** option.
+Partitioning isn't currently supported for resource group or management group scopes.
+ #### Update existing exports to use file partitioning If you have existing exports and you want to set up file partitioning, create a new export. File partitioning is only available with the latest Exports version. There may be minor changes to some of the fields in the usage files that get created.
cost-management-billing Create Multiple Subscriptions Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-multiple-subscriptions-error.md
+
+ Title: Error when you create multiple subscriptions
+
+description: Provides the solution for a problem where you get an error message when you try to create multiple subscriptions.
++
+tags: billing
+++ Last updated : 11/07/2022+++
+# Error when you create multiple subscriptions
+
+When you try to create multiple Azure subscriptions in a short period of time, you might receive an error stating:
+
+`Subscription not created. Please try again later.`
+
+The error is normal and expected.
+
+The error can occur for customers with the following Azure subscription agreement type:
+
+- Microsoft Customer Agreement purchased directly through Azure.com
+
+## Solution
+
+Expect a delay before you can create another subscription.
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- Learn more about [Programmatically creating Azure subscriptions for a Microsoft Customer Agreement with the latest APIs](programmatically-create-subscription-microsoft-customer-agreement.md).
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
After a department is created, the EA admin can add department administrators an
- Add accounts - Remove accounts - Download usage details-- View the monthly usage and charges <sup>1</sup>
+- View the monthly usage and charges ¹
- <sup>1</sup> An EA admin must grant the permissions.
+ ¹ An EA admin must grant the permissions.
### To add a department administrator
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md
After a department is created, the enterprise administrator can add department a
- Add accounts - Remove accounts - Download usage details-- View the monthly usage and charges <sup>1</sup>
+- View the monthly usage and charges ¹
-> <sup>1</sup> An enterprise administrator must grant these permissions. If you were given permission to view department monthly usage and charges, but can't see them, contact your partner.
+> ¹ An enterprise administrator must grant these permissions. If you were given permission to view department monthly usage and charges, but can't see them, contact your partner.
### To add a department administrator
cost-management-billing Manage Tax Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-tax-information.md
Customers in the following countries or regions can add their Tax IDs.
|Ghana | Greece | |Guatemala | Hungary | |Iceland | Italy |
-| India <sup>1</sup> | Indonesia |
+| India ¹ | Indonesia |
|Ireland | Isle of Man | |Kenya | Korea | | Latvia | Liechtenstein |
Customers in the following countries or regions can add their Tax IDs.
> [!NOTE] > If you don't see the Tax IDs section, Tax IDs are not yet collected for your region. Or, updating Tax IDs in the Azure portal isn't supported for your account.
-<sup>1</sup> Follow the instructions in the next section to add your Goods and Services Taxpayer Identification Number (GSTIN).
+¹ Follow the instructions in the next section to add your Goods and Services Taxpayer Identification Number (GSTIN).
## Add your GSTIN for billing accounts in India
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
As the user that approved the transfer:
You can request billing ownership of products for the subscription types listed below. -- [Action pack](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>-- [Azure in Open Licensing](https://azure.microsoft.com/offers/ms-azr-0111p/)<sup>1</sup>-- [Azure Pass Sponsorship](https://azure.microsoft.com/offers/azure-pass/)<sup>1</sup>
+- [Action pack](https://azure.microsoft.com/offers/ms-azr-0025p/)¹
+- [Azure in Open Licensing](https://azure.microsoft.com/offers/ms-azr-0111p/)¹
+- [Azure Pass Sponsorship](https://azure.microsoft.com/offers/azure-pass/)¹
- [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)-- [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/)<sup>1</sup>
+- [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/)¹
- [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0003p/) - [Pay-As-You-Go Dev/Test](https://azure.microsoft.com/offers/ms-azr-0023p/)-- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)<sup>2</sup>-- [Microsoft Azure Sponsored Offer](https://azure.microsoft.com/offers/ms-azr-0036p/)<sup>1</sup>
+- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)²
+- [Microsoft Azure Sponsored Offer](https://azure.microsoft.com/offers/ms-azr-0036p/)¹
- [Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) - Subscription and reservation transfer are supported for direct EA customers. A direct enterprise agreement is one that's signed between Microsoft and an enterprise agreement customer. - Only subscription transfers are supported for indirect EA customers. Reservation transfers aren't supported. An indirect EA agreement is one where a customer signs an agreement with a Microsoft partner. - [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/)-- [Microsoft Cloud Partner Program](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>-- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)<sup>1</sup>-- [Visual Studio Enterprise (BizSpark) subscribers](https://azure.microsoft.com/offers/ms-azr-0064p/)<sup>1</sup>-- [Visual Studio Enterprise (Cloud Partner Program) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)<sup>1</sup>-- [Visual Studio Enterprise subscribers](https://azure.microsoft.com/offers/ms-azr-0063p/)<sup>1</sup>-- [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)<sup>1</sup>-- [Visual Studio Test Professional subscribers](https://azure.microsoft.com/offers/ms-azr-0060p/)<sup>1</sup>
+- [Microsoft Cloud Partner Program](https://azure.microsoft.com/offers/ms-azr-0025p/)¹
+- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)¹
+- [Visual Studio Enterprise (BizSpark) subscribers](https://azure.microsoft.com/offers/ms-azr-0064p/)¹
+- [Visual Studio Enterprise (Cloud Partner Program) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)¹
+- [Visual Studio Enterprise subscribers](https://azure.microsoft.com/offers/ms-azr-0063p/)¹
+- [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)¹
+- [Visual Studio Test Professional subscribers](https://azure.microsoft.com/offers/ms-azr-0060p/)¹
-<sup>1</sup> Any credit available on the subscription won't be available in the new account after the transfer.
+¹ Any credit available on the subscription won't be available in the new account after the transfer.
-<sup>2</sup> Only supported for products in accounts that are created during sign-up on the Azure website.
+² Only supported for products in accounts that are created during sign-up on the Azure website.
## Check for access [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mpa-request-ownership.md
On the Review request tab, the following status messages might be displayed.
You can request billing ownership of the following subscription types.
-* [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)<sup>1</sup>
+* [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)¹
* [Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/)
-* Azure Plan<sup>1</sup> [(Microsoft Customer Agreement in Enterprise Motion)](https://www.microsoft.com/Licensing/how-to-buy/microsoft-customer-agreement)
+* Azure Plan¹ [(Microsoft Customer Agreement in Enterprise Motion)](https://www.microsoft.com/Licensing/how-to-buy/microsoft-customer-agreement)
-<sup>1</sup> You must convert an EA Dev/Test subscription to an EA Enterprise offer using a support ticket and respectively, an Azure Plan Dev/Test offer to Azure plan. A Dev/Test subscription will be billed at a pay-as-you-go rate after conversion. There's no discount currently available through the Dev/Test offer to CSP partners.
+¹ You must convert an EA Dev/Test subscription to an EA Enterprise offer through a support ticket, and likewise an Azure Plan Dev/Test offer to an Azure plan. A Dev/Test subscription will be billed at a pay-as-you-go rate after conversion. There's no discount currently available through the Dev/Test offer to CSP partners.
## Additional information
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
tags: billing
Previously updated : 07/18/2022 Last updated : 11/04/2022
If you're not automatically approved, you can submit a request to Azure support
- If existing, current payment method: - Order ID (requesting for invoice option): - Account Admins Live ID (or Org ID) (should be company domain):
- - Commerce Account ID:
+ - Commerce Account ID¹:
- Company Name (as registered under VAT or Government Website): - Company Address (as registered under VAT or Government Website): - Company Website:
If you're not automatically approved, you can submit a request to Azure support
- Add your billing contact information in the Azure portal before the credit limit can be approved. The contact details should be related to the company's Accounts Payable or Finance department. 1. Verify your contact information and preferred contact method, and then select **Create**.
+¹ If you don't know your Commerce Account ID, it's the GUID ID shown on the Properties page for your billing account. To view your Commerce Account ID in the Azure portal, navigate to **Cost Management** > select a billing scope > in the left menu, select **Properties**. On the billing scope Properties page, notice the GUID ID value. It's your Commerce Account ID.
+ If we need to run a credit check because of the amount of credit that you need, we'll send you a credit check application. We might ask you to provide your company's audited financial statements. If no financial information is provided or if the information isn't strong enough to support the amount of credit limit required, we might ask for a security deposit or a standby letter of credit to approve your credit check request. ## Switch to pay by check or wire transfer after approval
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
To help manage your organization's usage and spend, Azure customers with an Enterprise Agreement can assign six distinct administrative roles: - Enterprise Administrator-- Enterprise Administrator (read only)<sup>1</sup>
+- Enterprise Administrator (read only)¹
- EA purchaser - Department Administrator - Department Administrator (read only)-- Account Owner<sup>2</sup>
+- Account Owner²
-<sup>1</sup> The Bill-To contact of the EA contract will be under this role.
+¹ The Bill-To contact of the EA contract will be under this role.
-<sup>2</sup> The Bill-To contact cannot be added or changed in the Azure EA Portal and will be added to the EA enrollment based on the user who is set up as the Bill-To contact on agreement level. To change the Bill-To contact, a request needs to be made through a partner/software advisor to the Regional Operations Center (ROC).
+² The Bill-To contact cannot be added or changed in the Azure EA Portal and will be added to the EA enrollment based on the user who is set up as the Bill-To contact on agreement level. To change the Bill-To contact, a request needs to be made through a partner/software advisor to the Regional Operations Center (ROC).
The first enrollment administrator that is set up during the enrollment provisioning determines the authentication type of the Bill-to contact account. When the bill-to contact gets added to the EA Portal as a read-only administrator, they are given Microsoft account authentication.
The following sections describe the limitations and capabilities of each role.
| EA purchaser assigned to an SPN | Unlimited | |Department Administrator|Unlimited| |Department Administrator (read only)|Unlimited|
-|Account Owner|1 per account<sup>3</sup>|
+|Account Owner|1 per account³|
-<sup>3</sup> Each account requires a unique Microsoft account, or work or school account.
+³ Each account requires a unique Microsoft account, or work or school account.
## Organization structure and permissions by role
The following sections describe the limitations and capabilities of each role.
||||||||| |View Enterprise Administrators|✔|✔| ✔|✘|✘|✘|✔| |Add or remove Enterprise Administrators|✔|✘|✘|✘|✘|✘|✘|
-|View Notification Contacts<sup>4</sup> |✔|✔|✔|✘|✘|✘|✔|
-|Add or remove Notification Contacts<sup>4</sup> |✔|✘|✘|✘|✘|✘|✘|
+|View Notification Contacts⁴ |✔|✔|✔|✘|✘|✘|✔|
+|Add or remove Notification Contacts⁴ |✔|✘|✘|✘|✘|✘|✘|
|Create and manage Departments |✔|✘|✘|✘|✘|✘|✘| |View Department Administrators|✔|✔|✔|✔|✔|✘|✔| |Add or remove Department Administrators|✔|✘|✘|✔|✘|✘|✘|
-|View Accounts in the enrollment |✔|✔|✔|✔<sup>5</sup>|✔<sup>5</sup>|✘|✔|
-|Add Accounts to the enrollment and change Account Owner|✔|✘|✘|✔<sup>5</sup>|✘|✘|✘|
+|View Accounts in the enrollment |✔|✔|✔|✔⁵|✔⁵|✘|✔|
+|Add Accounts to the enrollment and change Account Owner|✔|✘|✘|✔⁵|✘|✘|✘|
|Purchase reservations|✔|✘|✔|✘|✘|✘|✘| |Create and manage subscriptions and subscription permissions|✘|✘|✘|✘|✘|✔|✘| -- <sup>4</sup> Notification contacts are sent email communications about the Azure Enterprise Agreement.- <sup>5</sup> Task is limited to accounts in your department.
+- ⁴ Notification contacts are sent email communications about the Azure Enterprise Agreement.
+- ⁵ Task is limited to accounts in your department.
## Add a new enterprise administrator
Direct EA admins can add department admins in the Azure portal. For more informa
|View department spending quotas|✔|✔|✔|✘|✘|✘|✔| |Set department spending quotas|✔|✘|✘|✘|✘|✘|✘| |View organization's EA price sheet|✔|✔|✔|✘|✘|✘|✔|
-|View usage and cost details|✔|✔|✔|✔<sup>6</sup>|✔<sup>6</sup>|✔<sup>7</sup>|✔|
+|View usage and cost details|✔|✔|✔|✔⁶|✔⁶|✔⁷|✔|
|Manage resources in Azure portal|✘|✘|✘|✘|✘|✔|✘| -- <sup>6</sup> Requires that the Enterprise Administrator enable **DA view charges** policy in the Enterprise portal. The Department Administrator can then see cost details for the department.- <sup>7</sup> Requires that the Enterprise Administrator enable **AO view charges** policy in the Enterprise portal. The Account Owner can then see cost details for the account.
+- ⁶ Requires that the Enterprise Administrator enable **DA view charges** policy in the Enterprise portal. The Department Administrator can then see cost details for the department.
+- ⁷ Requires that the Enterprise Administrator enable **AO view charges** policy in the Enterprise portal. The Account Owner can then see cost details for the account.
## See pricing for different user roles
cost-management-billing Microsoft Customer Agreement Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md
When you move from a pay-as-you-go or an enterprise agreement to a Microsoft Cus
| MCA purchase method | Previous payment method - Credit card | Previous payment method - Invoice | New payment method under MCA - Credit card | New payment method under MCA - Invoice | | | | | | |
-| Through a Microsoft representative | | ✔ | ✔ <sup>4</sup> | ✔ <sup>2</sup> |
-| Azure website | ✔ | ✔ <sup>1</sup> | ✔ | ✔ <sup>3</sup> |
+| Through a Microsoft representative | | ✔ | ✔ ⁴ | ✔ ² |
+| Azure website | ✔ | ✔ ¹ | ✔ | ✔ ³ |
-<sup>1</sup> By request.
+¹ By request.
-<sup>2</sup> You continue to pay by invoice/wire transfer under the MCA but will need to send your payments to a different bank account. For information about where to send your payment, see [Pay your bill](../understand/pay-bill.md#wire-bank-details) after you select your country in the list.
+² You continue to pay by invoice/wire transfer under the MCA but will need to send your payments to a different bank account. For information about where to send your payment, see [Pay your bill](../understand/pay-bill.md#wire-bank-details) after you select your country in the list.
-<sup>3</sup> For more information, see [Pay for your Azure subscription by invoice](../manage/pay-by-invoice.md).
+³ For more information, see [Pay for your Azure subscription by invoice](../manage/pay-by-invoice.md).
-<sup>4</sup> For more information, see [Pay your bill for Microsoft Azure](../understand/pay-bill.md#pay-now-in-the-azure-portal).
+⁴ For more information, see [Pay your bill for Microsoft Azure](../understand/pay-bill.md#pay-now-in-the-azure-portal).
## Complete outstanding payments
cost-management-billing Overview Azure Hybrid Benefit Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md
The following table summarizes how many NCLs you need to fully discount the SQL
| | | | | SQL Managed Instance or Instance pool | Business Critical | 4 per vCore | | SQL Managed Instance or Instance pool | General Purpose | 1 per vCore |
-| SQL Database or Elastic pool<sup>1</sup> | Business Critical | 4 per vCore |
-| SQL Database or Elastic pool<sup>1</sup> | General Purpose | 1 per vCore |
-| SQL Database or Elastic pool<sup>1</sup> | Hyperscale | 1 per vCore |
+| SQL Database or Elastic pool¹ | Business Critical | 4 per vCore |
+| SQL Database or Elastic pool¹ | General Purpose | 1 per vCore |
+| SQL Database or Elastic pool¹ | Hyperscale | 1 per vCore |
| Azure Data Factory SQL Server Integration Services | Enterprise | 4 per vCore | | Azure Data Factory SQL Server Integration Services | Standard | 1 per vCore |
-| SQL Server Virtual Machines<sup>2</sup> | Enterprise | 4 per vCPU |
-| SQL Server Virtual Machines<sup>2</sup> | Standard | 1 per vCPU |
+| SQL Server Virtual Machines² | Enterprise | 4 per vCPU |
+| SQL Server Virtual Machines² | Standard | 1 per vCPU |
-<sup>1</sup> *Azure Hybrid Benefit isn't available in the serverless compute tier of Azure SQL Database.*
+¹ *Azure Hybrid Benefit isn't available in the serverless compute tier of Azure SQL Database.*
-<sup>2</sup> *Subject to a minimum of four vCore licenses per Virtual Machine.*
+² *Subject to a minimum of four vCore licenses per Virtual Machine.*
## Ongoing scope-level management
data-factory Concepts Pipeline Execution Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-pipeline-execution-triggers.md
For a complete sample, see [Quickstart: Create a data factory by using the REST
The following sample command shows you how to manually run your pipeline by using Azure PowerShell: ```powershell
-Invoke-AzDataFactoryV2Pipeline -DataFactory $df -PipelineName "Adfv2QuickStartPipeline" -ParameterFile .\PipelineParameters.json
+Invoke-AzDataFactoryV2Pipeline -DataFactory $df -PipelineName "Adfv2QuickStartPipeline" -ParameterFile .\PipelineParameters.json -ResourceGroupName "myResourceGroup"
``` You pass parameters in the body of the request payload. In the .NET SDK, Azure PowerShell, and the Python SDK, you pass values in a dictionary that's passed as an argument to the call:
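For comparison, a minimal sketch of the same pattern with the Data Factory .NET management SDK; the resource names and parameter names are illustrative assumptions, and `client` is an already-authenticated `DataFactoryManagementClient`:

```csharp
using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.Management.DataFactory.Models;

// Pipeline parameters are passed as a dictionary of name/value pairs.
var parameters = new Dictionary<string, object>
{
    { "inputPath", "adftutorial/input" },
    { "outputPath", "adftutorial/output" }
};

CreateRunResponse runResponse = await client.Pipelines.CreateRunAsync(
    "myResourceGroup", "myDataFactory", "Adfv2QuickStartPipeline",
    parameters: parameters);
Console.WriteLine($"Pipeline run ID: {runResponse.RunId}");
```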
defender-for-cloud Enable Pull Request Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-pull-request-annotations.md
Title: Enable pull request annotations in GitHub or in Azure DevOps
description: Add pull request annotations in GitHub or in Azure DevOps. By adding pull request annotations, your SecOps and developer teams can stay on the same page when it comes to mitigating issues. Previously updated : 10/30/2022 Last updated : 11/07/2022 # Enable pull request annotations in GitHub and Azure DevOps
Before you can enable pull request annotations, your main branch must have enabl
1. Locate the Build Validation section.
-1. Ensure the CI Build is toggled to **On**.
+1. Ensure the build validation for your repository is toggled to **On**.
- :::image type="content" source="media/tutorial-enable-pr-annotations/build-validation.png" alt-text="Screenshot that shows where the CI Build toggle is located.":::
+ :::image type="content" source="media/tutorial-enable-pr-annotations/build-validation.png" alt-text="Screenshot that shows where the CI Build toggle is located." lightbox="media/tutorial-enable-pr-annotations/build-validation.png":::
1. Select **Save**.
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Title: OT monitoring alert types and descriptions description: Learn more about the alerts that are triggered for traffic on OT networks. Previously updated : 11/01/2022 Last updated : 11/03/2022
This article provides information on the alert types, descriptions, and severities that may be generated by the Defender for IoT engines. Use this information to help map alerts into playbooks, define Forwarding rules, Exclusion rules, and custom alerts, and configure the appropriate rules within a SIEM. Alerts appear in the Alerts window, which allows you to manage the alert event.
-### Alert news
+## Alerts disabled by default
-New alerts may be added and existing alerts may be updated or disabled. Certain disabled alerts can be re-enabled from the **Support** page of the sensor console. Alerts that can be re-enabled are marked with an asterisk (*) in the tables below.
+Several alerts are disabled by default, as indicated by asterisks (*) in the tables below. Sensor administrator users can enable or disable alerts from the **Support** page on a specific sensor.
-You may have configured newly disabled alerts in your Forwarding rules. If so, you may need to update related Defender for IoT Exclusion rules, or update SIEM rules and playbooks where relevant.
+If you disable alerts that are referenced in other places, such as alert forwarding rules, make sure to update those references as needed.
See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-microsoft-defender-for-iot) for detailed information about changes made to alerts.
Each alert has one of the following categories:
Policy engine alerts describe detected deviations from learned baseline behavior.
-| Title | Description | Severity | Category |
-|--|--|--|--|
-| **Beckhoff Software Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Database Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 sign-in failures in 5 minutes | Major | Authentication |
-| **Emerson ROC Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **External address within the network communicated with Internet** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
-| **Field Device Discovered Unexpectedly** | A new source device was detected on the network but hasn't been authorized. | Major | Discovery |
-| **Firmware Change Detected** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Foxboro I/A Unauthorized Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **FTP Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication |
-| **Function Code Raised Unauthorized Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures |
-| **GOOSE Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
-| **Honeywell Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| * **Illegal HTTP Communication** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| **Internet Access Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access |
-| **Mitsubishi Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Modbus Address Range Violation** | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior |
-| **Modbus Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **New Activity Detected - CIP Class** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - CIP Class Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - CIP PCCC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - CIP Symbol** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - EtherNet/IP I/O Connection** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - EtherNet/IP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - GSM Message Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - LonTalk Command Codes** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Port Discovery** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery |
-| **New Activity Detected - LonTalk Network Variable** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Ovation Data Request** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Read/Write Command (AMS Index Group)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
-| **New Activity Detected - Read/Write Command (AMS Index Offset)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
-| **New Activity Detected - Unauthorized DeltaV Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Unauthorized DeltaV ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Unauthorized RPC Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using AMS Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using Siemens SICAM Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using Suitelink Protocol command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using Suitelink Protocol sessions** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Activity Detected - Using Yokogawa VNetIP Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **New Asset Detected** | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major | Discovery |
-| **New LLDP Device Configuration** | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes |
-| **Omron FINS Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **S7 Plus PLC Firmware Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| **Sampled Values Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
-| **Suspicion of Illegal Integrity Scan** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan |
-| **Toshiba Computer Link Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior |
-| **Unauthorized ABB Totalflow File Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized ABB Totalflow Register Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Access to Siemens S7 Data Block** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior |
-| **Unauthorized Access to Siemens S7 Plus Object** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Access to Wonderware Tag** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior |
-| **Unauthorized BACNet Object Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized BACNet Route** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Database Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
-| **Unauthorized Database Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| **Unauthorized Emerson ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized GE SRTP File Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized GE SRTP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized GE SRTP System Memory Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized HTTP Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| * **Unauthorized HTTP SOAP Action** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| * **Unauthorized HTTP User Agent** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior |
-| **Unauthorized Internet Connectivity Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
-| **Unauthorized Mitsubishi MELSEC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized MMS Program Access** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming |
-| **Unauthorized MMS Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Multicast/Broadcast Connection** | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior |
-| **Unauthorized Name Query** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| **Unauthorized OPC UA Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized OPC UA Request/Response** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Operation was detected by a User Defined Rule** | Traffic was detected between two devices. This activity is unauthorized, based on a Custom Alert Rule defined by a user. | Major | Custom Alerts |
-| **Unauthorized PLC Configuration Read** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes |
-| **Unauthorized PLC Configuration Write** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes |
-| **Unauthorized PLC Program Upload** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming |
-| **Unauthorized PLC Programming** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming |
-| **Unauthorized Profinet Frame Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized SAIA S-Bus Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Siemens S7 Execution of Control Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Siemens S7 Execution of User Defined Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Siemens S7 Plus Block Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized Siemens S7 Plus Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unauthorized SMB Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
-| **Unauthorized SNMP Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| **Unauthorized SSH Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access |
-| **Unauthorized Windows Process** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
-| **Unauthorized Windows Service** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
-| **Unauthorized Operation was detected by a User Defined Rule** | New traffic parameters were detected. This parameter combination violates a user defined rule | Major |
-| **Unpermitted Modbus Schneider Electric Extension** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unpermitted Usage of ASDU Types** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unpermitted Usage of DNP3 Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| **Unpermitted Usage of Internal Indication (IIN)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't authorized as learned traffic on your network. | Major | Illegal Commands |
-| **Unpermitted Usage of Modbus Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **Beckhoff Software Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Database Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 sign-in failures in 5 minutes | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Collection <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0811: Data from Information Repositories|
+| **Emerson ROC Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **External address within the network communicated with Internet** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
+| **Field Device Discovered Unexpectedly** | A new source device was detected on the network but hasn't been authorized. | Major | Discovery | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Firmware Change Detected** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Foxboro I/A Unauthorized Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **FTP Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0869: Standard Application Layer Protocol |
+| **Function Code Raised Unauthorized Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0835: Manipulate I/O Image |
+| **GOOSE Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Honeywell Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Illegal HTTP Communication [*](#alerts-disabled-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0846: Remote System Discovery |
+| **Internet Access Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
+| **Mitsubishi Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Modbus Address Range Violation** | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Modbus Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **New Activity Detected - CIP Class** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0888: Remote System Information Discovery |
+| **New Activity Detected - CIP Class Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - CIP PCCC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - CIP Symbol** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - EtherNet/IP I/O Connection** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0846: Remote System Discovery <br> - T0835: Manipulate I/O Image |
+| **New Activity Detected - EtherNet/IP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - GSM Message Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **New Activity Detected - LonTalk Command Codes** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Impair Process Control <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0855: Unauthorized Command Message |
+| **New Port Discovery** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **New Activity Detected - LonTalk Network Variable** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **New Activity Detected - Ovation Data Request** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Discovery <br><br> **Techniques:** <br> - T0801: Monitor Process State <br> - T0888: Remote System Information Discovery |
+| **New Activity Detected - Read/Write Command (AMS Index Group)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Read/Write Command (AMS Index Offset)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Unauthorized DeltaV Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **New Activity Detected - Unauthorized DeltaV ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **New Activity Detected - Unauthorized RPC Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **New Activity Detected - Using AMS Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter <br> - T0821: Modify Controller Tasking |
+| **New Activity Detected - Using Siemens SICAM Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Using Suitelink Protocol command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Using Suitelink Protocol sessions** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - Using Yokogawa VNetIP Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **New Asset Detected** | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major | Discovery | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **New LLDP Device Configuration** | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Omron FINS Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **S7 Plus PLC Firmware Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Sampled Values Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Suspicion of Illegal Integrity Scan** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Toshiba Computer Link Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized ABB Totalflow File Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized ABB Totalflow Register Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Access to Siemens S7 Data Block** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Initial Access <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0811: Data from Information Repositories |
+| **Unauthorized Access to Siemens S7 Plus Object** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking <br> - T0809: Data Destruction |
+| **Unauthorized Access to Wonderware Tag** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Impair Process Control <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0855: Unauthorized Command Message |
+| **Unauthorized BACNet Object Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized BACNet Route** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Database Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0859: Valid Accounts <br> - T0811: Data from Information Repositories |
+| **Unauthorized Database Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Initial Access <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0811: Data from Information Repositories |
+| **Unauthorized Emerson ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized GE SRTP File Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0801: Monitor Process State <br> - T0859: Valid Accounts |
+| **Unauthorized GE SRTP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized GE SRTP System Memory Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0846: Remote System Discovery <br> - T0855: Unauthorized Command Message |
+| **Unauthorized HTTP Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0822: External Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized HTTP SOAP Action [*](#alerts-disabled-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br> - Execution <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0871: Execution through API |
+| **Unauthorized HTTP User Agent [*](#alerts-disabled-by-default)** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized Internet Connectivity Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
+| **Unauthorized Mitsubishi MELSEC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized MMS Program Access** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized MMS Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Multicast/Broadcast Connection** | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unauthorized Name Query** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized OPC UA Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized OPC UA Request/Response** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized Operation was detected by a User Defined Rule** | Traffic was detected between two devices. This activity is unauthorized, based on a Custom Alert Rule defined by a user. | Major | Custom Alerts | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unauthorized PLC Configuration Read** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0801: Monitor Process State |
+| **Unauthorized PLC Configuration Write** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0831: Manipulation of Control <br> - T0889: Modify Program |
+| **Unauthorized PLC Program Upload** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0845: Program Upload |
+| **Unauthorized PLC Programming** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Lateral Movement <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0889: Modify Program <br> - T0843: Program Download |
+| **Unauthorized Profinet Frame Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized SAIA S-Bus Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Unauthorized Siemens S7 Execution of Control Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0809: Data Destruction |
+| **Unauthorized Siemens S7 Execution of User Defined Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0836: Modify Parameter <br> - T0863: User Execution |
+| **Unauthorized Siemens S7 Plus Block Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br> - Execution <br><br> **Techniques:** <br> - T0803: Block Command Message <br> - T0889: Modify Program <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Siemens S7 Plus Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0863: User Execution |
+| **Unauthorized SMB Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0886: Remote Services <br> - T0859: Valid Accounts |
+| **Unauthorized SNMP Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Command And Control <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0885: Commonly Used Port |
+| **Unauthorized SSH Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0886: Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized Windows Process** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Execution <br> - Privilege Escalation <br> - Command And Control <br><br> **Techniques:** <br> - T0841: Hooking <br> - T0885: Commonly Used Port |
+| **Unauthorized Windows Service** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Unauthorized Operation was detected by a User Defined Rule** | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Major | | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unpermitted Modbus Schneider Electric Extension** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Unpermitted Usage of ASDU Types** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Unpermitted Usage of DNP3 Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unpermitted Usage of Internal Indication (IIN)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unpermitted Usage of Modbus Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
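
All of the policy-engine alerts above share one detection model: a learning phase records which traffic parameter combinations are observed on the network, and once learning ends, any combination outside that recorded set is treated as unauthorized. The sketch below is only an illustration of that allowlist model; the class and field names are hypothetical, not Defender for IoT's internal schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrafficKey:
    """One parameter combination; field names are illustrative only."""
    source: str
    destination: str
    protocol: str      # for example "S7", "Modbus", "DNP3"
    operation: str     # for example "Read", "Stop PLC"

class LearnedTrafficBaseline:
    def __init__(self) -> None:
        self._allowed: set[TrafficKey] = set()
        self.learning = True

    def observe(self, key: TrafficKey) -> bool:
        """Record or check one combination; True means raise an alert."""
        if self.learning:
            self._allowed.add(key)        # learning mode: record, never alert
            return False
        return key not in self._allowed  # unauthorized combination

baseline = LearnedTrafficBaseline()
baseline.observe(TrafficKey("10.0.0.5", "10.0.0.9", "S7", "Read"))
baseline.learning = False                 # freeze the baseline
assert not baseline.observe(TrafficKey("10.0.0.5", "10.0.0.9", "S7", "Read"))
assert baseline.observe(TrafficKey("10.0.0.5", "10.0.0.9", "S7", "Stop PLC"))
```

The key design point is that the baseline alerts on the *combination* of parameters, which is why a known source talking to a known destination can still trigger an alert when it issues a new operation.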
## Anomaly engine alerts

Anomaly engine alerts describe detected anomalies in network activity. Most pair a behavior with an explicit threshold over a time window; a sketch of that threshold model follows the table below.
-| Title | Description | Severity | Category |
-|--|--|--|--|
-| **Abnormal Exception Pattern in Slave** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior |
-| * **Abnormal HTTP Header Length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
-| * **Abnormal Number of Parameters in HTTP Header** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
-| **Abnormal Periodic Behavior In Communication Channel** | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior |
-| **Abnormal Termination of Applications** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior |
-| **Abnormal Traffic Bandwidth** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
-| **Abnormal Traffic Bandwidth Between Devices** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
-| **Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan |
-| **ARP Address Scan Detected** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan |
-| **ARP Spoofing** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
-| **Excessive Login Attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 sign-in attempts in 1 minute | Critical | Authentication |
-| **Excessive Number of Sessions** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior |
-| **Excessive Restart Rate of an Outstation** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/ Stop Commands |
-| **Excessive SMB login attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 sign-in attempts in 10 minutes | Critical | Authentication |
-| **ICMP Flooding** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
-|* **Illegal HTTP Header Content** | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior |
-| **Inactive Communication Channel** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive |
-| **Long Duration Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan |
-| **Password Guessing Attempt Detected** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication |
-| **PLC Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan |
-| **Port Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan |
-| **Unexpected message length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | Critical | Abnormal Communication Behavior |
-| **Unexpected Traffic for Standard Port** | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior |
+| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **Abnormal Exception Pattern in Slave** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
+| **Abnormal HTTP Header Length [*](#alerts-disabled-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Abnormal Number of Parameters in HTTP Header [*](#alerts-disabled-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Abnormal Periodic Behavior In Communication Channel** | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Abnormal Termination of Applications** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior | **Tactics:** <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0889: Modify Program <br> - T0831: Manipulation of Control |
+| **Abnormal Traffic Bandwidth** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Abnormal Traffic Bandwidth Between Devices** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **ARP Address Scan Detected** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as a valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
+| **ARP Spoofing** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0830: Man in the Middle |
+| **Excessive Login Attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 sign-in attempts in 1 minute | Critical | Authentication | **Tactics:** <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
+| **Excessive Number of Sessions** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior | **Tactics:** <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
+| **Excessive Restart Rate of an Outstation** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O |
+| **Excessive SMB login attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 sign-in attempts in 10 minutes | Critical | Authentication | **Tactics:** <br> - Persistence <br> - Execution <br> - Lateral Movement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0853: Scripting <br> - T0859: Valid Accounts |
+| **ICMP Flooding** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
+| **Illegal HTTP Header Content [*](#alerts-disabled-by-default)** | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Inactive Communication Channel** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of the installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **Long Duration Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Password Guessing Attempt Detected** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
+| **PLC Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Port Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unexpected message length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | Critical | Abnormal Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Unexpected Traffic for Standard Port** | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Command And Control <br> - Discovery <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0842: Network Sniffing |
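
A natural way to read a threshold such as "20 sign-in attempts in 1 minute" is as a sliding-window counter. The following is a minimal sketch of that reading under that assumption, not the anomaly engine's actual implementation:

```python
from collections import deque

class SlidingWindowThreshold:
    """Alert when at least `limit` events fall within `window` seconds."""

    def __init__(self, limit: int, window: float) -> None:
        self.limit = limit
        self.window = window
        self._events: deque[float] = deque()  # event timestamps, oldest first

    def record(self, now: float) -> bool:
        """Record one event at monotonic time `now`; True means alert."""
        self._events.append(now)
        # Drop events that have aged out of the window.
        while self._events and now - self._events[0] > self.window:
            self._events.popleft()
        return len(self._events) >= self.limit

# An "Excessive Login Attempts"-style rule: 20 sign-in attempts in 1 minute.
rule = SlidingWindowThreshold(limit=20, window=60.0)
alerts = [rule.record(now=float(i)) for i in range(25)]  # one event per second
assert alerts.index(True) == 19  # threshold first crossed on the 20th event
```

Because old timestamps are evicted as new events arrive, the rule alerts on any 60-second span containing 20 events, not only on fixed clock minutes.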
## Protocol violation engine alerts

Protocol engine alerts describe detected deviations in packet structure or field values compared to the protocol specification. A sketch of one such specification check follows the table below.
-| Title | Description | Severity | Category |
-|--|--|--|--|
-| **Excessive Malformed Packets In a Single Session** | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands |
-| **Firmware Update** | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change |
-| **Function Code Not Supported by Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Illegal BACNet message** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Illegal Connection Attempt on Port 0** | A source device attempted to connect to destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands |
-| **Illegal DNP3 Operation** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Illegal MODBUS Operation (Exception Raised by Master)** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Illegal MODBUS Operation (Function Code Zero)** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Illegal Protocol Version** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Incorrect Parameter Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Initiation of an Obsolete Function Code (Initialize Data)** | The source device initiated an invalid request. | Minor | Illegal Commands |
-| **Initiation of an Obsolete Function Code (Save Config)** | The source device initiated an invalid request. | Minor | Illegal Commands |
-| **Master Requested an Application Layer Confirmation** | The source device initiated an invalid request. | Warning | Illegal Commands |
-| **Modbus Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands |
-| **Slave Device Received Illegal ASDU Type** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Command Cause of Transmission** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Common Address** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Data Address Parameter** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Data Value Parameter** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Function Code** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Slave Device Received Illegal Information Object Address** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Unknown Object Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
-| **Usage of a Reserved Function Code** | The source device initiated an invalid request. | Major | Illegal Commands |
-| **Usage of Improper Formatting by Outstation** | The source device initiated an invalid request. | Warning | Illegal Commands |
-| **Usage of Reserved Status Flags (IIN)** | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands |
+| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **Excessive Malformed Packets In a Single Session** | An abnormal number of malformed packets were sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
+| **Firmware Update** | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Function Code Not Supported by Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Illegal BACNet message** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Illegal Connection Attempt on Port 0** | A source device attempted to connect to a destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Illegal DNP3 Operation** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal MODBUS Operation (Exception Raised by Master)** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal MODBUS Operation (Function Code Zero)** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal Protocol Version** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0886: Remote Services <br> - T0836: Modify Parameter |
+| **Incorrect Parameter Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Initiation of an Obsolete Function Code (Initialize Data)** | The source device initiated an invalid request. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Initiation of an Obsolete Function Code (Save Config)** | The source device initiated an invalid request. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Master Requested an Application Layer Confirmation** | The source device initiated an invalid request. | Warning | Illegal Commands | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Modbus Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Slave Device Received Illegal ASDU Type** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Command Cause of Transmission** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Common Address** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Data Address Parameter** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Data Value Parameter** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Function Code** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Information Object Address** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Unknown Object Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Usage of a Reserved Function Code** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Usage of Improper Formatting by Outstation** | The source device initiated an invalid request. | Warning | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Usage of Reserved Status Flags (IIN)** | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
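
Several of the protocol-violation alerts above reduce to checking a request against the protocol specification, for example rejecting Modbus function code zero or a function code the spec reserves. The sketch below shows what such a check could look like for a Modbus/TCP frame; the header parsing is deliberately simplified, the known-code set is a subset of the published Modbus codes, and the finding strings are illustrative rather than the product's alert mapping.

```python
import struct

# Public Modbus function codes (a subset; see the Modbus spec for the full list).
KNOWN_FUNCTION_CODES = {1, 2, 3, 4, 5, 6, 7, 8, 11, 12, 15, 16, 17, 20, 21, 22, 23, 24, 43}

def check_modbus_tcp_frame(frame: bytes) -> str:
    """Return a finding for an invalid Modbus/TCP frame, or "" if it looks legal.

    Parses only the 7-byte MBAP header plus the function code; a real
    protocol engine would validate the full PDU as well.
    """
    if len(frame) < 8:
        return "malformed frame (shorter than MBAP header plus function code)"
    _tx_id, proto_id, _length, _unit = struct.unpack(">HHHB", frame[:7])
    if proto_id != 0:                # MBAP protocol identifier must be 0 for Modbus
        return "illegal protocol version"
    func = frame[7]
    if func == 0:
        return "illegal operation: function code zero"
    if func < 0x80 and func not in KNOWN_FUNCTION_CODES:  # >= 0x80 are exception responses
        return "usage of a reserved function code"
    return ""

good = struct.pack(">HHHBB", 1, 0, 2, 1, 3)  # Read Holding Registers request
bad = struct.pack(">HHHBB", 1, 0, 2, 1, 0)   # function code zero
assert check_modbus_tcp_frame(good) == ""
assert check_modbus_tcp_frame(bad) == "illegal operation: function code zero"
```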
## Malware engine alerts

Malware engine alerts describe detected malicious network activity. A signature-matching sketch for the EICAR entry follows the table below.
-| Title | Description| Severity | Category |
-|--|--|--|--|
-| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity |
-| **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
-| **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | Critical | Suspicion of Malicious Activity |
-| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity |
-| **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Flame)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity |
-| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Malicious Activity (WannaCry)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
-| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Suspicion of Remote Windows Service Management** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| **Suspicious Traffic Detected** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team | Critical | Suspicion of Malicious Activity |
-| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup
+| Title | Description| Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
+| **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
+| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware; it's used to confirm that antivirus software is installed correctly, to demonstrate what happens when a virus is found, and to check internal procedures and reactions to a detected virus. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Impact <br><br> **Techniques:** <br> - T0826: Loss of Availability <br> - T0828: Loss of Productivity and Revenue <br> - T0847: Replication Through Removable Media |
+| **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial of Service (DoS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (Flame)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information <br> - T0811: Data from Information Repositories |
+| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0846: Remote System Discovery <br> - T0814: Denial of Service |
+| **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Evasion <br><br> **Techniques:** <br> - T0849: Masquerading |
+| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0884: Connection Proxy |
+| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0818: Engineering Workstation Compromise <br> - T0866: Exploitation of Remote Services <br> - T0831: Manipulation of Control |
+| **Suspicion of Malicious Activity (WannaCry)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br> - Initial Access <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of Remote Windows Service Management** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: External Remote Services |
+| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Evasion <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0851: Rootkit |
+| **Suspicious Traffic Detected** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
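
The EICAR alert above is the easiest to reason about concretely: the EICAR test file is a fixed, public 68-byte string that antivirus tools treat as a virus by convention, so detecting it is a plain signature match. The sketch below also carries a tail buffer between payload chunks so the signature is still found when it's split across two packets, since the alert fires over any transport (TCP or UDP); it illustrates the matching idea only, not the malware engine itself.

```python
# The fixed, public 68-byte EICAR antivirus test string.
EICAR = rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def scan_stream(chunks) -> bool:
    """Scan a sequence of payload chunks for the EICAR signature.

    A tail of len(EICAR) - 1 bytes is kept between chunks, so a signature
    split across two packets is still matched.
    """
    tail = b""
    for chunk in chunks:
        data = tail + chunk
        if EICAR in data:
            return True
        tail = data[-(len(EICAR) - 1):]
    return False

assert len(EICAR) == 68
assert scan_stream([b"x" * 10 + EICAR[:30], EICAR[30:] + b"y" * 10])
assert not scan_stream([b"benign payload"])
```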
## Operational engine alerts

Operational engine alerts describe detected operational incidents or malfunctioning entities.
-| Title | Description | Severity | Category |
-|--|--|--|--|
-| **An S7 Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| **BACNet Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Bad MMS Device State** | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues |
-| **Change of Device Configuration** | A configuration change was detected on a source device. | Minor | Configuration Changes |
-| **Continuous Event Buffer Overflow at Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow |
-| **Controller Reset** | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands |
-| **Controller Stop** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| **Device Failed to Receive a Dynamic IP Address** | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident | Major | Command Failures |
-| **Device is Suspected to be Disconnected (Unresponsive)** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive |
-| **EtherNet/IP CIP Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **EtherNet/IP Encapsulation Protocol Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Event Buffer Overflow in Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
-| **Expected Backup Operation Did Not Occur** | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Major | Backup |
-| **GE SRTP Command Failure** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **GE SRTP Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| **GOOSE Control Block Requires Further Configuration** | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes |
-| **GOOSE Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
-| **Honeywell Controller Unexpected Status** | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues |
-|* **HTTP Client Error** | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior |
-| **Illegal IP Address** | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior |
-| **Master-Slave Authentication Error** | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication |
-| **MMS Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **No Traffic Detected on Sensor Interface** | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic |
-| **OPC UA Server Raised an Event That Requires User's Attention** | An OPC UA server sent an event notification to a client. This type of event requires user attention | Major | Operational Issues |
-| **OPC UA Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Outstation Restarted** | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands |
-| **Outstation Restarts Frequently** | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands |
-| **Outstation's Configuration Changed** | A configuration change was detected on a source device. | Major | Configuration Changes |
-| **Outstation's Corrupted Configuration Detected** | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes |
-| **Profinet DCP Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Profinet Device Factory Reset** | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands |
-| * **RPC Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| **Sampled Values Message Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
-| **Slave Device Unrecoverable Failure** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures |
-| **Suspicion of Hardware Problems in Outstation** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues |
-| **Suspicion of Unresponsive MODBUS Device** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive |
-| **Traffic Detected on Sensor Interface** | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic |
-
-\* The alert is disabled by default, but can be enabled again. To enable the alert, navigate to the Support page, find the alert and select **Enable**. You need administrative level permissions to access the Support page.
+| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
+|--|--|--|--|--|
+| **An S7 Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **BACNet Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Bad MMS Device State** | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Change of Device Configuration** | A configuration change was detected on a source device. | Minor | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Continuous Event Buffer Overflow at Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O <br> - T0839: Module Firmware |
+| **Controller Reset** | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **Controller Stop** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **Device Failed to Receive a Dynamic IP Address** | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident. | Major | Command Failures | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Device is Suspected to be Disconnected (Unresponsive)** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **EtherNet/IP CIP Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **EtherNet/IP Encapsulation Protocol Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0801: Monitor Process State |
+| **Event Buffer Overflow in Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0839: Module Firmware |
+| **Expected Backup Operation Did Not Occur** | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup/file transfer process. <br><br> Threshold: 100 seconds | Major | Backup | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0809: Data Destruction |
+| **GE SRTP Command Failure** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **GE SRTP Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **GOOSE Control Block Requires Further Configuration** | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0803: Block Command Message <br> - T0821: Modify Controller Tasking |
+| **GOOSE Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Honeywell Controller Unexpected Status** | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues | **Tactics:** <br> - Evasion <br> - Execution <br><br> **Techniques:** <br> - T0858: Change Operating Mode |
+| **HTTP Client Error [*](#alerts-disabled-by-default)** | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Illegal IP Address** | The system detected traffic between a source device and an invalid IP address. This may indicate a configuration error or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0836: Modify Parameter |
+| **Master-Slave Authentication Error** | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0859: Valid Accounts |
+| **MMS Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **No Traffic Detected on Sensor Interface** | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **OPC UA Server Raised an Event That Requires User's Attention** | An OPC UA server sent an event notification to a client. This type of event requires user attention. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0838: Modify Alarm Settings |
+| **OPC UA Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Outstation Restarted** | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0816: Device Restart/Shutdown |
+| **Outstation Restarts Frequently** | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0816: Device Restart/Shutdown |
+| **Outstation's Configuration Changed** | A configuration change was detected on a source device. | Major | Configuration Changes | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Outstation's Corrupted Configuration Detected** | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0809: Data Destruction |
+| **Profinet DCP Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Profinet Device Factory Reset** | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **RPC Operation Failed [*](#alerts-disabled-by-default)** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Sampled Values Message Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Slave Device Unrecoverable Failure** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Suspicion of Hardware Problems in Outstation** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0881: Service Stop |
+| **Suspicion of Unresponsive MODBUS Device** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **Traffic Detected on Sensor Interface** | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
## Next steps
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
Title: Manage individual sensors description: Learn how to manage individual sensors, including managing activation files, certificates, performing backups, and updating a standalone sensor. Previously updated : 06/02/2022 Last updated : 11/07/2022
This article describes how to manage individual sensors, such as managing activa
You can also perform some management tasks for multiple sensors simultaneously from the Azure portal or an on-premises management console. For more information, see [Next steps](#next-steps).
+## View overall sensor status
+
+When you sign in to your sensor, the first page shown is the **Overview** page.
+
+For example:
++
+The **Overview** page shows the following widgets:
+
+| Name | Description |
+|--|--|
+| **General Settings** | Displays a list of the sensor's basic configuration settings |
+| **Traffic Monitoring** | Displays a graph detailing traffic through the sensor. The graph shows the traffic rate, in Mbps, for each hour of the day of viewing. |
+| **Top 5 OT Protocols** | Displays a bar graph that details the top five most used OT protocols. The bar graph also provides the number of devices that are using each of those protocols. |
+| **Traffic By Port** | Displays a pie chart showing the types of ports in your network, with the amount of traffic detected in each type of port. |
+| **Top open alerts** | Displays a table listing any currently open alerts with high severity levels, including critical details about each alert. |
+
+Select the link in each widget to drill down for more information in your sensor.
+ ## Manage sensor activation files Your sensor was onboarded with Microsoft Defender for IoT from the Azure portal. Each sensor was onboarded as either a locally connected sensor or a cloud-connected sensor.
You'll receive an error message if the activation file couldn't be uploaded. The
Following sensor installation, a local self-signed certificate is generated and used to access the sensor web application. When logging in to the sensor for the first time, Administrator users are prompted to provide an SSL/TLS certificate.
-Sensor Administrators may be required to update certificates that were uploaded after initial login. This may happen for example if a certificate expired.
+Sensor Administrators may be required to update certificates that were uploaded after initial login. This may happen, for example, if a certificate expired.
**To update a certificate:**
If the upload fails, contact your security or IT administrator, or review the in
**To change the certificate validation setting:**
-1. Enable or disable the **Enable Certificate Validation** toggle. If the option is enabled and validation fails, communication between relevant components is halted and a validation error is presented in the console. If disabled, certificate validation is not carried out. See [About certificate validation](how-to-deploy-certificates.md#about-certificate-validation) for more information.
+1. Enable or disable the **Enable Certificate Validation** toggle. If the option is enabled and validation fails, communication between relevant components is halted, and a validation error is presented in the console. If disabled, certificate validation is not carried out. See [About certificate validation](how-to-deploy-certificates.md#about-certificate-validation) for more information.
1. Select **Save**.
-For more information about first-time certificate upload see,
+For more information about first-time certificate upload, see
[First-time sign-in and activation checklist](how-to-activate-and-set-up-your-sensor.md#first-time-sign-in-and-activation-checklist) ## Connect a sensor to the management console
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 08/08/2022 Last updated : 11/03/2022 # What's new in Microsoft Defender for IoT?
For more information, see the [Microsoft Security Development Lifecycle practice
Our alert reference article now includes the following details for each alert: -- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity or configure SIEM rules to generate incidents based on specific activities
+- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity or configure SIEM rules to generate incidents based on specific activities.
+
+- **MITRE ATT&CK for ICS tactics and techniques**, which describe the actions an adversary may take while operating within the network. Use the tactics and techniques listed for each alert to learn about the network areas that might be at risk and to collaborate more efficiently across your security and OT teams as you secure those assets.
- **Alert threshold**, for relevant alerts. Thresholds indicate the specific point at which an alert is triggered. Modify alert thresholds as needed from the sensor's **Support** page.
Defender for IoT now provides vulnerability data in the Azure portal for detecte
Access vulnerability data in the Azure portal from the following locations: -- On a device details page select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.
+- On a device details page, select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.
For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
Use the following table to understand the mapping between legacy hardware profil
|Legacy name |New name | Description | ||||
-|**Corporate** | **C5600** | A *Corporate* environment, with: <br>16 Cores<br>32 GB RAM<br>5.6 TB disk storage |
-|**Enterprise** | **E1800** | An *Enterprise* environment, with: <br>8 Cores<br>32 GB RAM<br>1.8 TB disk storage |
-|**SMB** | **L500** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>500 GB disk storage |
-|**Office** | **L100** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>100 GB disk storage |
-|**Rugged** | **L64** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>64 GB disk storage |
+|**Corporate** | **C5600** | A *Corporate* environment, with: <br>16 Cores<br>32-GB RAM<br>5.6-TB disk storage |
+|**Enterprise** | **E1800** | An *Enterprise* environment, with: <br>8 Cores<br>32-GB RAM<br>1.8-TB disk storage |
+|**SMB** | **L500** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>500-GB disk storage |
+|**Office** | **L100** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>100-GB disk storage |
+|**Rugged** | **L64** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>64-GB disk storage |
-We also now support new enterprise hardware profiles, for sensors supporting both 500 GB and 1 TB disk sizes.
+We also now support new enterprise hardware profiles, for sensors supporting both 500-GB and 1-TB disk sizes.
For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
defender-for-iot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/workbooks.md
Title: Use Azure Monitor workbooks in Microsoft Defender for IoT
+ Title: Visualize Microsoft Defender for IoT data with Azure Monitor workbooks
description: Learn how to view and create Azure Monitor workbooks for Defender for IoT data. Last updated 09/04/2022
-# Use Azure Monitor workbooks in Microsoft Defender for IoT
-
-> [!IMPORTANT]
-> The **Azure Monitor workbooks** are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Visualize Microsoft Defender for IoT data with Azure Monitor workbooks
Azure Monitor workbooks provide graphs, charts, and dashboards that visually reflect data stored in your Azure Resource Graph subscriptions and are available directly in Microsoft Defender for IoT.
deployment-environments Configure Catalog Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-catalog-item.md
Title: Configure a Catalog Item in Azure Deployment Environments
-description: This article helps you configure a Catalog Item in GitHub repo or Azure DevOps repo.
+ Title: Add and configure a catalog item
+
+description: Learn how to add and configure a catalog item in your repository to use in your Azure Deployment Environments Preview dev center projects.
+ Last updated 10/12/2022 -+
-# Configure a Catalog Item in GitHub repo or Azure DevOps repo
-In Azure Deployment Environments Preview service, you can use a [Catalog](concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of predefined [*infrastructure as code (IaC)*](/devops/deliver/what-is-infrastructure-as-code) templates called [Catalog Items](concept-environments-key-concepts.md#catalog-items). A catalog item is a combination of an *infrastructure as code (IaC)* template (for example, [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md)) and a manifest (*manifest.yml*) file.
+# Add and configure a catalog item
+
+In Azure Deployment Environments Preview, you can use a [catalog](concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of predefined [infrastructure as code (IaC)](/devops/deliver/what-is-infrastructure-as-code) templates called [*catalog items*](concept-environments-key-concepts.md#catalog-items).
+
+A catalog item is a combination of at least two files:
+
+- An [Azure Resource Manager template (ARM template)](../azure-resource-manager/templates/overview.md) in JSON file format. For example, *azuredeploy.json*.
+- A manifest YAML file (*manifest.yml*).
>[!NOTE]
-> Azure Deployment Environments Preview currently only supports Azure Resource Manager (ARM) templates.
+> Azure Deployment Environments Preview currently supports only ARM templates.
-The IaC template will contain the environment definition and the manifest file will be used to provide metadata about the template. The catalog items that you provide in the catalog will be used by your development teams to deploy environments in Azure.
+The IaC template contains the environment definition (template), and the manifest file provides metadata about the template. Your development teams use the catalog items that you provide in the catalog to deploy environments in Azure.
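+
+For reference, the following is a minimal sketch of an *azuredeploy.json* template for a catalog item. It deploys a single storage account; the parameter and resource choices here are illustrative only, not part of the sample catalog:
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+  "contentVersion": "1.0.0.0",
+  "parameters": {
+    "storageAccountName": {
+      "type": "string",
+      "metadata": { "description": "Name of the storage account to create (illustrative)." }
+    }
+  },
+  "resources": [
+    {
+      "type": "Microsoft.Storage/storageAccounts",
+      "apiVersion": "2021-09-01",
+      "name": "[parameters('storageAccountName')]",
+      "location": "[resourceGroup().location]",
+      "sku": { "name": "Standard_LRS" },
+      "kind": "StorageV2"
+    }
+  ]
+}
+```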
-We offer an example [Sample Catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can attach as-is, or you can fork and customize the catalog items. You can attach your private repo to use your own catalog items.
+We offer a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the catalog items in the sample catalog.
-After you [attach a catalog](how-to-configure-catalog.md) to your dev center, the service will scan through the specified folder path to identify folders containing an ARM template and the associated manifest file. The specified folder path should be a folder that contains sub-folders with the catalog item files.
+After you [add a catalog](how-to-configure-catalog.md) to your dev center, the service scans the specified folder path to identify folders that contain an ARM template and an associated manifest file. The specified folder path should be a folder that contains subfolders that hold the catalog item files.
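+
+For example, a catalog whose folder path points to a *templates* folder might be laid out as follows (the folder and file names are illustrative):
+
+```text
+templates/
+├── web-app/
+│   ├── azuredeploy.json
+│   └── manifest.yml
+└── function-app/
+    ├── azuredeploy.json
+    └── manifest.yml
+```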
-In this article, you'll learn how to:
+In this article, you learn how to:
-* Add a new catalog item
-* Update a catalog item
-* Delete a catalog item
+> [!div class="checklist"]
+>
+> - Add a catalog item
+> - Update a catalog item
+> - Delete a catalog item
> [!IMPORTANT]
-> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise are not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+<a name="add-a-new-catalog-item"></a>
+
+## Add a catalog item
-## Add a new catalog item
+To add a catalog item:
-Provide a new catalog item to your development team as follows:
+1. In your repository, create a subfolder in the repository folder path.
-1. Create a subfolder in the specified folder path, and then add a *ARM_template.json* and the associated *manifest.yaml* file.
- :::image type="content" source="../deployment-environments/media/configure-catalog-item/create-subfolder-in-path.png" alt-text="Screenshot of subfolder in folder path containing ARM template and manifest file.":::
+1. Add two files to the new repository subfolder:
- 1. **Add ARM template**
-
- To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates).
-
- [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md) help you define the infrastructure and configuration of your Azure solution and repeatedly deploy it in a consistent state.
-
- To learn about how to get started with ARM templates, see the following:
-
- - [Understand the structure and syntax of Azure Resource Manager Templates](../azure-resource-manager/templates/syntax.md) describes the structure of an Azure Resource Manager template and the properties that are available in the different sections of a template.
- - [Use linked templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell#use-relative-path-for-linked-templates) describes how to use linked templates with the new ARM `relativePath` property to easily modularize your templates and share core components between catalog items.
+ - An ARM template as a JSON file.
- 1. **Add manifest file**
-
- The *manifest.yaml* file contains metadata related to the ARM template.
-
- The following is a sample *manifest.yaml* file.
-
- ```
- name: WebApp
- version: 1.0.0
- summary: Azure Web App Environment
- description: Deploys an Azure Web App without a data store
- runner: ARM
- templatePath: azuredeploy.json
- ```
-
- >[!NOTE]
- > `version` is an optional field, and will later be used to support multiple versions of catalog items.
+ To implement IaC for your Azure solutions, use ARM templates. [ARM templates](../azure-resource-manager/templates/overview.md) help you define the infrastructure and configuration of your Azure solution and repeatedly deploy it in a consistent state.
-1. On the **Catalogs** page of the dev center, select the specific repo, and then select **Sync**.
+ To learn how to get started with ARM templates, see the following articles:
- :::image type="content" source="../deployment-environments/media/configure-catalog-item/sync-catalog-items.png" alt-text="Screenshot showing how to sync the catalog." :::
+ - [Understand the structure and syntax of ARM templates](../azure-resource-manager/templates/syntax.md): Describes the structure of an ARM template and the properties that are available in the different sections of a template.
+ - [Use linked templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell#use-relative-path-for-linked-templates): Describes how to use linked templates with the new ARM template `relativePath` property to easily modularize your templates and share core components between catalog items.
-1. The service scans through the repository to discover any new catalog items and makes them available to all the projects.
+ - A manifest as a YAML file.
-## Update an existing catalog item
+ The *manifest.yaml* file contains metadata related to the ARM template.
-To modify the configuration of Azure resources in an existing catalog item, directly update the associated *ARM_Template.json* file in the repository. The change is immediately reflected when you create a new environment using the specific catalog item, and when you redeploy an environment associated with that catalog item.
+ The following script is an example of the contents of a *manifest.yaml* file:
-To update any metadata related to the ARM template, modify the *manifest.yaml* and [update the catalog](how-to-configure-catalog.md).
+ ```yaml
+ name: WebApp
+ version: 1.0.0
+ summary: Azure Web App Environment
+ description: Deploys a web app in Azure without a datastore
+ runner: ARM
+ templatePath: azuredeploy.json
+ ```
+
+ > [!NOTE]
+ > The `version` field is optional. Later, the field will be used to support multiple versions of catalog items.
+
+ :::image type="content" source="../deployment-environments/media/configure-catalog-item/create-subfolder-in-path.png" alt-text="Screenshot that shows a folder path with a subfolder that contains an ARM template and a manifest file.":::
+
+1. In your dev center, go to **Catalogs**, select the repository, and then select **Sync**.
+
+ :::image type="content" source="../deployment-environments/media/configure-catalog-item/sync-catalog-items.png" alt-text="Screenshot that shows how to sync the catalog." :::
+
+The service scans the repository to find new catalog items. After you sync the repository, new catalog items are available to all projects in the dev center.
+
+## Update a catalog item
+
+To modify the configuration of Azure resources in an existing catalog item, update the associated ARM template JSON file in the repository. The change is immediately reflected when you create a new environment by using the specific catalog item. The update also is applied when you redeploy an environment that's associated with that catalog item.
+
+To update any metadata related to the ARM template, modify *manifest.yaml*, and then [update the catalog](how-to-configure-catalog.md#update-a-catalog).
## Delete a catalog item
-To delete an existing Catalog Item, delete the subfolder containing the ARM template and the associated manifest, and then [update the catalog](how-to-configure-catalog.md).
-Once you delete a catalog item, development teams will no longer be able to use the specific catalog item to deploy a new environment. You'll need to update the catalog item reference for any existing environments created using the deleted catalog item. Redeploying the environment without updating the reference will result in a deployment failure.
+To delete an existing catalog item, in the repository, delete the subfolder that contains the ARM template JSON file and the associated manifest YAML file. Then, [update the catalog](how-to-configure-catalog.md#update-a-catalog).
+
+After you delete a catalog item, development teams can no longer use the specific catalog item to deploy a new environment. Update the catalog item reference for any existing environments that were created by using the deleted catalog item. If the reference isn't updated and the environment is redeployed, the deployment fails.
## Next steps
-* [Create and configure projects](./quickstart-create-and-configure-projects.md)
-* [Create and configure environment types](quickstart-create-access-environments.md).
+- Learn how to [create and configure a project](./quickstart-create-and-configure-projects.md).
+- Learn how to [create and configure an environment type](quickstart-create-access-environments.md).
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
Title: Configure a catalog
+ Title: Add and configure a catalog
-description: Learn how to configure a catalog in your dev center to provide curated infra-as-code templates to your development teams to deploy self-serve environments.
+description: Learn how to add and configure a catalog in your Azure Deployment Environments Preview dev center to provide deployment templates for your development teams.
- + Last updated 10/12/2022
-# Configure a catalog to provide curated infra-as-code templates
+# Add and configure a catalog
+
+Learn how to add and configure a [catalog](./concept-environments-key-concepts.md#catalogs) in your Azure Deployment Environments Preview dev center. You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [*catalog items*](./concept-environments-key-concepts.md#catalog-items).
-Learn how to configure a dev center [catalog](./concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of 'infra-as-code' templates called [catalog items](./concept-environments-key-concepts.md#catalog-items). To learn about configuring catalog items, see [How to configure a catalog item](./configure-catalog-item.md).
+For more information about catalog items, see [Add and configure a catalog item](./configure-catalog-item.md).
-The catalog could be a repository hosted in [GitHub](https://github.com) or in [Azure DevOps Services](https://dev.azure.com/).
+A catalog is a repository that's hosted in [GitHub](https://github.com) or [Azure DevOps](https://dev.azure.com/).
-* To learn how to host a repository in GitHub, see [Get started with GitHub](https://docs.github.com/get-started).
-* To learn how to host a Git repository in an Azure DevOps Services project, see [Azure Repos](https://azure.microsoft.com/services/devops/repos/).
+- To learn how to host a repository in GitHub, see [Get started with GitHub](https://docs.github.com/get-started).
+- To learn how to host a Git repository in an Azure DevOps project, see [Azure Repos](https://azure.microsoft.com/services/devops/repos/).
-We offer an example [Sample Catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can attach as-is, or you can fork and customize the catalog items. You can attach your private repo to use your own catalog items.
+We offer a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the catalog items in the sample catalog.
-In this article, you'll learn how to:
+In this article, you learn how to:
-* [Add a new catalog](#add-a-new-catalog)
-* [Update a catalog](#update-a-catalog)
-* [Delete a catalog](#delete-a-catalog)
+> [!div class="checklist"]
+>
+> - Add a catalog
+> - Update a catalog
+> - Delete a catalog
-## Add a new catalog
+> [!IMPORTANT]
+> Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise are not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-To add a new catalog, you'll need to:
+## Add a catalog
+To add a catalog, you complete these tasks:
+
+- Get the clone URL for your repository.
+- Create a personal access token.
+- Store the personal access token as a key vault secret in Azure Key Vault.
+- Add your repository as a catalog.
### Get the clone URL for your repository
-**Get the clone URL of your GitHub repo**
+You can choose from two types of repositories:
+
+- A GitHub repository
+- An Azure DevOps repository
+
+#### Get the clone URL of a GitHub repository
1. Go to the home page of the GitHub repository that contains the template definitions. 1. [Get the clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-a-github-repo). 1. Copy and save the URL. You'll use it later.
-**Get the clone URL of your Azure DevOps Services Git repo**
+#### Get the clone URL of an Azure DevOps repository
1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project. 1. [Get the clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-an-azure-repos-git-repo). 1. Copy and save the URL. You'll use it later.
-### Create a personal access token and store it as a Key Vault secret
+### Create a personal access token
+
+Next, create a personal access token. Depending on the type of repository you use, create a personal access token either in GitHub or in Azure DevOps.
#### Create a personal access token in GitHub 1. Go to the home page of the GitHub repository that contains the template definitions. 1. In the upper-right corner of GitHub, select the profile image, and then select **Settings**.
-1. In the left sidebar, select **<> Developer settings**.
+1. In the left sidebar, select **Developer settings**.
1. In the left sidebar, select **Personal access tokens**. 1. Select **Generate new token**.
-1. On the **New personal access token** page, add a description for your token in the **Note** field.
-1. Select an expiration for your token from the **Expiration** dropdown.
-1. For a private repository, select the **repo** scope under **Select scopes**.
-1. Select **Generate Token**.
+1. In **New personal access token**, in **Note**, enter a description for your token.
+1. In the **Expiration** dropdown, select an expiration for your token.
+1. For a private repository, under **Select scopes**, select the **repo** scope.
+1. Select **Generate token**.
1. Save the generated token. You'll use the token later.
-#### Create a personal access token in Azure DevOps Services
+#### Create a personal access token in Azure DevOps
1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
-1. [Create a Personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
+1. Create a [personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
1. Save the generated token. You'll use the token later.
-#### Store the personal access token as a Key Vault secret
+### Store the personal access token as a key vault secret
+
+To store the personal access token you generated as a [key vault secret](../key-vault/secrets/about-secrets.md) and copy the secret identifier:
-To store the personal access token(PAT) that you generated as a [Key Vault secret](../key-vault/secrets/about-secrets.md) and copy the secret identifier:
-1. [Create a vault](../key-vault/general/quick-create-portal.md#create-a-vault)
-1. [Add](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault) the personal access token (PAT) as a secret to the Key Vault.
-1. [Open](../key-vault/secrets/quick-create-portal.md#retrieve-a-secret-from-key-vault) the secret and copy the secret identifier.
+1. Create a [key vault](../key-vault/general/quick-create-portal.md#create-a-vault).
+1. Add the personal access token as a [secret to the key vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault).
+1. Open the secret and [copy the secret identifier](../key-vault/secrets/quick-create-portal.md#retrieve-a-secret-from-key-vault).
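+
+If you prefer to script this step, the following Azure CLI sketch stores a token and retrieves its secret identifier. The vault and secret names are placeholders:
+
+```azurecli
+# Store the personal access token as a key vault secret.
+az keyvault secret set \
+    --vault-name contoso-catalog-kv \
+    --name repo-pat \
+    --value "<personal-access-token>"
+
+# Retrieve the secret identifier to paste into the catalog's Secret identifier field.
+az keyvault secret show \
+    --vault-name contoso-catalog-kv \
+    --name repo-pat \
+    --query id --output tsv
+```
+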
-### Connect your repository as a catalog
+### Add your repository as a catalog
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Go to your dev center.
-1. Ensure that the [identity](./how-to-configure-managed-identity.md) attached to the dev center has [access to the Key Vault's secret](./how-to-configure-managed-identity.md#assign-the-managed-identity-access-to-the-key-vault-secret) where the PAT is stored.
-1. Select **Catalogs** from the left pane.
-1. Select **+ Add** from the command bar.
-1. On the **Add catalog** form, enter the following details, and then select **Add**.
+1. In the [Azure portal](https://portal.azure.com/), go to your dev center.
+1. Ensure that the [identity](./how-to-configure-managed-identity.md) that's attached to the dev center has [access to the key vault secret](./how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) where your personal access token is stored.
+1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.
+1. In **Add catalog**, enter the following information, and then select **Add**:
| Field | Value | | -- | -- | | **Name** | Enter a name for the catalog. |
- | **Git clone URI** | Enter the [Git HTTPS clone URL](#get-the-clone-url-for-your-repository) for GitHub or Azure DevOps Services repo, that you copied earlier.|
- | **Branch** | Enter the repository branch you'd like to connect to.|
- | **Folder Path** | Enter the folder path relative to the clone URI that contains sub-folders with your catalog items. This folder path should be the path to the folder containing the sub-folders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.|
- | **Secret Identifier**| Enter the [secret identifier](#create-a-personal-access-token-and-store-it-as-a-key-vault-secret) which contains your Personal Access Token(PAT) for the repository.|
+ | **Git clone URI** | Enter or paste the [clone URL](#get-the-clone-url-for-your-repository) for either your GitHub repository or your Azure DevOps repository.|
+ | **Branch** | Enter the repository branch to connect to.|
+ | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.|
+ | **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.|
+
+ :::image type="content" source="media/how-to-configure-catalog/catalog-item-add.png" alt-text="Screenshot that shows how to add a catalog to a dev center.":::
-1. Verify that your catalog is listed on the **Catalogs** page. If the connection is successful, the **Status** will show as **Connected**.
+1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**.
## Update a catalog If you update the ARM template contents or definition in the attached repository, you can provide the latest set of catalog items to your development teams by syncing the catalog.
-To sync to the updated catalog:
+To sync an updated catalog:
-1. Select **Catalogs** from the left pane.
-1. Select the specific catalog and select **Sync**. The service scans through the repository and makes the latest list of catalog items available to all the associated projects in the dev center.
+1. In the left menu for your dev center, under **Environment configuration**, select **Catalogs**.
+1. Select the specific catalog, and then select **Sync**. The service scans through the repository and makes the latest list of catalog items available to all the associated projects in the dev center.
## Delete a catalog
-You can delete a catalog to remove it from the dev center. Any templates contained in a deleted catalog will not be available when deploying new environments. You'll need to update the catalog item reference for any existing environments created using the catalog items in the deleted catalog. If the reference is not updated and the environment is redeployed, it'll result in deployment failure.
+You can delete a catalog to remove it from the dev center. Any templates in a deleted catalog won't be available to development teams when they deploy new environments. Update the catalog item reference for any existing environments that were created by using the catalog items in the deleted catalog. If the reference isn't updated and the environment is redeployed, the deployment fails.
To delete a catalog:
-1. Select **Catalogs** from the left pane.
-1. Select the specific catalog and select **Delete**.
-1. Confirm to delete the catalog.
+1. In the left menu for your dev center, under **Environment configuration**, select **Catalogs**.
+1. Select the specific catalog, and then select **Delete**.
+1. In the **Delete catalog** dialog, select **Continue** to delete the catalog.
## Catalog sync errors
-When adding or syncing a catalog, you may encounter a sync error. This indicates that some or all of the catalog items were found to have errors. You can use CLI or REST API to *GET* the catalog, the response to which will show you the list of invalid catalog items which failed due to schema, reference, or validation errors and ignored catalog items which were detected to be duplicates.
+When you add or sync a catalog, you might encounter a sync error. A sync error indicates that some or all the catalog items have errors. Use the Azure CLI or the REST API to GET the catalog. The GET response shows you the type of errors:
+
+- Ignored catalog items that were detected to be duplicates
+- Invalid catalog items that failed due to schema, reference, or validation errors
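+
+For example, you can retrieve a catalog with a call like the following sketch. The resource names are placeholders, and the API version shown is an assumption; substitute the current Microsoft.DevCenter preview API version:
+
+```azurecli
+# Sketch: GET the catalog resource to inspect sync error details.
+az rest --method get \
+    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DevCenter/devcenters/<dev-center-name>/catalogs/<catalog-name>?api-version=2022-09-01-preview"
+```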
+
+### Resolve ignored catalog item errors
-### Handling ignored catalog items
+An ignored catalog item error occurs if you add two or more catalog items that have the same name. You can resolve this issue by renaming catalog items so that each catalog item has a unique name within the catalog.
-Ignored catalog items are caused by adding two or more catalog items with the same name. You can resolve this issue by renaming catalog items so that each item has a unique name within the catalog.
+### Resolve invalid catalog item errors
-### Handling invalid catalog items
+An invalid catalog item error might occur for a variety of reasons:
-Invalid catalog items can be caused due to a variety of reasons. Potential issues are:
+- **Manifest schema errors**. Ensure that your catalog item manifest matches the [required schema](./configure-catalog-item.md#add-a-catalog-item).
- - **Manifest schema errors**
- - Ensure that your catalog item manifest matches the required schema as described [here](./configure-catalog-item.md#add-a-new-catalog-item).
+- **Validation errors**. Check the following items to resolve validation errors:
- - **Validation errors**
- - Ensure that the manifest's engine type is correctly configured as "ARM".
- - Ensure that the catalog item name is between 3 and 63 characters.
- - Ensure that the catalog item name includes only URL-valid characters. This includes alphanumeric characters as well as these symbols: *~!,.';:=-\_+)(\*&$@*
+ - Ensure that the manifest's engine type is correctly configured as `ARM`.
+ - Ensure that the catalog item name is between 3 and 63 characters.
+ - Ensure that the catalog item name includes only characters that are valid for a URL: alphanumeric characters and the symbols `~` `!` `,` `.` `'` `;` `:` `=` `-` `_` `+` `)` `(` `*` `&` `$` `@`.
- - **Reference errors**
- - Ensure that the template path referenced by the manifest is a valid relative path to a file within the repository.
+- **Reference errors**. Ensure that the template path that the manifest references is a valid relative path to a file in the repository.
## Next steps
-* [Create and Configure Projects](./quickstart-create-and-configure-projects.md).
+- Learn how to [create and configure a project](./quickstart-create-and-configure-projects.md).
+- Learn how to [create and configure a project environment type](how-to-configure-project-environment-types.md).
deployment-environments How To Configure Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md
Title: Configure a managed identity
-description: Learn how to configure a managed identity that'll be used to deploy environments.
+description: Learn how to configure a managed identity that will be used to deploy environments in your Azure Deployment Environments Preview dev center.
- + Last updated 10/12/2022 # Configure a managed identity
- A [Managed Identity](../active-directory/managed-identities-azure-resources/overview.md) is used to provide elevation-of-privilege capabilities and securely authenticate to any service that supports Azure Active Directory (Azure AD) authentication. Azure Deployment Environments Preview service uses identities to provide self-serve capabilities to your development teams without granting them access to the target subscriptions in which the Azure resources are created.
+A [managed identity](../active-directory/managed-identities-azure-resources/overview.md) adds elevated-privilege capabilities and secure authentication to any service that supports Azure Active Directory (Azure AD) authentication. Azure Deployment Environments Preview uses identities to give development teams self-serve deployment capabilities without granting them access to the subscriptions in which Azure resources are created.
-The managed identity attached to the dev center should be [granted 'Owner' access to the deployment subscriptions](how-to-configure-managed-identity.md) configured per environment type. When an environment deployment is requested, the service grants appropriate permissions to the deployment identities configured per environment type to perform deployments on behalf of the user.
-The managed identity attached to a dev center will also be used to connect to a [catalog](how-to-configure-catalog.md) and access the [catalog items](configure-catalog-item.md) made available through the catalog.
+The managed identity that's attached to a dev center should be [assigned the Owner role in the deployment subscriptions](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) for each environment type. When an environment deployment is requested, the service grants appropriate permissions to the deployment identities that are set up for the environment type to deploy on behalf of the user.
+The managed identity that's attached to a dev center is also used to connect to a [catalog](how-to-configure-catalog.md) and to access the [catalog items](configure-catalog-item.md) in the catalog.
-In this article, you'll learn about:
+In this article, you learn how to:
-* Types of managed identities
-* Assigning a subscription role assignment to the managed identity
-* Assigning the identity access to the Key Vault secret
+> [!div class="checklist"]
+>
+> - Add a managed identity to your dev center
+> - Assign a subscription role assignment to a managed identity
+> - Grant access to a key vault secret for a managed identity
> [!IMPORTANT]
-> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise are not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Types of managed identities
+## Add a managed identity
-In Azure Deployment Environments, you can use two types of managed identities:
+In Azure Deployment Environments, you can choose between two types of managed identities:
-* A **system-assigned identity** is tied to either your dev center or the project environment type and is deleted when the attached resource is deleted. A dev center or a project environment type can have only one system-assigned identity.
-* A **user-assigned identity** is a standalone Azure resource that can be assigned to your dev center or to a project environment type. For Azure Deployment Environments Preview, a dev center or a project environment type can have only one user-assigned identity.
+- **System-assigned identity**: A system-assigned identity is tied either to your dev center or to the project environment type. A system-assigned identity is deleted when the attached resource is deleted. A dev center or a project environment type can have only one system-assigned identity.
+- **User-assigned identity**: A user-assigned identity is a standalone Azure resource that you can assign to your dev center or to a project environment type. For Azure Deployment Environments Preview, a dev center or a project environment type can have only one user-assigned identity.
> [!NOTE]
-> If you add both a system-assigned identity and a user-assigned identity, only the user-assigned identity will be used by the service.
+> In Azure Deployment Environments Preview, if you add both a system-assigned identity and a user-assigned identity, only the user-assigned identity is used.
-### Configure a system-assigned managed identity for a dev center
+### Add a system-assigned managed identity to a dev center
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Access Azure Deployment Environments.
-1. Select your dev center from the list.
-1. Select **Identity** from the left pane.
-1. On the **System assigned** tab, set the **Status** to **On**, select **Save** and then confirm enabling a System assigned managed identity.
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to Azure Deployment Environments.
+1. In **Dev centers**, select your dev center.
+1. In the left menu under **Settings**, select **Identity**.
+1. Under **System assigned**, set **Status** to **On**.
+1. Select **Save**.
+1. In the **Enable system assigned managed identity** dialog, select **Yes**.
+### Add a user-assigned managed identity to a dev center
-### Configure a user-assigned managed identity for a dev center
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to Azure Deployment Environments.
+1. In **Dev centers**, select your dev center.
+1. In the left menu under **Settings**, select **Identity**.
+1. Under **User assigned**, select **Add** to attach an existing identity.
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Access Azure Deployment Environments.
-1. Select your dev center from the list.
-1. Select **Identity** from the left pane.
-1. Switch to the **User assigned** tab and select **+ Add** to attach an existing identity.
+ :::image type="content" source="./media/configure-managed-identity/configure-user-assigned-managed-identity.png" alt-text="Screenshot that shows the user-assigned managed identity.":::
+1. In **Add user assigned managed identity**, enter or select the following information:
-1. On the **Add user assigned managed identity** page, add the following details:
- 1. Select the **Subscription** in which the identity exists.
- 1. Select an existing **User assigned managed identities** from the dropdown.
+ 1. In **Subscription**, select the subscription in which the identity exists.
+ 1. In **User assigned managed identities**, select an existing identity.
1. Select **Add**. ## Assign a subscription role assignment to the managed identity
-The identity attached to the dev center should be granted 'Owner' access to all the deployment subscriptions, as well as 'Reader' access to all subscriptions that a project lives in. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity attached to a project environment type and use it to perform deployment on behalf of the user. This will allow you to empower developers to create environments without granting them access to the subscription and abstract Azure governance related constructs from them.
+The identity that's attached to the dev center should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to a project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
+
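+As an alternative to the portal steps in the following sections, you can create the role assignment with the Azure CLI. This sketch uses placeholder IDs; find the identity's principal (object) ID on the dev center's **Identity** page:
+
+```azurecli
+# Grant the dev center's managed identity the Owner role on a
+# deployment subscription. Both IDs are placeholders.
+az role assignment create \
+    --assignee-object-id <managed-identity-principal-id> \
+    --assignee-principal-type ServicePrincipal \
+    --role Owner \
+    --scope /subscriptions/<deployment-subscription-id>
+```
+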
+### Add a role assignment to a system-assigned managed identity
+
+1. In the Azure portal, go to your dev center.
+1. In the left menu under **Settings**, select **Identity**.
+1. Under **System assigned** > **Permissions**, select **Azure role assignments**.
+
+ :::image type="content" source="./media/configure-managed-identity/system-assigned-azure-role-assignment.png" alt-text="Screenshot that shows the Azure role assignment for system-assigned identity.":::
+
+1. In **Azure role assignments**, select **Add role assignment (Preview)**, and then enter or select the following information:
+
+ 1. In **Scope**, select **Subscription**.
+ 1. In **Subscription**, select the subscription in which to use the managed identity.
+ 1. In **Role**, select **Owner**.
+ 1. Select **Save**.
-1. To add a role assignment to the managed identity:
- 1. For a system-assigned identity, select **Azure role assignments**.
-
- :::image type="content" source="./media/configure-managed-identity/system-assigned-azure-role-assignment.png" alt-text="Screenshot showing the Azure role assignment for system assigned identity.":::
+### Add a role assignment to a user-assigned managed identity
- 1. For the user-assigned identity, select the specific identity, and then select the **Azure role assignments** from the left pane.
+1. In the Azure portal, go to your dev center.
+1. In the left menu under **Settings**, select **Identity**.
+1. Under **User assigned**, select the identity.
+1. In the left menu, select **Azure role assignments**.
+1. In **Azure role assignments**, select **Add role assignment (Preview)**, and then enter or select the following information:
-1. On the **Azure role assignments** page, select **Add role assignment (Preview)** and provide the following details:
- 1. For **Scope**, select **SubScription** from the dropdown.
- 1. For **Subscription**, select the target subscription to use from the dropdown.
- 1. For **Role**, select **Owner** from the dropdown.
+ 1. In **Scope**, select **Subscription**.
+ 1. In **Subscription**, select the subscription in which to use the managed identity.
+ 1. In **Role**, select **Owner**.
1. Select **Save**.
-## Assign the managed identity access to the Key Vault secret
+## Grant the managed identity access to the key vault secret
->[!NOTE]
-> Providing the identity with access to the Key Vault secret, which contains the repo's personal access token (PAT), is a prerequisite to adding the repo as a catalog.
+You can set up your key vault to use either a [key vault access policy](../key-vault/general/assign-access-policy.md) or [Azure role-based access control](../key-vault/general/rbac-guide.md).
-To grant the identity access to the secret:
+> [!NOTE]
+> Before you can add a repository as a catalog, you must grant the managed identity access to the key vault secret that contains the repository's personal access token.
+
+### Key vault access policy
+
+If the key vault is configured to use a key vault access policy:
+
+1. In the Azure portal, go to the key vault that contains the secret with the personal access token.
+1. In the left menu, select **Access policies**, and then select **Create**.
+1. In **Create an access policy**, enter or select the following information:
-A Key Vault can be configured to use either the [Vault access policy'](../key-vault/general/assign-access-policy.md) or the [Azure role-based access control](../key-vault/general/rbac-guide.md) permission model.
+ 1. On the **Permissions** tab, under **Secret permissions**, select the **Get** checkbox, and then select **Next**.
+ 1. On the **Principal** tab, select the identity that's attached to the dev center.
+ 1. Select **Review + create**, and then select **Create**.
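The same access policy can be granted from Azure PowerShell. This is a sketch only; the vault name and principal ID are placeholder assumptions.

```azurepowershell
# A minimal sketch, assuming a key vault named "contoso-kv" and the principal ID
# of the identity that's attached to the dev center in $principalId.
$principalId = "00000000-0000-0000-0000-000000000000"

# Grant the identity Get permission on secrets through an access policy.
Set-AzKeyVaultAccessPolicy -VaultName "contoso-kv" `
    -ObjectId $principalId `
    -PermissionsToSecrets get
```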
-1. If the Key Vault is configured to use the **Vault access policy** permission model,
- 1. Access the [Azure portal](https://portal.azure.com/) and search for the specific Key Vault that contains the PAT secret.
- 1. Select **Access policies** from the left pane.
- 1. Select **+ Create**.
- 1. On the **Create an access policy** page, provide the following details:
- 1. Enable **Get** for **Secret permissions** on the **Permissions** page.
- 1. Select the identity that is attached to the dev center as **Principal**.
- 1. Select **Create** on the **Review + create** page.
+### Azure role-based access control
-1. If the Key Vault is configured to use **Azure role-based access control** permission model,
- 1. Select the specific identity and select the **Azure role assignments** from the left pane.
- 1. Select **Add Role Assignment** and provide the following details:
- 1. Select Key Vault from the **Scope** dropdown.
- 1. Select the **Subscription** in which the Key Vault exists.
- 1. Select the specific Key Vault for **Resource**.
- 1. Select **Key Vault Secrets User** from the dropdown for **Role**.
- 1. Select **Save**.
+If the key vault is configured to use Azure role-based access control:
+
+1. In the Azure portal, go to the key vault that contains the secret with the personal access token.
+1. In the left menu, select **Access control (IAM)**.
+1. Select the identity, and in the left menu, select **Azure role assignments**.
+1. Select **Add role assignment**, and then enter or select the following information:
+
+ 1. In **Scope**, select the key vault.
+ 1. In **Subscription**, select the subscription that contains the key vault.
+ 1. In **Resource**, select the key vault.
+ 1. In **Role**, select **Key Vault Secrets User**.
+ 1. Select **Save**.
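As a scripted alternative, the following sketch assigns the same role with Azure PowerShell, assuming placeholder resource names.

```azurepowershell
# A minimal sketch, assuming a key vault named "contoso-kv" in resource group
# "rg-devcenter" and the identity's principal ID in $principalId.
$principalId = "00000000-0000-0000-0000-000000000000"
$keyVault = Get-AzKeyVault -VaultName "contoso-kv" -ResourceGroupName "rg-devcenter"

# Assign the Key Vault Secrets User role, scoped to the key vault itself.
New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Key Vault Secrets User" `
    -Scope $keyVault.ResourceId
```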
## Next steps
-* [Configure a Catalog](how-to-configure-catalog.md)
-* [Configure a project environment type](how-to-configure-project-environment-types.md)
+- Learn how to [add and configure a catalog](how-to-configure-catalog.md).
+- Learn how to [create and configure a project environment type](how-to-configure-project-environment-types.md).
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
To create and configure a Dev center in Azure Deployment Environments by using t
## Attach an identity to the dev center
-After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. Learn about the two [types of identities](how-to-configure-managed-identity.md#types-of-managed-identities) you can attach:
+After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity) you can attach:
- System-assigned managed identity
- User-assigned managed identity
For more information, see [Configure a managed identity](how-to-configure-manage
To attach a system-assigned managed identity to your dev center:
-1. Complete the steps to create a [system-assigned managed identity](how-to-configure-managed-identity.md#configure-a-system-assigned-managed-identity-for-a-dev-center).
+1. Complete the steps to create a [system-assigned managed identity](how-to-configure-managed-identity.md#add-a-system-assigned-managed-identity-to-a-dev-center).
   :::image type="content" source="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity.":::

1. After you create a system-assigned managed identity, assign the Owner role to give the [identity access](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) on the subscriptions that will be used to configure [project environment types](concept-environments-key-concepts.md#project-environment-types).
- Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#assign-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access your repository.
+ Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access your repository.
### Attach an existing user-assigned managed identity

To attach a user-assigned managed identity to your dev center:
-1. Complete the steps to attach a [user-assigned managed identity](how-to-configure-managed-identity.md#configure-a-user-assigned-managed-identity-for-a-dev-center).
+1. Complete the steps to attach a [user-assigned managed identity](how-to-configure-managed-identity.md#add-a-user-assigned-managed-identity-to-a-dev-center).
   :::image type="content" source="media/quickstart-create-and-configure-devcenter/user-assigned-managed-identity.png" alt-text="Screenshot that shows a user-assigned managed identity.":::

1. After you attach the identity, assign the Owner role to give the [identity access](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) on the subscriptions that will be used to configure [project environment types](how-to-configure-project-environment-types.md). Give the identity Reader access to all subscriptions that a project lives in.
- Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#assign-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access the repository.
+ Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access the repository.
> [!NOTE]
> The [identity](concept-environments-key-concepts.md#identities) that's attached to the dev center should be assigned the Owner role for access to the deployment subscription for each environment type.
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
Previously updated : 10/04/2022 Last updated : 11/07/2022
Run the following Azure PowerShell command to turn off this feature:
Unregister-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network ```
-### IDPS Private IP ranges (preview)
-
-In Azure Firewall Premium IDPS, private IP address ranges are used to identify if traffic is inbound, outbound, or internal (East-West). Each signature is applied on specific traffic direction, as indicated in the signature rules table. By default, only ranges defined by IANA RFC 1918 are considered private IP addresses. So traffic sent from a private IP address range to a private IP address range is considered internal. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.
-- ### Structured firewall logs (preview) Today, the following diagnostic log categories are available for Azure Firewall:
Policy analytics starts monitoring the flows in the DNAT, Network, and Applicati
> [!TIP]
> Policy Analytics has a dependency on both Log Analytics and Azure Firewall resource specific logging. Verify the Firewall is configured appropriately or follow the previous instructions. Be aware that logs take 60 minutes to appear after enabling them for the first time. This is because logs are aggregated in the backend every hour. You can check logs are configured appropriately by running a log analytics query on the resource specific tables such as **AZFWNetworkRuleAggregation**, **AZFWApplicationRuleAggregation**, and **AZFWNatRuleAggregation**.
+### Single click upgrade/downgrade (preview)
+
+You can now easily upgrade your existing Azure Firewall Standard SKU to the Premium SKU, and downgrade from Premium back to Standard. The process is fully automated and causes no service downtime.
+
+In the upgrade process, you select the policy to attach to the upgraded Premium SKU: either an existing Premium policy or an existing Standard policy. If you select a Standard policy, the system automatically duplicates it, upgrades the copy to a Premium policy, and attaches the copy to the newly created Premium firewall.
+
+This new capability is available through the Azure portal as shown here, and also via PowerShell and Terraform by changing the SKU tier attribute (`sku_tier` in Terraform); a PowerShell sketch follows.
++
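A minimal Azure PowerShell sketch of the SKU change, assuming a firewall named "azfw" in resource group "rg-fw".

```azurepowershell
# A minimal sketch: fetch the firewall, change its SKU tier in place, and push
# the update. Use "Standard" instead of "Premium" to downgrade.
$azfw = Get-AzFirewall -Name "azfw" -ResourceGroupName "rg-fw"
$azfw.Sku.Tier = "Premium"
Set-AzFirewall -AzureFirewall $azfw
```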
+> [!NOTE]
+> This new upgrade/downgrade capability will also support the Basic SKU for GA.
++ ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 10/13/2022 Last updated : 11/07/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Untrusted customer signed certificates|Customer signed certificates are not trus
|Wrong source IP address in Alerts with IDPS for HTTP (without TLS inspection).|When plain text HTTP traffic is in use, and IDPS issues a new alert, and the destination is a public IP address, the displayed source IP address is wrong (the internal IP address is displayed instead of the original IP address).|A fix is being investigated.| |Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.| |TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.|
-|KeyVault Private Endpoint|KeyVault supports Private Endpoint access to limit its network exposure. Trusted Azure Services can bypass this limitation if an exception is configured as described in the [KeyVault documentation](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Firewall is not currently listed as a trusted service and can't access the Key Vault.|A fix is being investigated.|
|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Previously updated : 10/12/2022 Last updated : 11/07/2022
To learn more about Azure Firewall Premium Intermediate CA certificate requireme
A network intrusion detection and prevention system (IDPS) allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 3-7), they're fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [Azure Firewall preview features](firewall-preview.md#idps-private-ip-ranges-preview).
+Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 3-7). They're fully managed and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [IDPS Private IP ranges](#idps-private-ip-ranges).
The Azure Firewall signatures/rulesets include: - An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.
IDPS allows you to detect attacks in all ports and protocols for non-encrypted t
The IDPS Bypass List allows you to not filter traffic to any of the IP addresses, ranges, and subnets specified in the bypass list.
+### IDPS Private IP ranges
+
+In Azure Firewall Premium IDPS, private IP address ranges are used to identify whether traffic is inbound, outbound, or internal (East-West). Each signature is applied on a specific traffic direction, as indicated in the signature rules table. By default, only ranges defined by IANA RFC 1918 are considered private IP addresses, so traffic sent from a private IP address range to a private IP address range is considered internal. To modify your private IP addresses, you can edit, remove, or add ranges as needed.
++ ### IDPS signature rules IDPS signature rules allow you to:
IDPS signature rules have the following properties:
|Signature ID |Internal ID for each signature. This ID is also presented in Azure Firewall Network Rules logs.| |Mode |Indicates if the signature is active or not, and whether the firewall will drop or alert upon matched traffic. The following signature modes can override the IDPS mode:<br>- **Disabled**: The signature isn't enabled on your firewall.<br>- **Alert**: You'll receive alerts when suspicious traffic is detected.<br>- **Alert and Deny**: You'll receive alerts and suspicious traffic will be blocked. A few signature categories are defined as "Alert Only", so by default, traffic matching their signatures won't be blocked even though IDPS mode is set to "Alert and Deny". Customers may override this by customizing these specific signatures to "Alert and Deny" mode. <br><br> Note: IDPS alerts are available in the portal via network rule log query.| |Severity |Each signature has an associated severity level that indicates the probability that the signature is an actual attack.<br>- **Low**: An abnormal event is one that doesn't normally occur on a network, or informational events are logged. Probability of attack is low.<br>- **Medium**: The signature indicates an attack of a suspicious nature. The administrator should investigate further.<br>- **High**: The attack signatures indicate that an attack of a severe nature is being launched. There's little probability that the packets have a legitimate purpose.|
-|Direction |The traffic direction for which the signature is applied.<br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](firewall-preview.md#idps-private-ip-ranges-preview).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](firewall-preview.md#idps-private-ip-ranges-preview) to the Internet.<br>- **Bidirectional**: Signature is always applied on any traffic direction.|
+|Direction |The traffic direction for which the signature is applied.<br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) to the Internet.<br>- **Bidirectional**: Signature is always applied on any traffic direction.|
|Group |The group name that the signature belongs to.| |Description |Structured from the following three parts:<br>- **Category name**: The category name that the signature belongs to as described in [Azure Firewall IDPS signature rule categories](idps-signature-categories.md).<br>- High level description of the signature<br>- **CVE-ID** (optional) in the case where the signature is associated with a specific CVE. The ID is listed here.| |Protocol |The protocol associated with this signature.|
frontdoor Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier.md
+
+ Title: Migrate Azure Front Door (classic) to Standard/Premium tier using the Azure portal (Preview)
+description: This article provides step-by-step instructions on how to migrate from an Azure Front Door (classic) profile to an Azure Front Door Standard or Premium tier profile.
++++ Last updated : 11/04/2022+++
+# Migrate Azure Front Door (classic) to Standard/Premium tier using the Azure portal (Preview)
+
+Azure Front Door Standard and Premium tiers bring the latest cloud delivery network features to Azure. With enhanced security features and an all-in-one service, your application content is secured and closer to your end users with the Microsoft global network. This article guides you through migrating your Front Door (classic) profile to either a Standard or Premium tier profile so you can begin using these latest features.
+
+## Prerequisites
+
+* Review the [About Front Door tier migration](tier-migration.md) article.
+* Ensure your Front Door (classic) profile can be migrated:
+ * HTTPS is required for all custom domains. Azure Front Door Standard and Premium enforce HTTPS on all domains. If you don't have your own certificate, you can use an Azure Front Door managed certificate. The certificate is free and managed for you.
+ * If you use BYOC (Bring your own certificate) for Azure Front Door (classic), you'll need to grant Key Vault access to your Azure Front Door Standard or Premium profile by completing the following steps:
+ * Register the service principal for **Microsoft.AzureFrontDoor-Cdn** as an app in your Azure Active Directory using Azure PowerShell.
+    * Grant **Microsoft.AzureFrontDoor-Cdn** access to your Key Vault. (A scripted sketch of both steps follows this list.)
+  * Session affinity gets enabled from the origin group settings in the Azure Front Door Standard or Premium profile. In Azure Front Door (classic), session affinity is managed at the domain level. As part of the migration, session affinity is based on the Classic profile's configuration. If you have two domains in the Classic profile that share the same backend pool (origin group), session affinity has to be consistent across both domains in order for the migration to be compatible.
+
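The two BYOC steps above can be scripted roughly as follows. This is a sketch: the vault name is a placeholder, and the application ID shown is the one commonly documented for **Microsoft.AzureFrontDoor-Cdn**; verify it against the current Azure Front Door documentation before use.

```azurepowershell
# A minimal sketch. The application ID below is the commonly documented one for
# Microsoft.AzureFrontDoor-Cdn; verify it before use. "contoso-kv" is a placeholder.
New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8"

# Grant the service principal Get permissions on the vault's secrets and certificates.
Set-AzKeyVaultAccessPolicy -VaultName "contoso-kv" `
    -ServicePrincipalName "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8" `
    -PermissionsToSecrets get -PermissionsToCertificates get
```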
+## Validate compatibility
+
+1. Go to the Azure Front Door (classic) resource and select **Migration** from under *Settings*.
+
+ :::image type="content" source="./media/migrate-tier/overview.png" alt-text="Screenshot of the migration button for a Front Door (classic) profile.":::
+
+1. Select **Validate** to see if your Front Door (classic) profile is compatible for migration. This check can take up to two minutes depending on the complexity of your Front Door profile.
+
+ :::image type="content" source="./media/migrate-tier/validate.png" alt-text="Screenshot of the validate compatibility button from the migration page.":::
+
+1. If the migration isn't compatible, you can select **View errors** to see the list of errors and recommendations for resolving them.
+
+ :::image type="content" source="./media/migrate-tier/validation-failed.png" alt-text="Screenshot of the Front Door validate migration with errors.":::
+
+1. Once the migration tool has validated that your Front Door profile is compatible for migration, you can move on to preparing for migration.
+
+ :::image type="content" source="./media/migrate-tier/validation-passed.png" alt-text="Screenshot of the Front Door migration passing validation.":::
+
+## Prepare for migration
+
+1. A default name for the new Front Door profile has been provided for you. You can change this name before proceeding to the next step.
+
+ :::image type="content" source="./media/migrate-tier/prepare-name.png" alt-text="Screenshot of the prepared name for Front Door migration.":::
+
+1. A Front Door tier is automatically selected for you based on the Front Door (classic) WAF policy settings.
+
+ :::image type="content" source="./media/migrate-tier/prepare-tier.png" alt-text="Screenshot of the selected tier for the new Front Door profile.":::
+
+ * A Standard tier gets selected if you *only have custom WAF rules* associated to the Front Door (classic) profile. You may choose to upgrade to a Premium tier.
+ * A Premium tier gets selected if you *have managed WAF rules* associated to the Classic profile. To use Standard tier, the managed WAF rules must first be removed from the Classic profile.
+
+1. Select **Configure WAF policy upgrades** to configure the WAF policies to be upgraded. Select the action you would like to happen for each WAF policy. You can either copy the old WAF policy to a new WAF policy or select an existing WAF policy that matches the Front Door tier. If you choose to copy the WAF policy, each WAF policy is given a default name that you can change. Select **Apply** once you finish making changes to the WAF policy configuration.
+
+ :::image type="content" source="./media/migrate-tier/prepare-waf.png" alt-text="Screenshot of the configure WAF policy link during Front Door migration preparation.":::
+
+ > [!NOTE]
+ > The **Configure WAF policy upgrades** link only appears if you have WAF policies associated to the Front Door (classic) profile.
+
   For each WAF policy associated with the Front Door (classic) profile, select an action. You can make a copy of the WAF policy that matches the tier you're migrating the Front Door profile to, or you can use an existing WAF policy that matches the tier. You can also update the WAF policy names from the default names assigned. Select **Apply** to save the WAF settings.
+
+   :::image type="content" source="./media/migrate-tier/waf-policy.png" alt-text="Screenshot of the upgrade WAF policy screen.":::
+
+1. Select **Prepare**, and then select **Yes** to confirm you would like to proceed with the migration process. Once confirmed, you won't be able to make any further changes to the Front Door (classic) settings.
+
+ :::image type="content" source="./media/migrate-tier/prepare-confirmation.png" alt-text="Screenshot the prepare button and confirmation to proceed with Front Door migration.":::
+
+1. Select the link that appears to view the configuration of the new Front Door profile. At this time, review each of the settings for the new profile to ensure all settings are correct. Once you're done reviewing the read-only profile, select the **X** in the top right corner of the page to go back to the migration screen.
+
+ :::image type="content" source="./media/migrate-tier/verify-new-profile.png" alt-text="Screenshot of the link to view the new read-only Front Door profile.":::
+
+> [!NOTE]
+> If you're not using your own certificate, enabling managed identities and granting access to the Key Vault is not required. You can skip to the [**Migrate**](migrate-tier.md#migrate) step.
+
+## Enable managed identities
+
+If you're using your own certificate, you need to enable a managed identity so that Azure Front Door can access the certificate in your Key Vault.
+
+1. Select **Enable** and then select either **System assigned** or **User assigned** depending on the type of managed identities you want to use. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
+
+ :::image type="content" source="./media/migrate-tier/enable-managed-identity.png" alt-text="Screenshot of the enable manage identity button for Front Door migration.":::
+
+ * *System assigned* - Toggle the status to **On** and then select **Save**.
+
+   * *User assigned* - To create a user-assigned managed identity, see [Create a user-assigned identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). If you already have a user-assigned managed identity, select the identity, and then select **Add**.
+
+1. Select the **X** to return to the migration page. You'll then see that you've successfully enabled managed identities.
+
+ :::image type="content" source="./media/migrate-tier/enable-managed-identity-successful.png" alt-text="Screenshot of managed identity getting enabled.":::
+
+## Grant managed identity to Key Vault
+
+Select **Grant** to add managed identities from the last section to all the Key Vaults used in the Front Door (classic) profile.
++
+## Migrate
+
+1. Select **Migrate** to initiate the migration process. When prompted, select **Yes** to confirm you want to move forward with the migration. Once the migration is completed, you can select the banner at the top to go to the new Front Door profile.
+
+ :::image type="content" source="./media/migrate-tier/migrate.png" alt-text="Screenshot of migrate and confirmation button for Front Door migration.":::
+
+ > [!NOTE]
+ > If you cancel the migration, only the new Front Door profile will get deleted. Any new WAF policy copies will need to be manually deleted.
+
+1. Once the migration completes, you can select the banner at the top of the page or the link in the success message to go to the new Front Door profile.
+
+ :::image type="content" source="./media/migrate-tier/successful-migration.png" alt-text="Screenshot of a successful Front Door migration.":::
+
+1. The Front Door (classic) profile is now in a **Disabled** state and can be deleted from your subscription.
+
+ :::image type="content" source="./media/migrate-tier/classic-profile.png" alt-text="Screenshot of the overview page of a Front Door (classic) in a disabled state.":::
+
+## Next steps
+
+* Understand the [mapping between Front Door tiers](tier-mapping.md) settings.
+* Learn more about the [Azure Front Door tier migration process](tier-migration.md).
frontdoor Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/routing-methods.md
All Front Door configurations have backend health monitoring and automated insta
## <a name = "latency"></a>Lowest latencies based traffic-routing
-Deploying origins in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that is 'closest' to your end users. Latency is the default traffic-routing method for your Front Door configuration. This routing method forwards requests from your end users to the closest origin behind Azure Front Door. This routing mechanism combined with the anycast architecture of Azure Front Door ensures that each of your end users get the best performance based on their location.
+Deploying origins in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that is 'closest' to your end users. Latency is the default traffic-routing method for your Front Door configuration. This routing method forwards requests from your end users to the closest origin behind Azure Front Door. This routing mechanism combined with the anycast architecture of Azure Front Door ensures that each of your end users gets the best performance based on their location.
The 'closest' origin isn't necessarily closest as measured by geographic distance. Instead, Azure Front Door determines the closest origin by measuring network latency. Read more about [Azure Front Door routing architecture](front-door-routing-architecture.md). The following table shows the overall decision flow:
-| Available origins | Priority | Latency signal (based on health probe) | Weights |
-|-| -- | -- | -- |
-| First, select all origins that are enabled and returned healthy (200 OK) for the health probe. If there are six origins A, B, C, D, E, and F, and among them C is unhealthy and E is disabled. The list of available origins is A, B, D, and F. | Next, the top priority origins among the available ones are selected. If origin A, B, and D have priority 1 and origin F has a priority of 2. Then, the selected origins will be A, B, and D.| Select the origins with latency range (least latency & latency sensitivity in ms specified). If origin A is 15 ms, B is 30 ms and D is 60 ms away from the Azure Front Door environment where the request landed, and latency sensitivity is 30 ms, then the lowest latency pool consist of origin A and B, because D is beyond 30 ms away from the closest origin that is A. | Lastly, Azure Front Door will round robin the traffic among the final selected group of origins in the ratio of weights specified. For example, if origin A has a weight of 5 and origin B has a weight of 8, then the traffic will be distributed in the ratio of 5:8 among origins A and B. |
>[!NOTE]
> By default, the latency sensitivity property is set to 0 ms. With this setting, the request is always forwarded to the fastest available origins, and weights on the origins don't take effect unless two origins have the same network latency.
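To make the decision flow concrete, here's a short PowerShell sketch of the selection logic with hypothetical origins; it only illustrates the documented behavior and isn't Front Door's actual implementation.

```azurepowershell
# Hypothetical origins A-F; C is unhealthy, E is disabled, F has lower priority.
$origins = @(
    [pscustomobject]@{ Name = "A"; Healthy = $true;  Enabled = $true;  Priority = 1; LatencyMs = 15; Weight = 5 }
    [pscustomobject]@{ Name = "B"; Healthy = $true;  Enabled = $true;  Priority = 1; LatencyMs = 30; Weight = 8 }
    [pscustomobject]@{ Name = "C"; Healthy = $false; Enabled = $true;  Priority = 1; LatencyMs = 20; Weight = 1 }
    [pscustomobject]@{ Name = "D"; Healthy = $true;  Enabled = $true;  Priority = 1; LatencyMs = 60; Weight = 1 }
    [pscustomobject]@{ Name = "E"; Healthy = $true;  Enabled = $false; Priority = 1; LatencyMs = 25; Weight = 1 }
    [pscustomobject]@{ Name = "F"; Healthy = $true;  Enabled = $true;  Priority = 2; LatencyMs = 10; Weight = 1 }
)
$latencySensitivityMs = 30

# 1. Keep only origins that are enabled and healthy (A, B, D, F).
$available = $origins | Where-Object { $_.Enabled -and $_.Healthy }

# 2. Keep only the best (lowest) priority among them (A, B, D).
$topPriority = ($available | Measure-Object -Property Priority -Minimum).Minimum
$candidates  = $available | Where-Object { $_.Priority -eq $topPriority }

# 3. Keep origins within the latency sensitivity of the fastest origin (A, B).
$fastest = ($candidates | Measure-Object -Property LatencyMs -Minimum).Minimum
$pool    = $candidates | Where-Object { $_.LatencyMs -le ($fastest + $latencySensitivityMs) }

# 4. Traffic is then spread across the pool in proportion to the configured weights (5:8).
$pool | ForEach-Object { "{0}: weight {1}" -f $_.Name, $_.Weight }
```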
frontdoor Tier Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-mapping.md
+
+ Title: Azure Front Door profile mapping between Classic and Standard/Premium tier
+description: This article explains the differences and settings mapping between an Azure Front Door (classic) and Standard/Premium profile.
++++ Last updated : 11/03/2022+++
+# Mapping between Azure Front Door (classic) and Standard/Premium tier
+
+As you migrate from Azure Front Door (classic) to Front Door Standard or Premium, you'll notice some configurations have been changed or moved to a new location to provide a better experience when managing the Front Door profile. In this article, you'll learn how routing rules, cache duration, rules engine configuration, WAF policies, and custom domains get mapped to the new Front Door tiers.
+
+## Routing rules
+
+| Front Door (classic) settings | Mapping in Standard and Premium |
+|--|--|
+| Route status - Enable/disable | Same as Front Door (classic) profile. |
+| Accepted protocol | Copied from Front Door (classic) profile. |
+| Frontend/domains | Copied from Front Door (classic) profile. |
+| Patterns to match | Copied from Front Door (classic) profile. |
+| Rules engine configuration | Rules engine changes to Rule Set and will retain route association from Front Door (classic) profile. |
+| Route type: Forwarding | Backend pool changes to Origin group. Forwarding protocol is copied from Front Door (classic) profile. </br> - If URL rewrite is set to `disabled`, the origin path in Standard and Premium profile is set to empty. </br> - If URL rewrite is set to `enabled`, the origin path is copied from *Custom forwarding path* of the Front Door (classic) profile. |
+| Route type: Redirect | URL redirect rule gets created in Rule set. The Rule set name is called *URLRedirectMigratedRuleSet2*. |
+
+## Cache duration
+
+In Azure Front Door (classic), the *Minimum cache duration* is located in the routing settings and the *Use default cache duration* is located in the Rules engine. Azure Front Door Standard and Premium tiers only support caching in a Rule set.
+
+| Front Door (classic) | Front Door Standard and Premium |
+|--|--|
+| When caching is *disabled* and the default caching is used. | Caching is *disabled*. |
+| When caching is *enabled* and the default caching duration is used. | Caching is *enabled*, the origin caching behavior is honored. |
+| Caching is *enabled*. | Caching is *enabled*. |
+| When use default cache duration is set to *No*, the input cache duration is used. | Cache behavior is set to override always and the input cache duration is used. |
+| N/A | Caching is *enabled*, the caching behavior is set to override if origin is missing, and the input cache duration is used. |
+
+## Route configuration override in Rule engine actions
+
+The route configuration override in Front Door (classic) is split into three different actions in rules engine for Standard and Premium profile. Those three actions are URL Redirect, URL Rewrite and Route Configuration Override.
+
+| Actions | Mapping in Standard and Premium |
+|--|--|
+| Route type set to forward | 1. Forward with URL rewrites disabled. All configurations are copied to the Standard or Premium profile.</br>2. Forward with URL rewrites enabled. There will be two rule actions, one for URL rewrite and one for the route configuration override in the Standard or Premium profile.</br> For URL rewrites - </br>- Custom forwarding path in Classic profile is the same as source pattern in Standard or Premium profile.</br>- Destination from Classic profile is copied over to Standard or Premium profile. |
+| Route type set to redirect | Mapping is 1:1 in the Standard or Premium profile. |
+| Route configuration override | 1. Backend pool is 1:1 mapping for origin group in Standard or Premium profile.</br>2. Caching</br>- Enabling and disabling caching is 1:1 mapping in the Standard or Premium profile.</br>- Query string is 1:1 mapping in Standard or Premium profile.</br>3. Dynamic compression is 1:1 mapping in the Standard or Premium profile.
+| Use default cache duration | Same as mentioned in the [Cache duration](#cache-duration) section. |
+
+## Other configurations
+
+| Front Door (classic) configuration | Mapping in Standard and Premium |
+|--|--|
+| Request and response header | Request and response header in Rules engine actions is copied over to Rule set in Standard/Premium profile. |
+| Enforce certificate name check | Enforce certificate name check is supported at the profile level of Azure Front Door (classic). In a Front Door Standard or Premium profile this setting can be found in the origin settings. This configuration will apply to all origins in the migrated Standard or Premium profile. |
+| Origin response time | Origin response time is copied over to the migrated Standard or Premium profile. |
+| Web Application Firewall (WAF) | If the Azure Front Door (classic) profile has WAF policies associated, the migration will create a copy of WAF policies with a default name for the Standard or Premium profile. The names for each WAF policy can be changed during setup from the default names. You can also select an existing Standard or Premium WAF policy that matches the migrated Front Door profile. |
+| Custom domain | This section will use `www.contoso.com` as an example to show a domain going through the migration. The custom domain `www.contoso.com` points to `contoso.azurefd.net` in Front Door (classic) for the CNAME record. </br></br>When the custom domain `www.contoso.com` gets moved to the new Front Door profile:</br>- The association for the custom domain shows the new Front Door endpoint as `contoso-hashvalue.z01.azurefd.net`. The CNAME of the custom domain will automatically point to the new endpoint name with the hash value in the backend. At this point, you can change the CNAME record with your DNS provider to point to the new endpoint name with the hash value.</br>- The classic endpoint `contoso.azurefd.net` will show as a custom domain in the migrated Front Door profile under the *Migrated domain* tab of the **Domains** page. This domain will be associated with the default migrated route. This default route can only be removed once the domain is disassociated from it. The domain properties can't be updated, with the exception of associating the domain with a route and removing that association. The domain can only be deleted after you've changed the CNAME to the new endpoint name.</br>- The certificate state and DNS state for `www.contoso.com` are the same as in the Front Door (classic) profile.</br></br> There are no changes to the managed certificate auto rotation settings. |
+
+## Next steps
+
+* Learn more about the [Azure Front Door tier migration process](tier-migration.md).
+* Learn how to [migrate from Classic to Standard/Premium tier](migrate-tier.md) using the Azure portal.
frontdoor Tier Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-migration.md
+
+ Title: About Azure Front Door (classic) to Standard/Premium tier migration (Preview)
+description: This article explains the migration process and changes expected when using the migration tool to Azure Front Door Standard/Premium tier.
++++ Last updated : 11/3/2022+++
+# About Azure Front Door (classic) to Standard/Premium tier migration (Preview)
+
+Azure Front Door Standard and Premium tiers were released in March 2022 as the next generation content delivery network service. The newer tiers combine the capabilities of Azure Front Door (classic), Microsoft CDN (classic), and Web Application Firewall (WAF). With features such as Private Link integration, an enhanced rules engine, and advanced diagnostics, you have the ability to secure and accelerate your web applications to bring a better experience to your customers.
+
+Azure recommends migrating to the newer tiers to benefit from the new features and improvements over the Classic tier. To help with the migration process, Azure Front Door provides a zero-downtime migration of your workload from Azure Front Door (classic) to either the Standard or Premium tier.
+
+In this article you'll learn about the migration process, understand the breaking changes involved, and what to do before, during and after the migration.
+
+## Migration process overview
+
+Azure Front Door zero-downtime migration happens in three stages. The first stage is validation, followed by preparation, and then migration. The time it takes for a migration to complete depends on the complexity of the Azure Front Door profile. You can expect the migration to take a few minutes for a simple Azure Front Door profile and longer for a profile that has many frontend domains, backend pools, routing rules, and rules engine rules.
+
+### Five steps of migration
+
+**Validate compatibility** - The migration will validate if the Azure Front Door (classic) profile is eligible for migration. You'll be prompted with messages on what needs to be fixed before you can move onto the preparation phase. For more information, see [prerequisites](#prerequisites).
+
+**Prepare for migration** - Azure Front Door will create a new Standard or Premium profile based on your Classic profile configuration in a disabled state. The new Front Door profile created will depend on the Web Application Firewall (WAF) policy you've associated to the profile.
+
+* **Premium tier** - If you have *managed WAF* policies associated to the Azure Front Door (classic) profile. A premium tier profile **can't** be downgraded to a standard tier after migration.
+* **Standard tier** - If you have *custom WAF* policies associated to the Azure Front Door (classic) profile. A standard tier profile **can** be upgraded to premium tier after migration.
+
+ During the preparation stage, Azure Front Door will create copies of WAF policies specific to the Front Door tier with default names. You can change the name for the WAF policies at this time. You can also select an existing WAF policy that matches the tier you're migrating to. At this time, a read-only view of the newly created profile is provided for you to verify configurations.
+
+ > [!NOTE]
+    > No changes can be made to the Front Door (classic) configuration once this step has been initiated.
+
+**Enable managed identity** - During this step you can configure managed identities for Azure Front Door to access your certificate in a Key Vault.
+
+**Grant managed identity to Key Vault** - This step adds managed identity access to all the Key Vaults used in the Front Door (classic) profile.
+
+**Migrate/Abort migration**
+
+* **Migrate** - Once you select this option, the Azure Front Door (classic) profile gets disabled and the Azure Front Door Standard or Premium profile will be activated. Traffic will start going through the new profile once the migration completes.
+* **Abort migration** - If you decided you no longer want to move forward with the migration process, selecting this option will delete the new Front Door profile that was created.
+
+> [!NOTE]
+> * If you cancel the migration, only the new Front Door profile gets deleted; any WAF policy copies need to be deleted manually.
+> * Traffic to your Azure Front Door (classic) profile will continue to be served until the migration has been completed.
+> * Each Azure Front Door (classic) profile can create one Azure Front Door Standard or Premium profile.
+
+Migration can only be completed by using the Azure portal. Service charges for the Azure Front Door Standard or Premium tier start once the migration is completed.
+
+## Breaking changes between tiers
+
+### Dev-ops
+
+Azure Front Door Standard/Premium uses a different resource provider namespace, *Microsoft.Cdn*, while Azure Front Door (classic) uses *Microsoft.Network*. After you've migrated your Azure Front Door profile, you need to update your DevOps scripts to use the new namespace and the corresponding Azure PowerShell module, CLI commands, and API.
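For example, the profile lookup moves from the Az.FrontDoor module to the Az.Cdn module. The resource names below are placeholders, and exact parameter names depend on your installed Az module versions.

```azurepowershell
# Front Door (classic), Microsoft.Network provider (Az.FrontDoor module):
Get-AzFrontDoor -ResourceGroupName "rg-afd" -Name "contoso-afd"

# Front Door Standard/Premium, Microsoft.Cdn provider (Az.Cdn module):
Get-AzFrontDoorCdnProfile -ResourceGroupName "rg-afd" -Name "contoso-afd"
```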
+
+### Endpoint with hash value
+
+Azure Front Door Standard and Premium endpoints are generated to include a hash value to prevent your domain from being taken over. The format of the endpoint name is `<endpointname>-<hashvalue>.z01.azurefd.net`. The Classic Front Door endpoint name will continue to work after migration but we recommend replacing it with the newly created endpoint name from the Standard or Premium profile. For more information, see [Endpoint domain names](endpoint.md#endpoint-domain-names).
+
+### Logs and metrics
+
+Diagnostic logs and metrics won't be migrated. Azure Front Door Standard/Premium log fields are different from Front Door (classic). The newer tiers also have health probe logs, and we recommend that you enable diagnostic logging after the migration completes. The Standard and Premium tiers also support built-in reports that start displaying data once the migration is done.
+
+## Prerequisites
+
+* HTTPS is required for all custom domains. Azure Front Door Standard and Premium tiers enforce HTTPS on every domain. If you don't have your own certificate, you can use an Azure Front Door managed certificate, which is free and managed for you.
+* If you use BYOC for Azure Front Door (classic), you need to grant Key Vault access to your Azure Front Door Standard or Premium profile by completing the following steps:
+ * Register the service principal for **Microsoft.AzureFrontDoor-Cdn** as an app in your Azure Active Directory using Azure PowerShell.
+ * Grant **Microsoft.AzureFrontDoor-Cdn** access to your Key Vault.
+* Session affinity is enabled from within the origin group in an Azure Front Door Standard and Premium profile. In Azure Front Door (classic), session affinity is controlled at the domain level. As part of the migration, session affinity gets enabled or disabled based on the Classic profile's configuration. If you have two domains in a Classic profile that share the same origin group, session affinity has to be consistent across both domains in order for the migration to pass validation.
+
+> [!IMPORTANT]
+> * If your Azure Front Door (classic) profile qualifies to migrate to Standard tier but the number of resources exceeds the Standard tier quota limit, it will be migrated to Premium tier instead.
+> * If you use Azure PowerShell, Azure CLI, API, or Terraform to do the migration, then you need to create WAF policies separately.
+
+### Web Application Firewall (WAF)
+
+The default Azure Front Door tier created during migration is determined by the type of rules contained in the WAF policy. In this section, we'll cover the scenarios for the different rule types in a WAF policy.
+
+* Classic WAF policy contains only custom rules.
+ * The new Azure Front Door profile defaults to Standard tier and can be upgraded to Premium during migration. If you use the portal for migration, Azure will create custom WAF rules for Standard. If you upgrade to Premium during migration, custom WAF rules will be created by the migration capability, but managed WAF rules will need to be created manually after migration.
+* Classic WAF policy has only managed WAF rules, or both managed and custom WAF rules.
+  * The new Azure Front Door profile defaults to Premium tier and isn't eligible for downgrade during migration. To migrate to Standard tier instead, first remove the WAF policy association or delete the managed WAF rules from the Classic WAF policy.
+
+ > [!NOTE]
+ > To avoid creating duplicate WAF policies during migration, the Azure portal provides the option to either create copies or reuse an existing Azure Front Door Standard or Premium WAF policy.
+
+* If you migrate your Azure Front Door profile using Azure PowerShell or Azure CLI, you need to create the WAF policies separately before migration.
+
+## Naming convention for migration
+
+During the migration, a default profile name is used in the format of `<endpointprefix>-migrated`. For example, a Classic endpoint named `myEndpoint.azurefd.net` will have the default name of `myEndpoint-migrated`.
+The WAF policy name uses the format of `<classicWAFpolicyname>-<standard or premium>`. For example, a Classic WAF policy named `contosoWAF1` will have the default name of `contosoWAF1-premium`. You can rename the Front Door profile and the WAF policy during migration. Renaming of rules engine configurations and routes isn't supported; default names are assigned instead.
+
+URL redirect and URL rewrite are supported through rules engine in Azure Front Door Standard and Premium, while Azure Front Door (classic) supports them through routing rules. During migration, these two rules get created as rules engine rules in a Standard and Premium profile. The names of these rules are `urlRewriteMigrated` and `urlRedirectMigrated`.
+
+## Resource states
+
+The following table explains the various stages of the migration process and if changes can be made to the profile.
+
+| Migration state | Front Door (classic) resource state | Can make changes? | Front Door Standard/Premium | Can make changes? |
+|--|--|--|--|--|
+|Before migration| Active | Yes | N/A | N/A |
+| Step 1: Validating compatibility | Active | Yes | N/A | N/A |
+| Step 2: Preparing for migration | Migrating | No | Creating | No |
+| Step 5: Committing migration | Migrating | No | CommittingMigration | No |
+| Step 5: Committed migration | Migrated | No | Active | Yes |
+| Step 5: Aborting migration | AbortingMigration | No | Deleting | No |
+| Step 5: Aborted migration | Active | Yes | Deleted | N/A |
+
+## Next steps
+
+* Understand the [mapping between Front Door tiers](tier-mapping.md) settings.
+* Learn how to [migrate from Classic to Standard/Premium tier](migrate-tier.md) using the Azure portal.
frontdoor Tier Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-upgrade.md
+
+ Title: Upgrade from Azure Front Door Standard to Premium tier (Preview)
+description: This article provides step-by-step instructions on how to upgrade from an Azure Front Door Standard to an Azure Front Door Premium tier profile.
++++ Last updated : 11/2/2022+++
+# Upgrade from Azure Front Door Standard to Premium tier (Preview)
+
+Azure Front Door supports upgrading from Standard to Premium tier for more advanced capabilities and an increase in quota limits. The upgrade won't cause any downtime to your services or applications. For more information about the differences between Standard and Premium tier, see [Tier comparison](standard-premium/tier-comparison.md).
+
+This article will walk you through how to perform the tier upgrade on the configuration page of a Front Door Standard profile. Once upgraded, you'll be charged the Azure Front Door Premium monthly base fee at an hourly rate.
+
+> [!IMPORTANT]
+> Downgrading from Premium to Standard tier is not supported.
+
+## Prerequisite
+
+Confirm you have an Azure Front Door Standard profile available in your subscription to upgrade.
+
+## Upgrade tier
+
+1. Go to the Azure Front Door Standard profile you want to upgrade and select **Configuration (preview)** from under *Settings*.
+
+ :::image type="content" source="./media/tier-upgrade/overview.png" alt-text="Screenshot of the configuration button under settings for a Front Door standard profile.":::
+
+1. Select **Upgrade** to begin the upgrade process. If you don't have any WAF policies associated to your Front Door Standard profile, then you'll be prompted with a confirmation to proceed with the upgrade.
+
+   :::image type="content" source="./media/tier-upgrade/upgrade-button.png" alt-text="Screenshot of the upgrade button on the configuration page of a Front Door Standard profile.":::
+
+1. If you have WAF policies associated to the Front Door Standard profile, then you'll be taken to the *Upgrade WAF policies* page. On this page, you'll decide whether you want to make copies of the WAF policies or use an existing premium WAF policy. You can also change the name of the new WAF policy copy during this step.
+
+ :::image type="content" source="./media/tier-upgrade/upgrade-waf.png" alt-text="Screenshot of the upgrade WAF policies page.":::
+
+ > [!NOTE]
+ > To use managed WAF rules for the new premium WAF policy copies, you'll need to manually enable them after the upgrade.
+
+1. Select **Upgrade** once you're done setting up the WAF policies. Select **Yes** to confirm you would like to proceed with the upgrade.
+
+ :::image type="content" source="./media/tier-upgrade/confirm-upgrade.png" alt-text="Screenshot of the confirmation message from upgrade WAF policies page.":::
+
+1. The upgrade process will create new premium WAF policy copies and associate them to the upgraded Front Door profile. The upgrade can take a few minutes to complete depending on the complexity of your Front Door profile.
+
+ :::image type="content" source="./media/tier-upgrade/upgrade-in-progress.png" alt-text="Screenshot of the configuration page with upgrade in progress status.":::
+
+1. Once the upgrade completes, you'll see **Tier: Premium** display on the *Configuration* page.
+
+ :::image type="content" source="./media/tier-upgrade/upgrade-complete.png" alt-text="Screenshot of the Front Door tier upgraded to premium on the configuration page.":::
+
+ > [!NOTE]
+ > You're now being billed for the Azure Front Door Premium base fee at an hourly rate.
+
+## Next steps
+
+* Learn more about [Managed rule for WAF policy](../web-application-firewall/afds/waf-front-door-drs.md).
+* Learn how to enable [Private Link to origin resources](private-link.md).
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
Title: Azure IoT Edge and Azure IoT Central | Microsoft Docs
description: Understand how to use Azure IoT Edge with an IoT Central application. Previously updated : 06/08/2022 Last updated : 10/11/2022
To learn more about IoT Edge, see [What is Azure IoT Edge?](../../iot-edge/about
IoT Edge is made up of three components:
-* *IoT Edge modules* are containers that run Azure services, partner services, or your own code. Modules are deployed to IoT Edge devices, and run locally on those devices. To learn more, see [Understand Azure IoT Edge modules](../../iot-edge/iot-edge-modules.md).
-* The *IoT Edge runtime* runs on each IoT Edge device, and manages the modules deployed to each device. The runtime consists of two IoT Edge modules: *IoT Edge agent* and *IoT Edge hub*. To learn more, see [Understand the Azure IoT Edge runtime and its architecture](../../iot-edge/iot-edge-runtime.md).
+* [IoT Edge modules](../../iot-edge/iot-edge-modules.md) are containers that run Azure services, partner services, or your own code. Modules are deployed to IoT Edge devices, and run locally on those devices. A [deployment manifest](../../iot-edge/module-composition.md) specifies the modules to deploy to an IoT Edge device.
+* The [IoT Edge runtime](../../iot-edge/iot-edge-runtime.md) runs on each IoT Edge device, and manages the modules deployed to each device. The runtime consists of two IoT Edge modules: [IoT Edge agent and IoT Edge hub](../../iot-edge/module-edgeagent-edgehub.md).
* A *cloud-based interface* enables you to remotely monitor and manage IoT Edge devices. IoT Central is an example of a cloud interface.

IoT Central enables the following capabilities for IoT Edge devices:
+* Deployment manifest management. An IoT Central application can manage a collection of deployment manifests and assign them to devices.
* Device templates to describe the capabilities of an IoT Edge device, such as:
- * Deployment manifest upload capability, which helps you manage a manifest for a fleet of devices.
- * Modules that run on the IoT Edge device.
- * The telemetry each module sends.
- * The properties each module reports.
- * The commands each module responds to.
+ * The telemetry each IoT Edge module sends.
+ * The properties each IoT Edge module reports.
+ * The commands each IoT Edge module responds to.
  * The relationships between an IoT Edge gateway device and downstream device.
* Cloud properties that aren't stored on the IoT Edge device.
* Device views and forms.
An IoT Edge device can be:
IoT Edge devices can use *shared access signature* tokens or X.509 certificates to authenticate with IoT Central. You can manually register your IoT Edge devices in IoT Central before they connect for the first time, or use the Device Provisioning Service to handle the registration. To learn more, see [How devices connect](overview-iot-central-developer.md#how-devices-connect).
-IoT Central uses [device templates](concepts-device-templates.md) to define how IoT Central interacts with a device. For example, a device template specifies:
+IoT Central optionally uses [device templates](concepts-device-templates.md) to define how IoT Central interacts with an IoT Edge device. For example, a device template specifies:
-* The types of telemetry and properties a device sends so that IoT Central can interpret them and create visualizations.
-* The commands a device responds to so that IoT Central can display a UI for an operator to use to call the commands.
+* The types of telemetry and properties an IoT Edge device sends so that IoT Central can interpret them and create visualizations.
+* The commands an IoT Edge device responds to so that IoT Central can display a UI for an operator to use to call the commands.
-An IoT Edge device can send telemetry, synchronize property values, and respond to commands in the same way as a standard device. So, an IoT Edge device needs a device template in IoT Central.
+If there's no device template associated with a device, telemetry and property values display as *unmodeled* data. However, you can still use IoT Central data export capabilities to forward telemetry and property values to other backend services.
-### IoT Edge device templates
-
-IoT Central device templates use models to describe the capabilities of devices. The following diagram shows the structure of the model for an IoT Edge device:
--
-IoT Central models an IoT Edge device as follows:
-
-* Every IoT Edge device template has a capability model.
-* For every custom module listed in the deployment manifest, a module capability model is generated.
-* A relationship is established between each module capability model and a device model.
-* A module capability model implements one or more module interfaces.
-* Each module interface contains telemetry, properties, and commands.
-
-### IoT Edge deployment manifests and IoT Central device templates
+## IoT Edge deployment manifests
In IoT Edge, you deploy and manage business logic in the form of modules. IoT Edge modules are the smallest unit of computation managed by IoT Edge, and can contain Azure services such as Azure Stream Analytics, or your own solution-specific code.
-An IoT Edge *deployment manifest* lists the IoT Edge modules to deploy on the device and how to configure them. To learn more, see [Learn how to deploy modules and establish routes in IoT Edge](../../iot-edge/module-composition.md).
+An IoT Edge [deployment manifest](../../iot-edge/module-composition.md) lists the IoT Edge modules to deploy on the device and how to configure them.
-In Azure IoT Central, you import a deployment manifest to create a device template for the IoT Edge device.
+In Azure IoT Central, navigate to **Edge manifests** to import and manage the deployment manifests for the IoT Edge devices in your solution.
The following code snippet shows an example IoT Edge deployment manifest:
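(The snippet below is a reconstruction offered as a sketch: the modules, route, and property values match the notes that follow, but treat the schema versions and image tags as illustrative.)

```json
{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "runtime": {
          "type": "docker",
          "settings": {
            "minDockerVersion": "v1.25",
            "loggingOptions": "",
            "registryCredentials": {}
          }
        },
        "systemModules": {
          "edgeAgent": {
            "type": "docker",
            "settings": {
              "image": "mcr.microsoft.com/azureiotedge-agent:1.0.9",
              "createOptions": "{}"
            }
          },
          "edgeHub": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "mcr.microsoft.com/azureiotedge-hub:1.0.9",
              "createOptions": "{}"
            }
          }
        },
        "modules": {
          "SimulatedTemperatureSensor": {
            "version": "1.0",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0",
              "createOptions": "{}"
            }
          }
        }
      }
    },
    "$edgeHub": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "routes": {
          "route": "FROM /* INTO $upstream"
        },
        "storeAndForwardConfiguration": {
          "timeToLiveSecs": 7200
        }
      }
    },
    "SimulatedTemperatureSensor": {
      "properties.desired": {
        "SendData": true,
        "SendInterval": 10
      }
    }
  }
}
```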
In the previous snippet, you can see:
* There are three modules: the *IoT Edge agent* and *IoT Edge hub* system modules that are present in every deployment manifest, and the custom **SimulatedTemperatureSensor** module.
* The public module images are pulled from an Azure Container Registry repository that doesn't require any credentials to connect. For private module images, set the container registry credentials to use in the `registryCredentials` setting for the *IoT Edge agent* module.
-* The custom **SimulatedTemperatureSensor** module has two properties `"SendData": true` and `"SendInterval": 10`.
+* The custom **SimulatedTemperatureSensor** module has two writable properties: `"SendData": true` and `"SendInterval": 10`.
-When you import this deployment manifest into an IoT Central application, it generates the following device template:
+The following screenshot shows this deployment manifest imported into IoT Central:
-In the previous screenshot you can see:
+If your application uses [organizations](howto-create-organizations.md), you can assign your deployment manifests to specific organizations. The previous screenshot shows the deployment manifest assigned to the **Store Manager / Americas** organization.
-* A module called **SimulatedTemperatureSensor**. The *IoT Edge agent* and *IoT Edge hub* system modules don't appear in the template.
-* An interface called **management** that includes two writable properties called **SendData** and **SendInterval**.
+To learn how to use the **Edge manifests** page and assign deployment manifests to IoT Edge devices, see [Manage IoT Edge deployment manifests in your IoT Central application](howto-manage-deployment-manifests.md).
-The deployment manifest doesn't include information about the telemetry the **SimulatedTemperatureSensor** module sends or the commands it responds to. Add these definitions to the device template manually before you publish it.
+### Manage an unassigned device
-To learn more, see [Tutorial: Add an Azure IoT Edge device to your Azure IoT Central application](/training/modules/connect-iot-edge-device-to-iot-central/).
+An IoT Edge device that doesn't have an associated device template is known as an *unassigned* device. You can't use IoT Central features such as dashboards, device groups, analytics, rules, and jobs with unassigned devices. However, you can use the following capabilities with unassigned devices:
-### Update a deployment manifest
+* View raw data such as telemetry and properties.
+* Call device commands.
+* Read and write properties.
-When you replace the deployment manifest, any connected IoT Edge devices download the new manifest and update their modules. However, IoT Central doesn't update the interfaces in the device template with any changes to the module configuration. For example, if you replace the manifest shown in the previous snippet with the following manifest, you don't automatically see the **SendUnits** property in the **management** interface in the device template. Manually add the new property to the **management** interface for IoT Central to recognize it:
-```json
-{
- "modulesContent": {
- "$edgeAgent": {
- "properties.desired": {
- "schemaVersion": "1.0",
- "runtime": {
- "type": "docker",
- "settings": {
- "minDockerVersion": "v1.25",
- "loggingOptions": "",
- "registryCredentials": {}
- }
- },
- "systemModules": {
- "edgeAgent": {
- "type": "docker",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-agent:1.0.9",
- "createOptions": "{}"
- }
- },
- "edgeHub": {
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-hub:1.0.9",
- "createOptions": "{}"
- }
- }
- },
- "modules": {
- "SimulatedTemperatureSensor": {
- "version": "1.0",
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0",
- "createOptions": "{}"
- }
- }
- }
- }
- },
- "$edgeHub": {
- "properties.desired": {
- "schemaVersion": "1.0",
- "routes": {
- "route": "FROM /* INTO $upstream"
- },
- "storeAndForwardConfiguration": {
- "timeToLiveSecs": 7200
- }
- }
- },
- "SimulatedTemperatureSensor": {
- "properties.desired": {
- "SendData": true,
- "SendInterval": 10,
- "SendUnits": "Celsius"
- }
- }
- }
-}
-```
+You can also manage individual modules on unassigned devices:
++
+## IoT Edge device templates
+
+IoT Central device templates use models to describe the capabilities of IoT Edge devices. Device templates are optional for IoT Edge devices. The device template enables you to interact with telemetry, properties, and commands using IoT Central capabilities such as dashboards and analytics. The following diagram shows the structure of the model for an IoT Edge device:
++
+IoT Central models an IoT Edge device as follows:
+
+* Every IoT Edge device template has a capability model.
+* For every custom module listed in the deployment manifest, add a module definition if you want to use IoT Central to interact with that module.
+* A module capability model implements one or more module interfaces.
+* Each module interface contains telemetry, properties, and commands.
+
+You can generate the basic capability model based on the modules and properties defined in the deployment manifest. To learn more, see [Add modules and properties to device templates](howto-manage-deployment-manifests.md#add-modules-and-properties-to-device-templates).
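For illustration only, a module interface produced this way might resemble the following DTDL v2 sketch. The interface name `management` and the `SendData`/`SendInterval` properties come from the *SimulatedTemperatureSensor* example used in this article; the `@id` value is a hypothetical placeholder:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:sample:SimulatedTemperatureSensor:management;1",
  "@type": "Interface",
  "displayName": "management",
  "contents": [
    {
      "@type": "Property",
      "name": "SendData",
      "schema": "boolean",
      "writable": true
    },
    {
      "@type": "Property",
      "name": "SendInterval",
      "schema": "integer",
      "writable": true
    }
  ]
}
```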
## IoT Edge gateway patterns
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
You can repeat the above steps for the _mytestselfcertsecondary_ certificate as well.
This section assumes you're using a group enrollment to connect your IoT Edge device. Follow the steps in the previous sections to:

- [Generate root and device certificates](#generate-root-and-device-certificates)
-- [Create a group enrollment](#create-a-group-enrollment) <!-- No slightly different type of enrollment group - UPDATE!! -->
+- [Create a group enrollment](#create-a-group-enrollment)
To connect the IoT Edge device to IoT Central using the X.509 device certificate:
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
Title: Connect an IoT Edge transparent gateway to an Azure IoT Central applicati
description: How to connect devices through an IoT Edge transparent gateway to an IoT Central application. The article shows how to use both the IoT Edge 1.1 and 1.2 runtimes. Previously updated : 05/08/2022 Last updated : 10/11/2022
To follow the steps in this article, download the following files to your comput
+## Import deployment manifest
+
+Every IoT Edge device needs a deployment manifest to configure the IoT Edge runtime. To import a deployment manifest for the IoT Edge transparent gateway:
+
+1. Navigate to **Edge manifests**.
+
+1. Select **+ New**, enter a name for the deployment manifest such as *Transparent gateway*, and then upload the *EdgeTransparentGatewayManifest.json* file you downloaded previously.
+
+1. Select **Create** to save the deployment manifest in your application.
+
## Add device templates

Both the downstream devices and the gateway device can use device templates in IoT Central. IoT Central lets you model the relationship between your downstream devices and your gateway so you can view and manage them after they're connected. A device template isn't required to attach a downstream device to a gateway.
To create a device template for an IoT Edge transparent gateway device:
1. On the **Customize** page of the wizard, enter a name such as *Edge Gateway* for the device template.
-1. On the **Customize** page of the wizard, check **Gateway device with downstream devices**.
+1. On the **Customize** page of the wizard, check **This is a gateway device**.
+
+1. On the **Review** page, select **Create**.
-1. On the **Customize** page of the wizard, select **Browse**. Upload the *EdgeTransparentGatewayManifest.json* file you downloaded previously.
+1. On the **Create a model** page, select **Custom model**.
1. Add an entry in **Relationships** to the downstream device template.
To add the devices:
1. Navigate to the devices page in your IoT Central application.
-1. Add an instance of the transparent gateway IoT Edge device. In this article, the gateway device ID is `edgegateway`.
+1. Add an instance of the transparent gateway IoT Edge device. When you add the device, make sure that you select the **Transparent gateway** deployment manifest. In this article, the gateway device ID is `edgegateway`.
1. Add one or more instances of the downstream device. In this article, the downstream devices are thermostats with IDs `thermostat1` and `thermostat2`.
To let you try out this scenario, the following steps show you how to deploy the
To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge 1.1 runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you run code to send simulated thermostat telemetry:
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway-1-1%2FDeployGatewayVMs.json)
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-1%2FDeployGatewayVMs.json)
When the two virtual machines are deployed and running, verify the IoT Edge gateway device is running on the `edgegateway` virtual machine:
When the two virtual machines are deployed and running, verify the IoT Edge gate
To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge 1.2 runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you run code to send simulated thermostat telemetry:
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway-1-2%2FDeployGatewayVMs.json)
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-2%2FDeployGatewayVMs.json)
When the two virtual machines are deployed and running, verify the IoT Edge gateway device is running on the `edgegateway` virtual machine:
Your transparent gateway is now configured and ready to start forwarding telemet
sudo nano /etc/aziot/config.toml
```
-1. Locate the `Certificate settings` settings. Add the certificate settings as follows:
+1. Locate the following settings in the configuration file. Add the certificate settings as follows:
```text
trust_bundle_cert = "file:///home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem"
```
To run the thermostat simulator on the `leafdevice` virtual machine:
...
```
+ > [!TIP]
+ > If you see an error when the downstream device tries to connect, try re-running the device provisioning steps above.
+
1. To see the telemetry in IoT Central, navigate to the **Overview** page for the **thermostat1** device:

    :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/downstream-device-telemetry.png" alt-text="Screenshot showing telemetry from the downstream device." lightbox="media/how-to-connect-iot-edge-transparent-gateway/downstream-device-telemetry.png":::
iot-central Howto Connect Eflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md
Title: Azure IoT Edge for Linux on Windows (EFLOW) with IoT Central | Microsoft Docs description: Learn how to connect Azure IoT Edge for Linux on Windows (EFLOW) with IoT Central -- Previously updated : 06/16/2022++ Last updated : 10/11/2022
In this how-to article, you learn how to:
+* Import a deployment manifest for an IoT Edge device.
* Create a device template for an IoT Edge device.
* Create an IoT Edge device in IoT Central.
* Connect and provision an EFLOW device.
To complete the steps in this article, you need:
To follow the steps in this article, download the [EnvironmentalSensorManifest.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/EnvironmentalSensorManifest.json) file to your computer.
+## Import a deployment manifest
+
+You use a deployment manifest to specify the modules to run on an IoT Edge device. IoT Central manages the deployment manifests for the IoT Edge devices in your solution. To import the deployment manifest for this example:
+
+1. In your IoT Central application, navigate to **Edge manifests**.
+
+1. Select **+ New**. Enter a name such as *Environmental Sensor* for your deployment manifest, and then upload the *EnvironmentalSensorManifest.json* file you downloaded previously.
+
+1. Select **Next** and then **Create**.
+
+The example deployment manifest includes a custom module called *SimulatedTemperatureSensor*.
+
## Add device template

In this section, you create an IoT Central device template for an IoT Edge device. You import an IoT Edge manifest to get started, and then modify the template to add telemetry definitions and views:
In this section, you create an IoT Central device template for an IoT Edge devic
1. On the **Customize** page of the wizard, enter a name such as *Environmental Sensor Edge Device* for the device template.
-1. Select **Browse** and upload the *EnvironmentalSensorManifest.json* manifest file you downloaded previously.
- 1. On the **Review** page, select **Create**.
+1. On the **Create a model** page, select **Custom model**.
+
+1. In the model, select **Modules** and then **Import modules from manifest**. Select the **Environmental Sensor** deployment manifest and then select **Import**.
+
1. Select the **management** interface in the **SimulatedTemperatureSensor** module to view the two properties defined in the manifest:

    :::image type="content" source="media/howto-connect-eflow/imported-manifest.png" alt-text="Device template created from IoT Edge manifest.":::
Before you can connect a device to IoT Central, you must register the device in
1. In your IoT Central application, navigate to the **Devices** page and select **Environmental Sensor Edge Device** in the list of available templates.
-1. Select **+ New** to add a new device from the template. On the **Create new device** page, select **Create**.
+1. Select **+ New** to add a new device from the template.
+
+1. On the **Create new device** page, select the **Environmental Sensor** deployment manifest, and then select **Create**.
You now have a new device with the status **Registered**:
Go to the **Device Details** page in your IoT Central application and you can se
:::image type="content" source="media/howto-connect-eflow/telemetry.png" alt-text="Telemetry from the device.":::
+## Clean up resources
+
+If you want to remove the Azure IoT Edge for Linux on Windows installation from your device, use the following steps:
+
+1. Open **Settings** on Windows.
+1. Select **Add or Remove Programs**.
+1. Select the **Azure IoT Edge LTS** app.
+1. Select **Uninstall**.
## Next Steps

Now that you've learned how to connect Azure IoT Edge for Linux on Windows (EFLOW) with IoT Central, the suggested next step is to learn how to [Connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
After you've created your organization hierarchy, you can use organizations in ar
- [Organization dashboards](howto-manage-dashboards.md) that show information to users about devices in their organization.
- [Device groups](tutorial-use-device-groups.md) for devices in specific organizations.
+- [IoT Edge deployment manifests](concepts-iot-edge.md#iot-edge-deployment-manifests) for deployment manifests associated with specific organizations.
- [Analytics](howto-create-analytics.md) for devices in specific organizations.
- [Jobs](howto-manage-devices-in-bulk.md#create-and-run-a-job) that bulk manage devices in specific organizations.
To set the default organization, select **Settings** on the top menu bar:
:::image type="content" source="media/howto-create-organization/set-default-organization.png" alt-text="Screenshot that shows how to set your default organization." lightbox="media/howto-create-organization/set-default-organization.png"::: - ## Add organizations to an existing application An application may contain devices, users, and experiences such as dashboards, device groups, and jobs before you add an organization hierarchy.
The following limits apply to organizations:
- The hierarchy can be no more than five levels deep.
- The total number of organizations can't be more than 200. Each node in the hierarchy counts as an organization.
-
## Next steps

Now that you've learned how to manage Azure IoT Central organizations, the suggested next step is to learn how to [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
iot-central Howto Edit Device Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-edit-device-template.md
In early device development phases, while you're still designing and testing the
After you attach production devices to a device template, evaluate the impact of any changes before you edit a device template. You shouldn't make breaking changes to a device template in production. To make such changes, create a new version of the device template. Test the new device template and then migrate your production devices to the new template at a scheduled downtime.
-## Update an IoT Edge device template
+### Update an IoT Edge device template
-IoT Edge device templates contain a _deployment manifest_ in addition to the device model. For an IoT Edge device, the model groups capabilities by modules that correspond to the IoT Edge modules running on the device. The deployment manifest is a separate JSON document that tells an IoT Edge device which modules to install and how to configure them. The same guidance as outlined in the previous section applies to the modules in the device model. Also, every module defined in the device model must be included in the deployment manifest. Once an IoT Edge device template is published, you must create a new version if you need to replace the deployment manifest. For IoT Edge devices to receive the new deployment manifest, migrate them to the new template version.
+For an IoT Edge device, the model groups capabilities by modules that correspond to the IoT Edge modules running on the device. The deployment manifest is a separate JSON document that tells an IoT Edge device which modules to install, how to configure them, and what properties the module has. If you've modified a deployment manifest, you can update the device template to include the modules and properties defined in the manifest:
-To learn more, see [IoT Edge deployment manifests and IoT Central device templates](concepts-iot-edge.md#iot-edge-deployment-manifests-and-iot-central-device-templates).
+1. Navigate to the **Modules** node in the device template.
+1. On the **Modules summary** page, select **Import modules from manifest**.
+1. Select the appropriate deployment manifest and select **Import**.
+
+To learn more, see [IoT Edge devices and IoT Central](concepts-iot-edge.md#iot-edge-devices-and-iot-central).
### Edit and publish actions
iot-central Howto Manage Deployment Manifests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-deployment-manifests.md
+
+ Title: Manage Azure IoT Edge deployment manifests | Microsoft Docs
+description: This article describes how to manage the deployment manifests for the IoT Edge devices that connect to your IoT Central application.
++++ Last updated : 10/05/2022+++
+# Manage IoT Edge deployment manifests in your IoT Central application
+
+A deployment manifest lets you specify the modules the IoT Edge runtime should download and configure. An IoT Edge device can download a deployment manifest when it first connects to your IoT Central application. This article describes how you manage these deployment manifests in your IoT Central application.
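For orientation, every deployment manifest shares the same top-level shape: a `modulesContent` object with desired properties for the `$edgeAgent` and `$edgeHub` system modules, plus an optional twin section per custom module. The following skeleton is a sketch only; the image tags and route are illustrative:

```json
{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "runtime": { "type": "docker", "settings": { "registryCredentials": {} } },
        "systemModules": {
          "edgeAgent": { "type": "docker", "settings": { "image": "mcr.microsoft.com/azureiotedge-agent:1.4", "createOptions": "{}" } },
          "edgeHub": { "type": "docker", "status": "running", "restartPolicy": "always", "settings": { "image": "mcr.microsoft.com/azureiotedge-hub:1.4", "createOptions": "{}" } }
        },
        "modules": {}
      }
    },
    "$edgeHub": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "routes": { "route": "FROM /messages/* INTO $upstream" },
        "storeAndForwardConfiguration": { "timeToLiveSecs": 7200 }
      }
    }
  }
}
```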
+
+To learn more about IoT Edge and IoT Central, see [Connect Azure IoT Edge devices to an Azure IoT Central application](concepts-iot-edge.md).
+
+<!-- TODO: Link to REST API article -->
+
+## Manage deployment manifests
+
+The **Edge manifests** page lets you manage the deployment manifests in your application. From this page you can:
+
+- Upload or create deployment manifests
+- Modify existing deployment manifests
+- Delete deployment manifests
+
+### Upload and create deployment manifests
+
+When you create a new deployment manifest, you can upload the deployment manifest JSON file or start with an existing manifest:
+
+1. On the **Edge manifests** page, select **+ New**.
+
+1. Enter a name for the deployment manifest.
+
+1. If your application uses organizations, select an organization to associate the deployment manifest with.
+
+1. Browse for a deployment manifest file to upload or choose an existing deployment manifest as a starting point for your new one. IoT Central validates any uploaded files.
+
+ :::image type="content" source="media/howto-manage-deployment-manifests/uploaded-deployment-manifest.png" alt-text="Screenshot that shows an uploaded and validated deployment manifest.":::
+
+1. Select **Next**. The **Review and finish** page shows information about the deployment manifest and the modules it defines. You can also view the raw JSON.
+
+1. Select **Create**. The **Edge manifests** page now includes the new deployment manifest.
+
+> [!TIP]
+> If you have a large number of deployment manifests, you can sort and filter the list shown on the **Edge manifests** page.
+
+### Edit the JSON source of a deployment manifest
+
+To modify a deployment manifest by editing the JSON directly:
+
+1. Navigate to the **Edge manifests** page.
+
+1. Select **Edit JSON** in the context menu for the deployment manifest you want to modify.
+
+1. Use the JSON editor to make the required changes. Then select **Save**.
+
+### Replace the content of a deployment manifest
+
+To completely replace the content of a deployment manifest:
+
+1. Navigate to the **Edge manifests** page.
+
+1. Click on the name of the deployment manifest you want to replace.
+
+1. In the **Customize** dialog, browse for a new deployment manifest file to upload or choose an existing deployment manifest as a starting point. IoT Central validates any uploaded files.
+
+1. Select **Next**. The **Review and finish** page shows information about the new deployment manifest and the modules it defines. You can also view the raw JSON.
+
+1. Select **Save**. The **Edge manifests** page now includes the updated deployment manifest.
+
+## Manage IoT Edge devices
+
+When you add an IoT Edge device on the devices page, you can choose a deployment manifest for the device. In the **Create a new device** dialog, you can choose from the list of deployment manifests previously uploaded on the **Edge manifests** page. It's also possible to add a deployment manifest directly to a device after you create the device.
+
+If you add an IoT Edge device that isn't assigned to a device template, the **Create a new device** dialog looks like the following screenshot:
++
+To choose the deployment manifest for the device:
+
+1. Toggle **Azure IoT Edge device?** to **Yes**.
+
+1. Select the IoT Edge deployment manifest to use. You can also choose to assign a deployment manifest after you create the device.
+
+1. Select **Create**.
+
+If you add an IoT Edge device that is assigned to a device template, the **Create a new device** dialog looks like the following screenshot:
++
+To choose the deployment manifest for the device:
+
+1. The **Azure IoT Edge device?** toggle is already set to **Yes** because IoT Central recognizes that you're using an IoT Edge device template.
+
+1. Select the IoT Edge deployment manifest to use. You can also choose to assign a deployment manifest after you create the device.
+
+1. Select **Create**.
+
+When an IoT Edge device connects to your application for the first time, it downloads the deployment manifest, configures the modules specified in the deployment manifest, and runs the modules.
+
+If you don't select a deployment manifest when you create an IoT Edge device, you can assign one later either individually or to multiple devices by using a job.
+
+### Update the deployment manifest a device uses
+
+You can manage the deployment manifest for an existing device:
++
+Use **Assign edge manifest** to select a previously uploaded deployment manifest from the **Edge manifests** page. You can also use this option to manually notify a device if you've modified the deployment manifest on the **Edge manifests** page.
+
+Use **Edit manifest** to modify the deployment manifest for this device. Changes you make here don't affect the deployment manifest on the **Edge manifests** page.
+
+### Jobs
+
+To assign or update the deployment manifest for multiple devices, use a [job](howto-manage-devices-in-bulk.md). Use the **Change edge deployment manifest** job type:
++
+## Add modules and properties to device templates
+
+A deployment manifest defines the modules to run on the device and optionally [writable properties](../../iot-edge/module-composition.md?#define-or-update-desired-properties) that you can use to configure modules.
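For example, assuming a custom module named `SimulatedTemperatureSensor` (the module used in the sample manifests in these articles), the module twin section of the manifest might declare two writable properties like this:

```json
"SimulatedTemperatureSensor": {
  "properties.desired": {
    "SendData": true,
    "SendInterval": 10
  }
}
```

These desired properties are what IoT Central can surface as writable property definitions when you import the modules from the manifest into a device template.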
+
+If you're assigning a device template to an IoT Edge device, you may want to define the modules and writable properties in the device template. To add the modules and property definitions to a device template:
+
+1. Navigate to the **Modules Summary** page of the IoT Edge device template.
+1. Select **Import modules from manifest**.
+1. Select the appropriate deployment manifest from the list.
+1. Select **Import**. IoT Central adds the custom modules defined in the deployment manifest to the device template. The names of the modules in the device template match the names of the custom modules in the deployment manifest. The generated interface includes property definitions for the properties defined for the custom module in the deployment manifest:
++
+## Next steps
+
+Now that you've learned how to manage IoT Edge deployment manifests in your Azure IoT Central application, the suggested next step is to learn how to [connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
The following example shows you how to create and run a job to set the light thr
1. Select the target device group that you want your job to apply to. If your application uses organizations, the selected organization determines the available device groups. You can see how many devices your job configuration applies to below your **Device group** selection.
-1. Choose **Cloud property**, **Property**, **Command**, or **Change device template** as the **Job type**:
+1. Choose **Cloud property**, **Property**, **Command**, **Change device template**, or **Change edge deployment manifest** as the **Job type**:
- To configure a **Property** job, select a property and set its new value. A property job can set multiple properties. To configure a **Command** job, choose the command to run. To configure a **Change device template** job, select the device template to assign to the devices in the device group.
+ To configure a **Property** job, select a property and set its new value. A property job can set multiple properties. To configure a **Command** job, choose the command to run. To configure a **Change device template** job, select the device template to assign to the devices in the device group. To configure a **Change edge deployment manifest** job, select the IoT Edge deployment manifest to assign to the IoT Edge devices in the device group.
Select **Save and exit** to add the job to the list of saved jobs on the **Jobs** page. You can later return to a job from the list of saved jobs.
If your devices use SAS tokens to authenticate, [export a CSV file from your IoT
If your devices use X.509 certificates to authenticate, generate X.509 leaf certificates for your devices using the root or intermediate certificate in your X.509 enrollment group. Use the device IDs you imported as the `CNAME` value in the leaf certificates.
-
## Export devices

To connect a real device to IoT Central, you need its connection string. You can export device details in bulk to get the information you need to create device connection strings. The export process creates a CSV file with the device identity, device name, and keys for all the selected devices.
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
When you define a custom role, you choose the set of permissions that a user is
| Manage global | Read global |
| Full Control | Read instance, Manage instance, Read global, Manage global <br/> Other dependencies: View device templates, device groups, device instances |
+**Edge deployment manifests**
+
+| Name | Dependencies |
+| - | -- |
+| Read instance | None <br/> Other dependencies: View device templates, device groups, device instances |
+| Manage instance | Read instance <br /> Other dependencies: View device templates, device groups, device instances |
+| Read global | None |
+| Manage global | Read global |
+| Full Control | Read instance, Manage instance, Read global, Manage global <br/> Other dependencies: View device templates, device groups, device instances. Update device instances |
**Jobs permissions**

| Name | Dependencies |
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
Title: Transform data for Azure IoT Central | Microsoft Docs
description: IoT devices send data in various formats that you may need to transform. This article describes how to transform data both on the way into IoT Central and on the way out. The scenarios described use IoT Edge and Azure Functions. Previously updated : 06/24/2022 Last updated : 10/11/2022
To create a container registry:
To build the custom module in the [Azure Cloud Shell](https://shell.azure.com/):
-1. In the [Azure Cloud Shell](https://shell.azure.com/), create a new folder and navigate to it by running the following commands:
-
- ```azurecli
- mkdir yournewfolder
- cd yournewfolder
- ```
-
-1. To clone the GitHub repository that contains the module source code, run the following command:
+1. In the [Azure Cloud Shell](https://shell.azure.com/), clone the GitHub repository that contains the module source code:
```azurecli
git clone https://github.com/iot-for-all/iot-central-transform-with-iot-edge
```
To create a device template for the IoT Edge gateway device:
1. Find the `settings` section for the `transformmodule`. Replace `<acr or docker repo>` with the same `address` value you used in the previous step. Save the changes.
+1. In your IoT Central application, navigate to the **Edge manifests** page.
+
+1. Select **+ New**. Enter a name such as *Transformer* for your deployment manifest, and then upload the *moduledeployment.json* file you downloaded previously. The deployment manifest includes a custom module called *transformmodule*.
+
+1. Select **Next** and then **Create**.
1. In your IoT Central application, navigate to the **Device templates** page.

1. Select **+ New**, select **Azure IoT Edge**, and then select **Next: Customize**.
-1. Enter *IoT Edge gateway device* as the device template name. Don't select **This is a gateway device**. Select **Browse** to upload the *moduledeployment.json* deployment manifest file you edited previously.
+1. Enter *IoT Edge gateway device* as the device template name. Don't select **This is a gateway device**.
+
+1. Select **Next: Review**, then select **Create**.
+
+1. On the **Create a model** page, select **Custom model**.
-1. When the deployment manifest is validated, select **Next: Review**, then select **Create**.
+1. In the model, select **Modules** and then **Import modules from manifest**. Select the **Transformer** deployment manifest and then select **Import**.
The deployment manifest doesn't specify the telemetry the module sends. To add the telemetry definitions to the device template:
To register a gateway device in IoT Central:
1. In your IoT Central application, navigate to the **Devices** page.
-1. Select **IoT Edge gateway device** and select **Create a device**. Enter *IoT Edge gateway device* as the device name, enter *gateway-01* as the device ID, make sure **IoT Edge gateway device** is selected as the device template and **No** is selected as **Simulate this device?**. Select **Create**.
+1. Select **IoT Edge gateway device** and select **Create a device**. Enter *IoT Edge gateway device* as the device name, enter *gateway-01* as the device ID, make sure **IoT Edge gateway device** is selected as the device template and **No** is selected as **Simulate this device?**. Select **Transformer** as the edge manifest. Select **Create**.
1. In the list of devices, click on the **IoT Edge gateway device**, and then select **Connect**.
To register a downstream device in IoT Central:
1. Don't select a device template. Select **+ New**. Enter *Downstream 01* as the device name, enter *downstream-01* as the device ID, make sure that the device template is **Unassigned** and **No** is selected as **Simulate this device?**. Select **Create**.
-1. In the list of devices, click on the **Downstream 01**, and then select **Connect**.
+1. In the list of devices, click on the **Downstream 01** device, and then select **Connect**.
1. Make a note of the **ID scope**, **Device ID**, and **Primary key** values for the **Downstream 01** device. You use them later.
For convenience, this article uses Azure virtual machines to run the gateway and
| Authentication Type | Password |
| Admin Password Or Key | Your choice of password for the **AzureUser** account on both virtual machines. |
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway%2FDeployGatewayVMs.json)
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-1%2FDeployGatewayVMs.json)
Select **Review + Create**, and then **Create**. It takes a couple of minutes to create the virtual machines in the **ingress-scenario** resource group.
To add a device template to your IoT Central application, navigate to your IoT C
1. After the model is imported, select **Publish** to publish the **Compute model** device template.
-To set up the data export to send data to your Device bridge:
+Set up the data export to send data to your Device bridge:
1. In your IoT Central application, select **Data export**.
To set up the data export to send data to your Device bridge:
1. Select **+ New export** and create a data export called *Compute export*.
-1. Add a filter to only export device data for the device template you're using. Select **+ Filter**, select item **Device template**, select the operator **Equals**, and select the **Compute model** device template you just created.
+1. Add a filter to only export device data for the device template you're using. Select **+ Filter**, select item **Device template**, select the operator **Equals**, and select the **Compute model** device template you created.
1. Add a message filter to differentiate between transformed and untransformed data. This filter prevents sending transformed values back to the device bridge. Select **+ Message property filter** and enter the name value *computed*, then select the operator **Does not exist**. The string `computed` is used as a keyword in the device bridge example code.
To set up the data export to send data to your Device bridge:
### Verify
-The sample device you use to test the scenario is written in Node.js. Make sure you have Node.js and NPM installed on your local machine. If you don't want to install these prerequisites, use the [Azure Cloud Shell](https://shell.azure.com/) that has them preinstalled.
+The sample device you use to test the scenario is written in Node.js. Make sure you have Node.js and npm installed on your local machine. If you don't want to install these prerequisites, use the [Azure Cloud Shell](https://shell.azure.com/) that has them preinstalled.
To run a sample device that tests the scenario:
To run a sample device that tests the scenario:
send status: MessageEnqueued [{"data":"40.5, 36.41, 14.6043, 14.079"}]
```
-1. In your IoT Central application, navigate to the device called **computeDevice**. On the **Raw data** view there are two different telemetry streams that show up around every five seconds. The stream with un-modeled data is the original telemetry, the stream with modeled data is the data that the function transformed:
+1. In your IoT Central application, navigate to the device called **computeDevice**. On the **Raw data** view, there are two different telemetry streams that show up around every five seconds. The stream with unmodeled data is the original telemetry, the stream with modeled data is the data that the function transformed:
:::image type="content" source="media/howto-transform-data/egress-telemetry.png" alt-text="Screenshot that shows original and transformed raw data.":::
iot-central Howto Upload File Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-upload-file-rest-api.md
To test the file upload, you run a sample device application. Create a device tem
1. On the **Review** page, select **Create**.
-1. Select **Import a model** and upload the *FileUploadDeviceDcm.json* manifest file from the folder `iotc-file-upload-device\setup` in the repository you downloaded previously.
+1. Select **Import a model** and upload the *FileUploadDeviceDcm.json* model file from the folder `iotc-file-upload-device\setup` in the repository you downloaded previously.
1. Select **Publish** to publish the device template.
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-operator.md
For an individual device, you can complete tasks such as [block or unblock it](howt
You can also set writable properties and cloud properties that are defined in the device template, and call commands on the device.
-To manage IoT Edge devices, you can use the IoT Central UI to[create and edit deployment manifests](concepts-iot-edge.md#iot-edge-deployment-manifests-and-iot-central-device-templates), and then deploy them to your IoT Edge devices. You can also run commands in IoT Edge modules from within IoT Central.
+To manage IoT Edge devices, you can use the IoT Central UI to create and edit [deployment manifests](concepts-iot-edge.md), and then deploy them to your IoT Edge devices. You can also run commands in IoT Edge modules from within IoT Central.
Use the **Jobs** page to manage your devices in bulk. Jobs can update properties, run commands, or assign a new device template on multiple devices. To learn more, see [Manage devices in bulk in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md
Once you're inside your IoT application, use the left pane to access various fea
**Device templates** lets you create and manage the characteristics of devices that connect to your application.
+ **Edge manifests** lets you import and manage deployment manifests for the IoT Edge devices that connect to your application.
**Data explorer** exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices.

**Dashboards** displays all application and personal dashboards.
This page lets you create and view device groups in your IoT Central application
:::image type="content" source="Media/overview-iot-central-tour/templates.png" alt-text="Screenshot of Device Templates.":::
-The device templates page is where you can view and create device templates in the application. To learn more, see the [Define a new device type in your Azure IoT Central application](howto-set-up-template.md) tutorial.
+The device templates page is where you can view and create device templates in the application. To learn more, see the [Define a new device type in your Azure IoT Central application](howto-set-up-template.md) tutorial.
+
+### Edge manifests
++
+The edge manifests page is where you can import and manage IoT Edge deployment manifests in the application. To learn more, see [Manage IoT Edge deployment manifests in your IoT Central application](howto-manage-deployment-manifests.md).
### Data Explorer
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-configure-rules.md
Title: Quickstart - Configure rules and actions in Azure IoT Central
description: In this quickstart, you learn how to configure telemetry-based rules and actions in your IoT Central application. Previously updated : 09/26/2022 Last updated : 10/28/2022
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
Title: Quickstart - Connect a device to an Azure IoT Central application | Micro
description: In this quickstart, you learn how to connect your first device to a new IoT Central application. This quickstart uses a smartphone app from either the Google Play or Apple app store as an IoT device. Previously updated : 09/26/2022 Last updated : 10/28/2022
IoT Central provides various industry-focused application templates to help you
1. Navigate to the **Build** page and select **Create app** in the **Custom app** tile:
- :::image type="content" source="media/quick-deploy-iot-central/iot-central-create-new-application.png" alt-text="Build your IoT application page":::
+ :::image type="content" source="media/quick-deploy-iot-central/iot-central-create-new-application.png" alt-text="Build your IoT application page" lightbox="media/quick-deploy-iot-central/iot-central-create-new-application.png":::
If you're prompted to sign in, use the Microsoft account associated with your Azure subscription.
IoT Central provides various industry-focused application templates to help you
1. Azure IoT Central also generates a unique **URL** prefix for you, based on the application name. You use this URL to access your application. Change this URL prefix to something more memorable if you'd like. This URL must be unique.
- :::image type="content" source="media/quick-deploy-iot-central/iot-central-create-custom.png" alt-text="Azure IoT Central Create an application page":::
+ :::image type="content" source="media/quick-deploy-iot-central/iot-central-create-custom.png" alt-text="Azure IoT Central Create an application page" lightbox="media/quick-deploy-iot-central/iot-central-create-custom.png":::
1. For this quickstart, leave the pricing plan set to **Standard 2**.
IoT Central provides various industry-focused application templates to help you
1. Review the Terms and Conditions, and select **Create** at the bottom of the page. After a few seconds, your IoT Central application is ready to use:
- :::image type="content" source="media/quick-deploy-iot-central/iot-central-application.png" alt-text="Azure IoT Central application":::
+ :::image type="content" source="media/quick-deploy-iot-central/iot-central-application.png" alt-text="Azure IoT Central application" lightbox="media/quick-deploy-iot-central/iot-central-application.png":::
## Register a device
To register your device:
1. In IoT Central, navigate to the **Devices** page and select **Add a device**:
- :::image type="content" source="media/quick-deploy-iot-central/create-device.png" alt-text="Screenshot that shows create a device in IoT Central.":::
+ :::image type="content" source="media/quick-deploy-iot-central/create-device.png" alt-text="Screenshot that shows create a device in IoT Central." lightbox="media/quick-deploy-iot-central/create-device.png":::
1. On the **Create a new device** page, accept the defaults, and then select **Create**.

1. In the list of devices, click on the device name:
- :::image type="content" source="media/quick-deploy-iot-central/device-name.png" alt-text="A screenshot that shows the highlighted device name that you can select.":::
+ :::image type="content" source="media/quick-deploy-iot-central/device-name.png" alt-text="A screenshot that shows the highlighted device name that you can select." lightbox="media/quick-deploy-iot-central/device-name.png":::
1. On the device page, select **Connect** and then **QR Code**:
- :::image type="content" source="media/quick-deploy-iot-central/device-registration.png" alt-text="Screenshot that shows the QR code you can use to connect the smartphone app.":::
+ :::image type="content" source="media/quick-deploy-iot-central/device-registration.png" alt-text="Screenshot that shows the QR code you can use to connect the smartphone app." lightbox="media/quick-deploy-iot-central/device-registration.png":::
Keep this page open. In the next section, you scan this QR code using the smartphone app to connect it to IoT Central.
To view the telemetry from the smartphone app in IoT Central:
1. In the list of devices, click on the device name, then select **Overview**:
- :::image type="content" source="media/quick-deploy-iot-central/iot-central-telemetry.png" alt-text="Screenshot of the overview page with telemetry plots.":::
+ :::image type="content" source="media/quick-deploy-iot-central/iot-central-telemetry.png" alt-text="Screenshot of the overview page with telemetry plots." lightbox="media/quick-deploy-iot-central/iot-central-telemetry.png":::
> [!TIP]
> The smartphone app only sends data when the screen is on.
To view the telemetry from the smartphone app in IoT Central:
To send a command from IoT Central to your device, select the **Commands** view for your device. The smartphone app can respond to three commands:

To make the light on your smartphone flash, use the **LightOn** command. Set the duration to three seconds, the pulse interval to five seconds, and the number of pulses to two. Select **Run** to send the command to the smartphone app. The light on your smartphone app flashes twice.
iot-central Quick Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-export-data.md
Title: Quickstart - Export data from Azure IoT Central
description: In this quickstart, you learn how to use the data export feature in IoT Central to integrate with other cloud services. Previously updated : 09/26/2022 Last updated : 10/28/2022
To configure the data export:
}
```
- :::image type="content" source="media/quick-export-data/data-transformation-query.png" alt-text="Screenshot that shows the data transformation query for the export.":::
+ :::image type="content" source="media/quick-export-data/data-transformation-query.png" alt-text="Screenshot that shows the data transformation query for the export." lightbox="media/quick-export-data/data-transformation-query.png":::
If you want to see how the transformation works and experiment with the query, paste the following sample telemetry message into **1. Add your input message**:
To configure the data export:
Wait until the export status shows **Healthy**:

## Query exported data
To query the exported telemetry:
You may need to wait for several minutes to collect enough data. Try holding your phone in different orientations to see the telemetry values change:

## Clean up resources
iot-central Tutorial Connect Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-connect-device.md
Title: Tutorial - Connect a generic client app to Azure IoT Central | Microsoft
description: This tutorial shows you how to connect a device running either a C, C#, Java, JavaScript, or Python client app to your Azure IoT Central application. You modify the automatically generated device template by adding views that let an operator interact with a connected device. Previously updated : 06/10/2022 Last updated : 10/26/2022
In this tutorial, you learn how to:
You can use the **Raw data** view to examine the raw data your device is sending to IoT Central:

On this view, you can select the columns to display and set a time range to view. The **Unmodeled data** column shows device data that doesn't match any property or telemetry definitions in the device template.
On this view, you can select the columns to display and set a time range to view
If you'd prefer to continue through the set of IoT Central tutorials and learn more about building an IoT Central solution, see:

> [!div class="nextstepaction"]
-> [Create a gateway device template](./tutorial-define-gateway-device-type.md)
+> [Tutorial: Use device groups to analyze device telemetry](tutorial-use-device-groups.md)
iot-central Tutorial Connect Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-connect-iot-edge-device.md
+
+ Title: Tutorial - Connect an IoT Edge device to Azure IoT Central | Microsoft Docs
+description: This tutorial shows you how to connect an IoT Edge device to your IoT Central application. You first create an unassigned device, and then add a device template to enable views and forms for an operator to be able to interact with the device.
++ Last updated : 10/18/2022+++++
+# Customer intent: As a solution developer, I want to learn how to connect an IoT Edge device to IoT Central and then configure views and forms so that I can interact with the device.
++
+# Tutorial: Connect an IoT Edge device to your Azure IoT Central application
+
+This tutorial shows you how to connect an IoT Edge device to your Azure IoT Central application. The IoT Edge device runs a module that sends temperature, pressure, and humidity telemetry to your application. You use a device template to enable views and forms that let you interact with the module on the IoT Edge device.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Import an IoT Edge deployment manifest into your IoT Central application.
+> * Add an IoT Edge device that uses this deployment manifest to your application.
+> * Connect the IoT Edge device to your application.
+> * Monitor the IoT Edge runtime from your application.
+> * Add a device template with views and forms to your application.
+> * View the telemetry sent from the device in your application.
+
+## Prerequisites
+
+To complete the steps in this tutorial, you need:
++
+You also need to be able to upload configuration files to your IoT Central application from your local machine.
+
+## Import a deployment manifest
+
+A deployment manifest specifies the configuration of an IoT Edge device including the details of any custom modules the device should download and run. IoT Edge devices that connect to an IoT Central application download their deployment manifests from the application.
+
+To add a deployment manifest to IoT Central to use in this tutorial:
+
+1. Download and save the [EnvironmentalSensorManifest-1-4.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/EnvironmentalSensorManifest-1-4.json) deployment manifest to your local machine.
+
+1. In your IoT Central application, navigate to the **Edge manifests** page.
+
+1. Select **+ New**.
+
+1. On the **Customize** page, enter *Environmental Sensor* as the name and then upload the *EnvironmentalSensorManifest-1-4.json* file.
+
+1. After the manifest file is validated, select **Next**.
+
+1. The **Review and finish** page shows the modules defined in the manifest, including the **SimulatedTemperatureSensor** custom module. Select **Create**.
+
+The **Edge manifests** list now includes the **Environmental sensor** manifest:
++
+## Add an IoT Edge device
+
+Before the IoT Edge device can connect to your IoT Central application, you need to add it to the list of devices and get its credentials:
+
+1. In your IoT Central application, navigate to the **Devices** page.
+
+1. On the **Devices** page, make sure that **All devices** is selected. Then select **+ New**.
+
+1. On the **Create a new device** page:
+ * Enter *Environmental sensor - 001* as the device name.
+ * Enter *env-sens-001* as the device ID.
+ * Make sure that the device template is **unassigned**.
+ * Make sure that the device isn't simulated.
+ * Set **Azure IoT Edge device** to **Yes**.
+ * Select the **Environmental sensor** IoT Edge deployment manifest.
+
+1. Select **Create**.
+
+The list of devices on the **Devices** page now includes the **Environmental sensor - 001** device. The device status is **Registered**:
++
+Before you deploy the IoT Edge device, you need the following values:
+
+* **ID Scope** of your IoT Central application.
+* **Device ID** value for your IoT Edge device.
+* **Primary key** value for your IoT Edge device.
+
+To find these values, navigate to the **Environmental sensor - 001** device from the **Devices** page and select **Connect**. Make a note of these values before you continue.
+
+## Deploy the IoT Edge device
+
+In this tutorial, you deploy the IoT Edge runtime to a Linux virtual machine in Azure. To deploy and configure the virtual machine, select the following button:
+
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Fedge-vm-deploy-1-4%2FedgeDeploy.json)
+
+On the **Custom deployment** page, use the following values to complete the form:
+
+| Setting | Value |
+| - | -- |
+| `Resource group` | Create a new resource group with a name such as *MyIoTEdgeDevice_rg*. |
+| `Region` | Select a region close to you. |
+| `Dns Label Prefix` | A unique DNS prefix for your virtual machine. |
+| `Admin Username` | *AzureUser* |
+| `Admin Password` | A password of your choice to access the virtual machine. |
+| `Scope Id` | The ID scope you made a note of previously. |
+| `Device Id` | The device ID you made a note of previously. |
+| `Device Key` | The device key you made a note of previously. |
+
+Select **Review + create** and then **Create**. Wait for the deployment to finish before you continue.
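+
+If you prefer to script the deployment, the Azure CLI can deploy the same template. The following is a sketch only: the parameter names are assumed to mirror the portal form labels, so check the template's `parameters` section before you run it:
+
+```azurecli
+# Create a resource group, then deploy the IoT Edge VM template.
+# Parameter names below are assumptions based on the portal form labels.
+az group create --name MyIoTEdgeDevice_rg --location eastus
+az deployment group create \
+  --resource-group MyIoTEdgeDevice_rg \
+  --template-uri "https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/edge-vm-deploy-1-4/edgeDeploy.json" \
+  --parameters dnsLabelPrefix=<unique-prefix> adminUsername=AzureUser \
+    adminPassword=<password> scopeId=<id-scope> deviceId=<device-id> deviceKey=<device-key>
+```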
+
+## Manage the IoT Edge device
+
+To verify the deployment of the IoT Edge device was successful:
+
+1. In your IoT Central application, navigate to the **Devices** page. Check that the status of the **Environmental sensor - 001** device is **Provisioned**. You may need to wait for a few minutes while the device connects.
+
+1. Navigate to the **Environmental sensor - 001** device.
+
+1. On the **Modules** page, check that the status of the three modules is **Running**.
+
+On the **Modules** page, you can view status information about the modules and perform actions such as viewing their logs and restarting them.
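+
+You can also check module status from the device itself. If you can SSH into the virtual machine you deployed, the IoT Edge CLI reports the same information (a quick sketch):
+
+```bash
+# List the modules the IoT Edge runtime is managing and their status
+sudo iotedge list
+
+# Run the built-in configuration and connectivity checks
+sudo iotedge check
+```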
+
+## View raw data
+
+On the **Raw data** page for the **Environmental sensor - 001** device, you can see the telemetry it's sending and the property values it's reporting.
+
+At the moment, the IoT Edge device doesn't have a device template assigned, so all the data from the device is **Unmodeled**. Without a device template, there are no views or dashboards to display custom device information in the IoT Central application. However, you can use data export to forward the data to other services for analysis or storage.
+
+## Add a device template
+
+A deployment manifest may include definitions of properties exposed by a module. For example, the configuration in the deployment manifest for the **SimulatedTemperatureSensor** module includes the following:
+
+```json
+"SimulatedTemperatureSensor": {
+ "properties.desired": {
+ "SendData": true,
+ "SendInterval": 10
+ }
+}
+```
+
+The following steps show you how to add a device template for an IoT Edge device and the module property definitions from the deployment manifest:
+
+1. In your IoT Central application, navigate to the **Device templates** page and select **+ New**.
+
+1. On the **Select type** page, select **Azure IoT Edge**, and then **Next: Customize**.
+
+1. On the **Customize** page, enter **Environmental sensor** as the device template name.
+
+1. On the **Review** page, select **Create**.
+
+1. On the **Create a model** page, select **Custom model**.
+
+1. On the **Environmental sensor** page, select **Modules**, then **Import modules from manifest**.
+
+1. In the **Import modules** dialog, select the **Environmental sensor** deployment manifest, then **Import**.
+
+The device template now includes a module called **SimulatedTemperatureSensor**, with an interface called **management**. This interface includes definitions of the **SendData** and **SendInterval** properties from the deployment manifest.
+
+A deployment manifest can only define module properties, not commands or telemetry. To add the telemetry definitions to the device template:
+
+1. Download and save the [EnvironmentalSensorTelemetry.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/EnvironmentalSensorTelemetry.json) interface definition to your local machine.
+
+1. Navigate to the **SimulatedTemperatureSensor** module in the **Environmental sensor** device template.
+
+1. Select **Add inherited interface** (you may need to select **...** to see this option). Select **Import interface**. Then import the *EnvironmentalSensorTelemetry.json* file you previously downloaded.
+
+The module now includes a **telemetry** interface that defines **machine**, **ambient**, and **timeCreated** telemetry types:
++
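+
+Interface definitions like this one use the Digital Twins Definition Language (DTDL). The following is a simplified sketch of what such a telemetry interface can look like; the `@id` and exact schemas are illustrative rather than the literal contents of the downloaded file, and the **machine** telemetry would follow the same pattern as **ambient**:
+
+```json
+{
+  "@context": "dtmi:dtdl:context;2",
+  "@id": "dtmi:sample:EnvironmentalSensorTelemetry;1",
+  "@type": "Interface",
+  "contents": [
+    {
+      "@type": "Telemetry",
+      "name": "ambient",
+      "schema": {
+        "@type": "Object",
+        "fields": [
+          { "name": "temperature", "schema": "double" },
+          { "name": "humidity", "schema": "double" }
+        ]
+      }
+    },
+    {
+      "@type": "Telemetry",
+      "name": "timeCreated",
+      "schema": "dateTime"
+    }
+  ]
+}
+```
+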
+To add a view that plots telemetry from the device:
+
+1. In the **Environmental sensor** device template, select **Views**.
+
+1. On the **Select to add a new view** page, select **Visualizing the device**.
+
+1. Enter *Environmental telemetry* as the view name.
+
+1. Select **Start with devices**. Then add the following telemetry types:
+ * **ambient/temperature**
+ * **humidity**
+ * **machine/temperature**
+ * **pressure**
+
+1. Select **Add tile**, then **Save**.
+
+1. To publish the template, select **Publish**.
+
+## View telemetry and control the module
+
+To view the telemetry from your device, you need to attach the device to the device template:
+
+1. Navigate to the **Devices** page and select the **Environmental sensor - 001** device.
+
+1. Select **Migrate**.
+
+1. In the **Migrate** dialog, select the **Environmental sensor** device template, and select **Migrate**.
+
+1. Navigate to the **Environmental sensor - 001** device and select the **Environmental telemetry** view.
+
+1. The line chart plots the four telemetry values you selected for the view:
+
+ :::image type="content" source="media/tutorial-connect-iot-edge-device/environmental-telemetry-view.png" alt-text="Screenshot that shows the telemetry line charts.":::
+
+1. The **Raw data** page now includes columns for the **ambient**, **machine**, and **timeCreated** telemetry values.
+
+To control the module by using the properties defined in the deployment manifest, navigate to the **Environmental sensor - 001** device and select the **Manage** view.
+
+IoT Central created this view automatically from the **management** interface in the **SimulatedTemperatureSensor** module. The **Raw data** page now includes columns for the **SendData** and **SendInterval** properties.
+
+## Clean up resources
++
+## Next steps
+
+If you'd prefer to continue through the set of IoT Central tutorials and learn more about building an IoT Central solution, see:
+
+> [!div class="nextstepaction"]
+> [Create a gateway device template](./tutorial-define-gateway-device-type.md)
iot-central Tutorial Create Telemetry Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-create-telemetry-rules.md
Title: Tutorial - Create and manage rules in your Azure IoT Central application
description: This tutorial shows you how Azure IoT Central rules enable you to monitor your devices in near real time and to automatically invoke actions, such as sending an email, when the rule triggers. Previously updated : 06/09/2022 Last updated : 10/27/2022
Add a device template from the device catalog. This tutorial uses the **ESP32-Az
The name of the template you created is **Sensor Controller**. The model includes components such as **Sensor Controller**, **SensorTemp**, and **Device Information interface**. Components define the capabilities of an ESP32 device. Capabilities include the telemetry, properties, and commands.
-Add two cloud properties to the **Sensor Controller** device template:
+Modify the **Overview** view to include the temperature telemetry:
-1. Select **Cloud Properties** and then **+ Add cloud property**. Use the information in the following table to add two cloud properties to your device template:
+1. In the **Sensor Controller** device template, select the **Overview** view.
- | Display Name | Semantic Type | Schema |
- | -- | - | |
- | Last Service Date | None | Date |
- | Customer Name | None | String |
+1. On the **Working Set, SensorAltitude, SensorHumid, SensorLight** tile, select **Edit**.
-1. Select **Save** to save your changes.
+1. Update the title to **Telemetry**.
-Add a new form to the device template to manage the device:
+1. Add the **Temperature** capability to the list of telemetry values shown on the chart. Then **Save** the changes.
-1. Select the **Views** node, and then select the **Editing device and cloud data** tile to add a new view.
+Now publish the device template.
-1. Change the form name to **Manage device**.
+## Add a simulated device
-1. Select the **Customer Name** and **Last Service Date** cloud properties, and the **Target Temperature** property. Then select **Add section**.
+To test the rule you create in the next section, add a simulated device to your application:
-1. Select **Save** to save your new form.
+1. Select **Devices** in the left-navigation panel. Then select **Sensor Controller**.
-Now publish the device template.
+1. Select **+ New**. In the **Create a new device** panel, leave the default device name and device ID values. Toggle **Simulate this device?** to **Yes**.
+
+1. Select **Create**.
## Create a rule
To create a telemetry rule, the device template must include at least one teleme
1. Enter the name _Temperature monitor_ to identify the rule and press Enter.
-1. Select the **Sensor Controller** device template. By default, the rule automatically applies to all the devices assigned to the device template. To filter for a subset of the devices, select **+ Filter** and use device properties to identify the devices. To disable the rule, toggle the **Enabled/Disabled** button:
+1. Select the **Sensor Controller** device template. By default, the rule automatically applies to all the devices assigned to the device template:
- :::image type="content" source="media/tutorial-create-telemetry-rules/device-filters.png" alt-text="Screenshot that shows the selection of the device template in the rule definition":::
+ :::image type="content" source="media/tutorial-create-telemetry-rules/device-filters.png" alt-text="Screenshot that shows the selection of the device template in the rule definition." lightbox="media/tutorial-create-telemetry-rules/device-filters.png":::
+
+ To filter for a subset of the devices, select **+ Filter** and use device properties to identify the devices. To disable the rule, toggle the **Enabled/Disabled** button.
### Configure the rule conditions
Conditions define the criteria that the rule monitors. In this tutorial, you con
1. Select **Temperature** in the **Telemetry** dropdown.
-1. Next, choose **Is greater than** as the **Operator** and enter _70_ as the **Value**.
+1. Next, choose **Is greater than** as the **Operator** and enter _70_ as the **Value**:
+
+ :::image type="content" source="media/tutorial-create-telemetry-rules/aggregate-condition-filled-out.png" alt-text="Screenshot that shows the aggregate condition filled out." lightbox="media/tutorial-create-telemetry-rules/aggregate-condition-filled-out.png":::
-1. Optionally, you can set a **Time aggregation**. When you select a time aggregation, you must also select an aggregation type, such as average or sum from the aggregation drop-down.
+ Optionally, you can set a **Time aggregation**. When you select a time aggregation, you must also select an aggregation type, such as average or sum, from the aggregation drop-down.
* Without aggregation, the rule triggers for each telemetry data point that meets the condition. For example, if you configure the rule to trigger when temperature is above 70 then the rule triggers almost instantly when the device temperature exceeds this value. * With aggregation, the rule triggers if the aggregate value of the telemetry data points in the time window meets the condition. For example, if you configure the rule to trigger when temperature is above 70 and with an average time aggregation of 10 minutes, then the rule triggers when the device reports an average temperature greater than 70, calculated over a 10-minute interval.
- :::image type="content" source="media/tutorial-create-telemetry-rules/aggregate-condition-filled-out.png" alt-text="Screenshot that shows the aggregate condition filled out":::
- You can add multiple conditions to a rule by selecting **+ Condition**. When multiple conditions are added, you can specify if all the conditions must be met or any of the conditions must be met for the rule to trigger. If you're using time aggregation with multiple conditions, all the telemetry values must be aggregated. ### Configure actions
After you define the condition, you set up the actions to take when the rule fir
> [!NOTE] > Emails are only sent to the users that have been added to the application and have logged in at least once. Learn more about [user management](howto-administer.md) in Azure IoT Central.
- :::image type="content" source="media/tutorial-create-telemetry-rules/configure-action.png" alt-text="Screenshot that shows the email action for the rule":::
+ :::image type="content" source="media/tutorial-create-telemetry-rules/configure-action.png" alt-text="Screenshot that shows the email action for the rule." lightbox="media/tutorial-create-telemetry-rules/configure-action.png":::
1. To save the action, choose **Done**. You can add multiple actions to a rule.
After you define the condition, you set up the actions to take when the rule fir
After a while, you receive an email message when the rule fires: ## Delete a rule
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-define-gateway-device-type.md
Title: Tutorial - Define a new gateway device type in Azure IoT Central | Micros
description: This tutorial shows you, as a builder, how to define a new IoT gateway device type in your Azure IoT Central application. Previously updated : 06/09/2022 Last updated : 10/26/2022
This tutorial shows you how to use a gateway device template to define a gateway
In this tutorial, you create a **Smart Building** gateway device template. A **Smart Building** gateway device has relationships with other downstream devices.
-![Diagram of relationship between gateway device and downstream devices](./media/tutorial-define-gateway-device-type/gatewaypattern.png)
As well as enabling downstream devices to communicate with your IoT Central application, a gateway device can also: * Send its own telemetry, such as temperature.
-* Respond to writable property updates made by an operator. For example, an operator could changes the telemetry send interval.
+* Respond to writable property updates made by an operator. For example, an operator could change the telemetry send interval.
* Respond to commands, such as rebooting the device. In this tutorial, you learn how to:
To create a device template for an **RS40 Occupancy Sensor** device:
You now have device templates for the two downstream device types:
-![Device templates for downstream devices](./media/tutorial-define-gateway-device-type/downstream-device-types.png)
- ## Create a gateway device template
To add a new gateway device template to your application:
1. Enter **Send Data** as the display name, and then select **Property** as the capability type.
-1. Select **Boolean** as the schema type and then select **Save**.
+1. Select **Boolean** as the schema type, set **Writable** on, and then select **Save**.
### Add relationships
Next you add relationships to the templates for the downstream device templates:
1. Select **Save**.
-![Smart Building gateway device template, showing relationships](./media/tutorial-define-gateway-device-type/relationships.png)
### Add cloud properties
A gateway device template can include cloud properties. Cloud properties only ex
To add cloud properties to the **Smart Building gateway device** template:
-1. In the **Smart Building gateway device** template, select **Cloud properties**.
+1. In the **Smart Building gateway device** template, select the **Smart Building gateway device** model.
1. Use the information in the following table to add two cloud properties to your gateway device template.
- | Display name | Semantic type | Schema |
- | -- | - | |
- | Last Service Date | None | Date |
- | Customer Name | None | String |
+ | Display name | Capability type | Semantic type | Schema |
+ | -- | | - | |
+ | Last Service Date | Cloud property | None | Date |
+ | Customer Name | Cloud property | None | String |
1. Select **Save**.
To create a simulated gateway device:
1. Select **+ New** to start adding a new device.
-1. Keep the generated **Device ID** and **Device name**. Make sure that the **Simulated** switch is **On**. Select **Create**.
+1. Keep the generated **Device ID** and **Device name**. Make sure that the **Simulated** switch is **Yes**. Select **Create**.
-To create a simulated downstream devices:
+To create simulated downstream devices:
1. On the **Devices** page, select **RS40 Occupancy Sensor** in the list of device templates. 1. Select **+ New** to start adding a new device.
-1. Keep the generated **Device ID** and **Device name**. Make sure that the **Simulated** switch is **On**. Select **Create**.
+1. Keep the generated **Device ID** and **Device name**. Make sure that the **Simulated** switch is **Yes**. Select **Create**.
1. On the **Devices** page, select **S1 Sensor** in the list of device templates. 1. Select **+ New** to start adding a new device.
-1. Keep the generated **Device ID** and **Device name**. Make sure that the **Simulated** switch is **On**. Select **Create**.
+1. Keep the generated **Device ID** and **Device name**. Make sure that the **Simulated** switch is **Yes**. Select **Create**.
-![Simulated devices in your application](./media/tutorial-define-gateway-device-type/simulated-devices.png)
### Add downstream device relationships to a gateway device
Now that you have the simulated devices in your application, you can create the
Both your simulated downstream devices are now connected to your simulated gateway device. If you navigate to the **Downstream Devices** view for your gateway device, you can see the related downstream devices:
-![Downstream devices view](./media/tutorial-define-gateway-device-type/downstream-device-view.png)
## Connect real downstream devices In the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial, the sample code shows how to include the model ID from the device template in the provisioning payload the device sends.
-When you connect a downstream device, you can modify the provisioning payload to include the the ID of the gateway device. The model ID lets IoT Central assign the device to the correct downstream device template. The gateway ID lets IoT Central establish the relationship between the downstream device and its gateway. In this case the provisioning payload the device sends looks like the following JSON:
+When you connect a downstream device, you can modify the provisioning payload to include the ID of the gateway device. The model ID lets IoT Central assign the device to the correct downstream device template. The gateway ID lets IoT Central establish the relationship between the downstream device and its gateway. In this case the provisioning payload the device sends looks like the following JSON:
```json {
In this tutorial, you learned how to:
Next you can learn how to: > [!div class="nextstepaction"]
-> [Add an Azure IoT Edge device to your Azure IoT Central application](/training/modules/connect-iot-edge-device-to-iot-central/)
+> [Create a rule and set up notifications in your Azure IoT Central application](tutorial-create-telemetry-rules.md)
iot-central Tutorial Use Device Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-device-groups.md
Title: Tutorial - Use device groups in your Azure IoT Central application | Micr
description: Tutorial - Learn how to use device groups to analyze telemetry from devices in your Azure IoT Central application. Previously updated : 06/16/2022 Last updated : 10/26/2022
Add a device template from the device catalog. This tutorial uses the **ESP32-Az
The name of the template you created is **Sensor Controller**. The model includes components such as **Sensor Controller**, **SensorTemp**, and **Device Information interface**. Components define the capabilities of an ESP32 device. Capabilities include the telemetry, properties, and commands.
-Add two cloud properties to the **Sensor Controller** device template:
+Add two cloud properties to the **Sensor Controller** model in the device template:
-1. Select **Cloud Properties** and then **+ Add cloud property**. Use the information in the following table to add two cloud properties to your device template:
+1. Select **+ Add capability** and then use the information in the following table to add two cloud properties to your device template:
- | Display Name | Semantic Type | Schema |
- | -- | - | |
- | Last Service Date | None | Date |
- | Customer Name | None | String |
+ | Display name | Capability type | Semantic type | Schema |
+ | -- | | - | |
+ | Last Service Date | Cloud property | None | Date |
+ | Customer Name | Cloud property | None | String |
1. Select **Save** to save your changes.
Now publish the device template.
Before you create a device group, add at least five simulated devices based on the **Sensor Controller** device template to use in this tutorial: For four of the simulated sensor devices, use the **Manage device** view to set the customer name to *Contoso* and select **Save**. ## Create a device group
For four of the simulated sensor devices, use the **Manage device** view to set
1. Choose **Save**. > [!NOTE] > For Azure IoT Edge devices, select Azure IoT Edge templates to create a device group.
To analyze the telemetry for a device group:
1. Choose **Data explorer** on the left pane and select **Create a query**.
-1. Select the **Contoso devices** device group you created. Then add both the **Temperature** and **Humidity** telemetry types.
+1. Select the **Contoso devices** device group you created. Then add both the **Temperature** and **SensorHumid** telemetry types.
Use the ellipsis icons next to the telemetry types to select an aggregation type. The default is **Average**. Use **Group by** to change how the aggregate data is shown. For example, if you split by device ID you see a plot for each device when you select **Analyze**.
To analyze the telemetry for a device group:
You can customize the view, change the time period shown, and export the data as CSV or view data as table.
- :::image type="content" source="media/tutorial-use-device-groups/export-data.png" alt-text="Screenshot that shows how to export data for the Contoso devices":::
+ :::image type="content" source="media/tutorial-use-device-groups/export-data.png" alt-text="Screenshot that shows how to export data for the Contoso devices." lightbox="media/tutorial-use-device-groups/export-data.png":::
To learn more about analytics, see [How to use data explorer to analyze device data](howto-create-analytics.md).
To learn more about analytics, see [How to use data explorer to analyze device d
## Next steps
-Now that you've learned how to use device groups in your Azure IoT Central application, here is the suggested next step:
+Now that you've learned how to use device groups in your Azure IoT Central application, here's the suggested next step:
> [!div class="nextstepaction"]
-> [How to create telemetry rules](tutorial-create-telemetry-rules.md)
+> [Connect an IoT Edge device to your Azure IoT Central application](tutorial-connect-iot-edge-device.md)
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
For example, the following commands create a root CA certificate, a parent devic
For more information about creating test certificates, see [create demo certificates to test IoT Edge device features](how-to-create-test-certificates.md).
-01. You'll need to transfer the certificates and keys to each device. You can use a USB drive, a service like [Azure Key Vault](../key-vault/general/overview.md), or with a function like [Secure file copy](https://www.ssh.com/ssh/scp/). Choose one of these methods that best matches your scenario.
+01. You'll need to transfer the certificates and keys to each device. You can use a USB drive, a service like [Azure Key Vault](../key-vault/general/overview.md), or a utility like [secure file copy](https://www.ssh.com/ssh/scp/). Choose the method that best matches your scenario. Copy the files to the preferred directory for certificates and keys. Use `/var/aziot/certs` for certificates and `/var/aziot/secrets` for keys.
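+
+    For example, with secure file copy you might copy a certificate to the device user's home directory first (writing directly to `/var/aziot` typically requires root), and then move it into place. The user and host names here are illustrative:
+
+    ```bash
+    scp ./certs/azure-iot-test-only.root.ca.cert.pem user@device:~
+    ssh user@device 'sudo mv ~/azure-iot-test-only.root.ca.cert.pem /var/aziot/certs/'
+    ```
+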
For more information on installing certificates on a device, see [Manage certificates on an IoT Edge device](how-to-manage-device-certificates.md).
To configure your parent device, open a local or remote command shell.
To enable secure connections, every IoT Edge parent device in a gateway scenario needs to be configured with a unique device CA certificate and a copy of the root CA certificate shared by all devices in the gateway hierarchy.
-01. Transfer the **root CA certificate**, **parent device CA certificate**, and **parent private key** to the parent device. The examples in this article use the directory `/var/secrets` for the certificates and keys directory.
+01. Transfer the **root CA certificate**, **parent device CA certificate**, and **parent private key** to the parent device. The examples in this article use the preferred directory `/var/aziot` for the certificates and keys.
01. Install the **root CA certificate** on the parent IoT Edge device. First, copy the root certificate into the certificate directory and add `.crt` to the end of the file name. Next, update the certificate store on the device using the platform-specific command. **Debian or Ubuntu:** ```bash
- sudo cp /var/secrets/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
+ sudo cp /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
sudo update-ca-certificates ```
To enable secure connections, every IoT Edge parent device in a gateway scenario
**IoT Edge for Linux on Windows (EFLOW):** ```bash
- sudo cp /var/secrets/azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.cert.pem.crt
+ sudo cp /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.cert.pem.crt
sudo update-ca-trust ```
You should already have IoT Edge installed on your device. If not, follow the st
device. For example: ```toml
- trust_bundle_cert = "file:///var/secrets/azure-iot-test-only.root.ca.cert.pem"
+ trust_bundle_cert = "file:///var/aziot/certs/azure-iot-test-only.root.ca.cert.pem"
``` 01. Find or add the **Edge CA certificate** section in the config file. Update the certificate `cert` and private key `pk` parameters with the file URI paths for the certificate and key files on the parent IoT Edge device. IoT Edge requires the certificate and private key to be in text-based privacy-enhanced mail (PEM) format. For example: ```toml [edge_ca]
- cert = "file:///var/secrets/iot-edge-device-ca-gateway.cert.pem"
- pk = "file:///var/secrets/iot-edge-device-ca-gateway.key.pem"
+ cert = "file:///var/aziot/certs/iot-edge-device-ca-gateway.cert.pem"
+ pk = "file:///var/aziot/secrets/iot-edge-device-ca-gateway.key.pem"
``` 01. Verify your IoT Edge device uses the correct version of the IoT Edge agent when it starts. Find the **Default Edge Agent** section and set the image value for IoT Edge to version 1.4. For example:
You should already have IoT Edge installed on your device. If not, follow the st
```toml hostname = "10.0.0.4"
- trust_bundle_cert = "file:///var/secrets/azure-iot-test-only.root.ca.cert.pem"
+ trust_bundle_cert = "file:///var/aziot/certs/azure-iot-test-only.root.ca.cert.pem"
[edge_ca]
- cert = "file:///var/secrets/iot-edge-device-ca-gateway.cert.pem"
- pk = "file:///var/secrets/iot-edge-device-ca-gateway.key.pem"
+ cert = "file:///var/aziot/certs/iot-edge-device-ca-gateway.cert.pem"
+ pk = "file:///var/aziot/secrets/iot-edge-device-ca-gateway.key.pem"
``` 01. Save and close the `config.toml` configuration file. For example if you're using the **nano** editor, select **Ctrl+O** - *Write Out*, **Enter**, and **Ctrl+X** - *Exit*.
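
After you save the configuration file, apply it so that the IoT Edge services pick up the changes. A minimal sketch:

```bash
# Apply the updated IoT Edge configuration
sudo iotedge config apply

# Optionally run the built-in checks to verify certificates and connectivity
sudo iotedge check
```
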
To configure your child device, open a local or remote command shell.
To enable secure connections, every IoT Edge child device in a gateway scenario needs to be configured with a unique device CA certificate and a copy of the root CA certificate shared by all devices in the gateway hierarchy.
-01. Transfer the **root CA certificate**, **child device CA certificate**, and **child private key** to the child device. The examples in this article use the directory `/var/secrets` for the certificates and keys directory.
+01. Transfer the **root CA certificate**, **child device CA certificate**, and **child private key** to the child device. The examples in this article use the directory `/var/aziot` for the certificates and keys directory.
01. Install the **root CA certificate** on the child IoT Edge device. First, copy the root certificate into the certificate directory and add `.crt` to the end of the file name. Next, update the certificate store on the device using the platform-specific command. **Debian or Ubuntu:** ```bash
- sudo cp /var/secrets/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
+ sudo cp /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
sudo update-ca-certificates ```
To enable secure connections, every IoT Edge child device in a gateway scenario
**IoT Edge for Linux on Windows (EFLOW):** ```bash
- sudo cp /var/secrets/azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.cert.pem.crt
+ sudo cp /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.cert.pem.crt
sudo update-ca-trust ```
You should already have IoT Edge installed on your device. If not, follow the st
device. For example: ```toml
- trust_bundle_cert = "file:///var/secrets/azure-iot-test-only.root.ca.cert.pem"
+ trust_bundle_cert = "file:///var/aziot/certs/azure-iot-test-only.root.ca.cert.pem"
``` 01. Find or add the **Edge CA certificate** section in the configuration file. Update the certificate `cert` and private key `pk` parameters with the file URI paths for the certificate and key files on the IoT Edge child device. IoT Edge requires the certificate and private key to be in text-based privacy-enhanced mail (PEM) format. For example: ```toml [edge_ca]
- cert = "file:///var/secrets/iot-edge-device-ca-downstream.cert.pem"
- pk = "file:///var/secrets/iot-edge-device-ca-downstream.key.pem"
+ cert = "file:///var/aziot/certs/iot-edge-device-ca-downstream.cert.pem"
+ pk = "file:///var/aziot/secrets/iot-edge-device-ca-downstream.key.pem"
``` 01. Verify your IoT Edge device uses the correct version of the IoT Edge agent when it starts. Find the **Default Edge Agent** section and set the image value for IoT Edge to version 1.4. For example:
You should already have IoT Edge installed on your device. If not, follow the st
```toml parent_hostname = "10.0.0.4"
- trust_bundle_cert = "file:///var/secrets/azure-iot-test-only.root.ca.cert.pem"
+ trust_bundle_cert = "file:///var/aziot/certs/azure-iot-test-only.root.ca.cert.pem"
[edge_ca]
- cert = "file:///var/secrets/iot-edge-device-ca-downstream.cert.pem"
- pk = "file:///var/secrets/iot-edge-device-ca-downstream.key.pem"
+ cert = "file:///var/aziot/certs/iot-edge-device-ca-downstream.cert.pem"
+ pk = "file:///var/aziot/secrets/iot-edge-device-ca-downstream.key.pem"
``` 01. Save and close the `config.toml` configuration file. For example if you're using the **nano** editor, select **Ctrl+O** - *Write Out*, **Enter**, and **Ctrl+X** - *Exit*.
iot-edge How To Create Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-transparent-gateway.md
If you don't have your own certificate authority and want to use demo certificat
# [IoT Edge](#tab/iotedge)
-If you created the certificates on a different machine, copy them over to your IoT Edge device then proceed with the next steps. You can use a USB drive, a service like [Azure Key Vault](../key-vault/general/overview.md), or with a function like [Secure file copy](https://www.ssh.com/ssh/scp/). Choose one of these methods that best matches your scenario.
+1. If you created the certificates on a different machine, copy them over to your IoT Edge device. You can use a USB drive, a service like [Azure Key Vault](../key-vault/general/overview.md), or a utility like [secure file copy](https://www.ssh.com/ssh/scp/).
+1. Move the files to the preferred directory for certificates and keys. Use `/var/aziot/certs` for certificates and `/var/aziot/secrets` for keys.
+1. Change the ownership and permissions of the certificates and keys.
+
+ ```bash
+ # Give the certificate service ownership of the certificates
+ sudo chown -R aziotcs:aziotcs /var/aziot/certs
+ sudo chmod 644 /var/aziot/certs/*.pem
+ # Give the key service ownership of the private keys, readable by owner only
+ sudo chown -R aziotks:aziotks /var/aziot/secrets
+ sudo chmod 600 /var/aziot/secrets/*.pem
+ ```
# [IoT Edge for Linux on Windows](#tab/eflow) Now, you need to copy the certificates to the Azure IoT Edge for Linux on Windows virtual machine.
+1. Copy the certificates to a directory on the EFLOW virtual machine where you have write access, for example, the `/home/iotedge-user` home directory.
+
+ ```powershell
+ # Copy the IoT Edge device CA certificate and key
+ Copy-EflowVMFile -fromFile <path>\certs\iot-edge-device-ca-<cert name>-full-chain.cert.pem -toFile ~/iot-edge-device-ca-<cert name>-full-chain.cert.pem -pushFile
+ Copy-EflowVMFile -fromFile <path>\private\iot-edge-device-ca-<cert name>.key.pem -toFile ~/iot-edge-device-ca-<cert name>.key.pem -pushFile
+
+ # Copy the root CA certificate
+ Copy-EflowVMFile -fromFile <path>\certs\azure-iot-test-only.root.ca.cert.pem -toFile ~/azure-iot-test-only.root.ca.cert.pem -pushFile
+ ```
1. Open an elevated _PowerShell_ session by starting with **Run as Administrator**. Connect to the EFLOW virtual machine.
Now, you need to copy the certificates to the Azure IoT Edge for Linux on Window
Connect-EflowVm ```
-1. Create the certificates directory. You can select any writeable directory. For this tutorial, we'll use the _iotedge-user_ home folder.
+1. Create the certificates directory. You should store your certificates and keys in the preferred `/var/aziot` directory. Use `/var/aziot/certs` for certificates and `/var/aziot/secrets` for keys.
```bash
- cd ~
- mkdir certs
- cd certs
- mkdir certs
- mkdir private
+ sudo mkdir -p /var/aziot/certs
+ sudo mkdir -p /var/aziot/secrets
```
-1. Exit the EFLOW VM connection.
+1. Move the certificates and keys to the preferred `/var/aziot` directory.
```bash
- exit
+ # Move the IoT Edge device CA certificate and key to preferred location
+ sudo mv ~/iot-edge-device-ca-<cert name>-full-chain.cert.pem /var/aziot/certs
+ sudo mv ~/iot-edge-device-ca-<cert name>.key.pem /var/aziot/secrets
+ sudo mv ~/azure-iot-test-only.root.ca.cert.pem /var/aziot/certs
```
-1. Copy the certificates to the EFLOW virtual machine.
+1. Change the ownership and permissions of the certificates and keys.
- ```powershell
- # Copy the IoT Edge device CA certificates
- Copy-EflowVMFile -fromFile <path>\certs\iot-edge-device-ca-<cert name>-full-chain.cert.pem -toFile /home/iotedge-user/certs/certs/iot-edge-device-ca-<cert name>-full-chain.cert.pem -pushFile
- Copy-EflowVMFile -fromFile <path>\private\iot-edge-device-ca-<cert name>.key.pem -toFile /home/iotedge-user/certs/private/iot-edge-device-ca-<cert name>.key.pem -pushFile
-
- # Copy the root CA certificate
- Copy-EflowVMFile -fromFile <path>\certs\azure-iot-test-only.root.ca.cert.pem -toFile /home/iotedge-user/certs/certs/azure-iot-test-only.root.ca.cert.pem -pushFile
+ ```bash
+ sudo chown -R aziotcs:aziotcs /var/aziot/certs
+ sudo chown -R aziotks:aziotks /var/aziot/secrets
+ sudo chmod 600 /var/aziot/secrets/iot-edge-device-ca-<cert name>.key.pem
```
+
+1. Exit the EFLOW VM connection.
-1. Invoke the following commands on the EFLOW VM to grant *iotedge* permissions to the certificate files since `Copy-EflowVMFile` copies files with root only access permissions.
-
- ```powershell
- Invoke-EflowVmCommand "sudo chown -R iotedge /home/iotedge-user/certs/"
- Invoke-EflowVmCommand "sudo chmod 0644 /home/iotedge-user/certs/"
+ ```bash
+ exit
```
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
In summary, *EdgeGateway* can verify and trust *ContosoIotHub's* identity becaus
## IoT Hub verifies IoT Edge device identity
-How does *ContosoIotHub* verify it's communicating with *EdgeGateway*? Verification is done by checking the certificate at the IoTHub application code level. This step happens together with the *TLS handshake*. IoT Hub doesn't do mutual TLS. Authentication of the client doesn't happen at the TLS level, only at the application layer. For simplicity, we'll skip some steps in the following diagram.
+How does *ContosoIotHub* verify it's communicating with *EdgeGateway*? Verification is done by checking the certificate at the IoTHub application code level. This step happens together with the *TLS handshake* (IoT Hub doesn't support mutual TLS). Authentication of the client doesn't happen at the TLS level, only at the application layer. For simplicity, we'll skip some steps in the following diagram.
:::image type="content" source="./media/iot-edge-certs/verify-edge-identity.svg" alt-text="Sequence diagram showing certificate exchange from IoT Edge device to IoT Hub with certificate thumbprint check verification on IoT Hub.":::
If we view the thumbprint value for the *EdgeGateway* device in the Azure portal
:::image type="content" source="./media/iot-edge-certs/edge-id-thumbprint.png" alt-text="Screenshot from Azure portal of EdgeGateway device's thumbprint in ContosoIotHub.":::
-In summary, *ContosoIotHub* can trust *EdgeGateway* because:
-
-* *ContosoIotHub* presents a valid **IoT Edge device identity certificate** whose thumbprint matches the one registered in IoT Hub
-* *EdgeGateway's* ability to decrypt data signed with its public key using its private key verifies the cryptographic key pair
+In summary, *ContosoIotHub* can trust *EdgeGateway* because *EdgeGateway* presents a valid **IoT Edge device identity certificate** whose thumbprint matches the one registered in IoT Hub.
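+
+You can compute a certificate's thumbprint locally to compare it with the value registered in IoT Hub. A sketch using openssl; the file name is illustrative, and IoT Hub thumbprints are typically the SHA-1 fingerprint with the colons removed:
+
+```bash
+# Print the SHA-1 fingerprint of the device identity certificate
+openssl x509 -noout -fingerprint -sha1 -in device-identity.cert.pem
+```
+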
> [!NOTE] > This example doesn't address Azure IoT Hub Device Provisioning Service (DPS), which has support for X.509 CA authentication with IoT Edge when provisioned with an enrollment group. Using DPS, you upload the CA certificate or an intermediate certificate, the certificate chain is verified, then the device is provisioned. To learn more, see [DPS X.509 certificate attestation](../iot-dps/concepts-x509-attestation.md).
stateDiagram-v2
## Device verifies gateway identity
-How does *TempSensor* verify it's communicating with the genuine *EdgeGateway?* When *TempSensor* wants to talk to the *EdgeGateway*, *TempSensor* needs *EdgeGateway* to show an ID. The ID must be issued by an authority that *EdgeGateway* trusts.
+How does *TempSensor* verify it's communicating with the genuine *EdgeGateway?* When *TempSensor* wants to talk to the *EdgeGateway*, *TempSensor* needs *EdgeGateway* to show an ID. The ID must be issued by an authority that *TempSensor* trusts.
:::image type="content" source="./media/iot-edge-certs/verify-gateway-identity.svg" alt-text="Sequence diagram showing certificate exchange from gateway device to IoT Edge device with certificate verification using the private root certificate authority.":::
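
You can reproduce this kind of check manually with openssl, which verifies that a certificate chains up to a trusted root. A sketch with illustrative file names, assuming the gateway's device CA certificate was issued directly by the root CA:

```bash
# Verify the gateway's device CA certificate against the root CA certificate
openssl verify -CAfile azure-iot-test-only.root.ca.cert.pem iot-edge-device-ca-gateway.cert.pem
```
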
iot-edge Tutorial Configure Est Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-configure-est-server.md
The Dockerfile uses Ubuntu 18.04, a [Cisco library called `libest`](https://gith
Each device requires the Certificate Authority (CA) certificate that is associated to a device identity certificate.
-1. On the IoT Edge device, create the `/var/secrets` directory if it doesn't exist then change directory to it.
+1. On the IoT Edge device, create the `/var/aziot/certs` directory if it doesn't exist, and then change to it.
```bash
- # Create the /var/secrets directory if it doesn't exist
- sudo mkdir /var/secrets
+ # Create the /var/aziot/certs directory if it doesn't exist
+ sudo mkdir -p /var/aziot/certs
- # Change directory to /var/secrets
- cd /var/secrets
+ # Change directory to /var/aziot/certs
+ cd /var/aziot/certs
```
-1. Retrieve the CA certificate from the EST server into the `/var/secrets` directory and name it `cacert.crt.pem`.
+1. Retrieve the CA certificate from the EST server into the `/var/aziot/certs` directory and name it `cacert.crt.pem`.
```bash openssl s_client -showcerts -verify 5 -connect localhost:8085 </dev/null | sudo awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".pem"; print >out}' && sudo cp cert2.pem cacert.crt.pem
Each device requires the Certificate Authority (CA) certificate that is associat
1. Certificates should be owned by the key service user **aziotks**. Set the ownership to **aziotks** for all the certificate files. ```bash
- sudo chown aziotks:aziotks /var/secrets/*.pem
+ sudo chown aziotks:aziotks /var/aziot/certs/*.pem
``` ## Provision IoT Edge device using DPS
Using Device Provisioning Service allows you to automatically issue and renew ce
### Upload CA certificate to DPS 1. If you don't have a Device Provisioning Service linked to IoT Hub, see [Quickstart: Set up the IoT Hub Device Provisioning Service with the Azure portal](../iot-dps/quick-setup-auto-provision.md).
-1. Transfer the `cacert.crt.pem` file from your device to a computer with access to the Azure portal such as your development computer. An easy way to transfer the certificate is to remotely connect to your device, display the certificate using the command `cat /var/secrets/cacert.crt.pem`, copy the entire output, and paste the contents to a new file on your development computer.
+1. Transfer the `cacert.crt.pem` file from your device to a computer with access to the Azure portal such as your development computer. An easy way to transfer the certificate is to remotely connect to your device, display the certificate using the command `cat /var/aziot/certs/cacert.crt.pem`, copy the entire output, and paste the contents to a new file on your development computer.
1. In the [Azure portal](https://portal.azure.com), navigate to your instance of IoT Hub Device Provisioning Service. 1. Under **Settings**, select **Certificates**, then **+Add**.
On the IoT Edge device, update the IoT Edge configuration file to use device cer
# Optional if the EST server's TLS certificate is already trusted by the system's CA certificates. [cert_issuance.est] trusted_certs = [
- "file:///var/secrets/cacert.crt.pem",
+ "file:///var/aziot/certs/cacert.crt.pem",
] # The default username and password for libest
iot-hub-device-update Delta Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/delta-updates.md
The following table provides a list of the content needed, where to retrieve the
| Binary Name | Where to acquire | How to install | |--|--|--|
-| DiffGen | [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) Github repo | Select _Microsoft.Azure.DeviceUpdate.Diffs_ under the Packages section on the right side of the page. From there you can install from the cmd line or select _package.nupkg_ under the Assets section on the right side of the page to download the package. [Learn more about NuGet packages](https://learn.microsoft.com/nuget/).|
+| DiffGen | [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo | Select _Microsoft.Azure.DeviceUpdate.Diffs_ under the Packages section on the right side of the page. From there you can install from the cmd line or select _package.nupkg_ under the Assets section on the right side of the page to download the package. [Learn more about NuGet packages](https://learn.microsoft.com/nuget/).|
| .NET (Runtime) | Via Terminal / Package Managers | [Instructions for Linux](/dotnet/core/install/linux). Only the Runtime is required. | ### Dependencies
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
Now that you have a device ID and key, use the sample code to start sending devi
>If you're following the Azure CLI steps for this tutorial, run the sample code in a separate session. That way, you can allow the sample code to continue running while you follow the rest of the CLI steps. 1. If you didn't already do so as part of the prerequisites, download or clone the [Azure IoT SDK for C# repo](https://github.com/Azure/azure-iot-sdk-csharp) from GitHub now.
-1. In the sample folder, navigate to the `/iothub/device/samples/getting started/RoutingTutorial/SimulatedDevice/` folder.
-1. Install the Azure IoT C# SDK and necessary dependencies as specified in the `SimulatedDevice.csproj` file:
+1. From the folder where you downloaded or cloned the SDK, navigate to the `azure-iot-sdk-csharp\iothub\device\samples\how to guides\HubRoutingSample` folder.
+1. Install the Azure IoT C# SDK and necessary dependencies as specified in the `HubRoutingSample.csproj` file:
```console dotnet restore
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md
Key Vault certificates support provides for management of your x509 certificates
>[!Note] >Non-partnered providers/authorities are also allowed but will not support the auto renewal feature.
+For details on certificate creation, see [Certificate creation methods](create-certificate.md).
+ ## Composition of a Certificate When a Key Vault certificate is created, an addressable key and secret are also created with the same name. The Key Vault key allows key operations and the Key Vault secret allows retrieval of the certificate value as a secret. A Key Vault certificate also contains public x509 certificate metadata.
TLS certificates can help encrypt communications over the internet and establish
A certificate can help secure the code/script of software, thereby ensuring that the author can share the software over the internet without being changed by malicious entities. Furthermore, once the author signs the code using a certificate leveraging the code signing technology, the software is marked with a stamp of authentication displaying the author and their website. Therefore, the certificate used in code signing helps validate the software's authenticity, promoting end-to-end security. ## Next steps
+- [Certificate creation methods](create-certificate.md)
- [About Key Vault](../general/overview.md) - [About keys, secrets, and certificates](../general/about-keys-secrets-certificates.md) - [About keys](../keys/about-keys.md)
load-testing How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-assign-roles.md
Title: Manage roles in Azure Load Testing
-description: Learn how to access to an Azure Load Testing resource using Azure role-based access control (Azure RBAC).
+description: Learn how to manage access to an Azure load testing resource using Azure role-based access control (Azure RBAC).
Previously updated : 03/15/2022 Last updated : 11/07/2022 # Manage access to Azure Load Testing
-In this article, you learn how to manage access (authorization) to an Azure Load Testing resource. [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Users in your Azure Active Directory (Azure AD) are assigned specific roles, which grant access to resources.
+In this article, you learn how to manage access (authorization) to an Azure load testing resource. [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. You can grant role-based access to users using the Azure portal, Azure Command-Line tools, or Azure Management APIs.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
To assign Azure roles, you must have:
* `Microsoft.Authorization/roleAssignments/write` permissions, such as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner).
-## Default roles
+## Roles in Azure Load Testing
-Azure Load Testing resources have three built-in roles that are available by default. When you add users to a resource, you can assign one of the built-in roles to grant permissions:
+In Azure Load Testing, access is granted by assigning the appropriate Azure role to users, groups, and applications at the load testing resource scope. The following built-in roles are supported by a load testing resource:
-| Role | Access level |
+| Role | Description |
| | | | **Load Test Reader** | Read-only actions in the Load Testing resource. Readers can list and view tests and test runs in the resource. Readers can't create, update, or run tests. | | **Load Test Contributor** | View, create, edit, or delete (where applicable) tests and test runs in a Load Testing resource. |
You'll encounter this message if your account doesn't have the necessary permiss
> [!IMPORTANT] > Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a resource may not have owner access to the resource group that contains the resource. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md#how-azure-rbac-works).
-## Manage resource access
+## Role permissions
-You can manage access to the Azure Load Testing resource by using the Azure portal:
+The following tables describe the specific permissions given to each role. This can include *Actions*, which grant control-plane permissions, *NotActions*, which restrict them, and *DataActions*, which grant data-plane permissions.
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+### Load Test Owner
-1. On the left pane, select **Access Control (IAM)**, and then select **Add role assignment**.
+A Load Test Owner can manage everything, including access. The following table shows the permissions granted for the role:
- :::image type="content" source="media/how-to-assign-roles/load-test-access-control.png" alt-text="Screenshot that shows how to configure access control.":::
+| Actions | Description |
+| - | -- |
+| Microsoft.Resources/deployments/* | Create and manage resource group deployments. |
+| Microsoft.Resources/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+| Microsoft.Insights/alertRules/* | Create and manage alert rules. |
+| Microsoft.Authorization/*/read | Read authorization. |
+| Microsoft.LoadTestService/* | Create and manage load testing resources. |
-1. Assign one of the Azure Load Testing [built-in roles](#default-roles). For details about how to assign roles, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+| DataActions | Description |
+| - | -- |
+| Microsoft.LoadTestService/loadtests/* | Start, stop, and manage load tests. |
- The role assignments might take a few minutes to become active for your account. Refresh the webpage for the user interface to reflect the updated permissions.
+### Load Test Contributor
- :::image type="content" source="media/how-to-assign-roles/add-role-assignment.png" alt-text="Screenshot that shows the role assignment screen.":::
+A Load Test Contributor can manage everything except access. The following table shows the permissions granted for the role:
-Alternatively, you can manage access without using the Azure portal:
+| Actions | Description |
+| - | -- |
+| Microsoft.Resources/deployments/* | Create and manage resource group deployments. |
+| Microsoft.Resources/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+| Microsoft.Insights/alertRules/* | Create and manage alert rules. |
+| Microsoft.Authorization/*/read | Read authorization. |
+| Microsoft.LoadTestService/*/read | Read load testing resources. |
-- [PowerShell](../role-based-access-control/role-assignments-powershell.md)-- [Azure CLI](../role-based-access-control/role-assignments-cli.md)-- [REST API](../role-based-access-control/role-assignments-rest.md)-- [Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)
+| DataActions | Description |
+| - | -- |
+| Microsoft.LoadTestService/loadtests/* | Start, stop, and manage load tests. |
+
+### Load Test Reader
+
+A Load Test Reader can view all the resources in a load testing resource but can't make any changes. The following table shows the permissions granted for the role:
+
+| Actions | Description |
+| - | -- |
+| Microsoft.Resources/deployments/* | Create and manage resource group deployments. |
+| Microsoft.Resources/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+| Microsoft.Insights/alertRules/* | Create and manage alert rules. |
+| Microsoft.Authorization/*/read | Read authorization. |
+| Microsoft.LoadTestService/*/read | Read load testing resources. |
+
+| DataActions | Description |
+| - | -- |
+| Microsoft.LoadTestService/loadtests/readTest/action | Read load tests. |
+
+## Configure Azure RBAC for your load testing resource
+
+The following section shows you how to configure Azure RBAC on your load testing resource through the Azure portal and PowerShell.
+
+### Configure Azure RBAC using the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and open your load testing resource from the **Azure Load Testing** page.
+
+1. Select **Access control (IAM)** and select a role from the list of available roles. You can choose any of the available built-in roles that an Azure load testing resource supports or any custom role you might have defined. Assign the role to the user to whom you want to give permissions.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+
+#### Remove role assignments from a user
+
+You can remove the access permission for a user who isn't managing the Azure load testing resource, or who no longer works for the organization. The following steps show how to remove the role assignments from a user. For detailed steps, see [Remove Azure role assignments](/azure/role-based-access-control/role-assignments-remove):
+
+1. Open **Access control (IAM)** at a scope, such as management group, subscription, resource group, or resource, where you want to remove access.
+
+1. Select the **Role assignments** tab to view all the role assignments at this scope.
+
+1. In the list of role assignments, add a checkmark next to the user with the role assignment you want to remove.
+
+1. Select **Remove**, and then select **Yes** to confirm.
+
+### Configure Azure RBAC using PowerShell
+
+You can also configure role-based access to a load testing resource using the following [Azure PowerShell cmdlets](/azure/role-based-access-control/role-assignments-powershell):
+
+* [Get-AzRoleDefinition](/powershell/module/Az.Resources/Get-AzRoleDefinition) lists all Azure roles that are available for assignment. You can use this cmdlet with the `Name` parameter to list all the actions that a specific role can perform.
+
+ ```azurepowershell-interactive
+ Get-AzRoleDefinition -Name 'Load Test Contributor'
+ ```
+
+ The following is example output:
+
+ ```output
+ Name : Load Test Contributor
+ Id : 00000000-0000-0000-0000-000000000000
+ IsCustom : False
+ Description : View, create, update, delete and execute load tests. View and list load test resources but can not make any changes.
+ Actions : {Microsoft.LoadTestService/*/read, Microsoft.Authorization/*/read, Microsoft.Resources/deployments/*, Microsoft.Resources/subscriptions/resourceGroups/read…}
+ NotActions : {}
+ DataActions : {Microsoft.LoadTestService/loadtests/*}
+ NotDataActions : {}
+ AssignableScopes : {/}
+ ```
+
+* [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) lists Azure role assignments at the specified scope. Without any parameters, this cmdlet returns all the role assignments made under the subscription. Use the `ExpandPrincipalGroups` parameter to list access assignments for the specified user, as well as the groups that the user belongs to.
+
+ **Example**: Use the following cmdlet to list all the users and their roles within a load testing resource.
+
+ ```azurepowershell-interactive
+ Get-AzRoleAssignment -Scope '/subscriptions/<SubscriptionID>/resourcegroups/<Resource Group Name>/Providers/Microsoft.LoadTestService/loadtests/<Load Test Name>'
+ ```
+
+* Use [New-AzRoleAssignment](/powershell/module/Az.Resources/New-AzRoleAssignment) to assign access to users, groups, and applications to a particular scope.
+
+ **Example**: Use the following command to assign the "Load Test Reader" role to a user at the load testing resource scope.
+
+ ```azurepowershell-interactive
+ New-AzRoleAssignment -SignInName <sign-in Id of a user you wish to grant access> -RoleDefinitionName 'Load Test Reader' -Scope '/subscriptions/<SubscriptionID>/resourcegroups/<Resource Group Name>/Providers/Microsoft.LoadTestService/loadtests/<Load Testing resource name>'
+ ```
+
+* Use [Remove-AzRoleAssignment](/powershell/module/Az.Resources/Remove-AzRoleAssignment) to remove access of a specified user, group, or application from a particular scope.
+
+ **Example**: Use the following command to remove the user from the Load Test Reader role in the load testing resource scope.
+
+ ```azurepowershell-interactive
+ Remove-AzRoleAssignment -SignInName <sign-in Id of a user you wish to remove> -RoleDefinitionName 'Load Test Reader' -Scope '/subscriptions/<SubscriptionID>/resourcegroups/<Resource Group Name>/Providers/Microsoft.LoadTestService/loadtests/<Load Testing resource name>'
+ ```
## Next steps
load-testing Resource Supported Azure Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-supported-azure-resource-types.md
To learn how to configure your load test, see [Monitor server-side application m
This section lists the Azure resource types that Azure Load Testing supports for server-side monitoring.
-* API Management
-* App Service
-* App Service plan
-* Application Insights
+* Azure API Management
+* Azure App Service
+* Azure App Service plan
+* Azure Application Insights
* Azure Cache for Redis
* Azure Cosmos DB
* Azure Database for MariaDB server
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
For more information, review the [Azurite documentation](https://github.com/Azur
* [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app.
- * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.4585) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
+ * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.4868) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
* If you have an installation that's earlier than these versions, uninstall that version first, or make sure that the PATH environment variable points at the version that you download and install.
logic-apps Logic Apps Scenario Function Sb Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-function-sb-trigger.md
Title: Call logic apps with Azure Functions
-description: Call or trigger logic apps by using Azure Functions and Azure Service Bus.
+ Title: Set up long-running tasks by calling workflows with Azure Functions
+description: Set up long-running tasks by creating an Azure Logic Apps workflow that monitors and responds to messages or events and uses Azure Functions to trigger the workflow.
ms.suite: integration Previously updated : 11/08/2019 Last updated : 11/7/2022
+#Customer intent: As a logic apps developer, I want to set up a long-running task by creating a logic app workflow that monitors and responds to messages or events and uses Azure Functions to call the workflow.
-# Call or trigger logic apps by using Azure Functions and Azure Service Bus
+# Set up long-running tasks by calling logic app workflows with Azure Functions
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-You can use [Azure Functions](../azure-functions/functions-overview.md) to trigger a logic app when you need to deploy a long-running listener or task. For example, you can create a function that listens in on an [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) queue and immediately fires a logic app as a push trigger.
+When you need to deploy a long-running listener or task, you can create a logic app workflow that uses the Request trigger and Azure Functions to call that trigger and run the workflow.
-## Prerequisites
+For example, you can create a function that listens for messages that arrive in an Azure Service Bus queue. When this event happens, the function calls the Request trigger, which works as a push trigger to automatically run your workflow.
+
+This how-to guide shows how to create a logic app workflow that starts with the Request trigger. You then create a function that listens to a Service Bus queue. When a message arrives in the queue, the function calls the endpoint created by the Request trigger to run your workflow.
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+> [!NOTE]
+>
+> Although you can implement this behavior using either a Consumption or Standard logic app workflow,
+> this example continues with a Consumption workflow.
-* An Azure Service Bus namespace. If you don't have a namespace, [create your namespace first](../service-bus-messaging/service-bus-create-namespace-portal.md).
+## Prerequisites
-* A function app, which is a container for your functions. If you don't have a function app, [create your function app first](../azure-functions/functions-get-started.md), and make sure that you select .NET as the runtime stack.
+* An Azure account and subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+* A Service Bus namespace. If you don't have a namespace, [create your namespace first](../service-bus-messaging/service-bus-create-namespace-portal.md). For more information, see [What is Azure Service Bus?](../service-bus-messaging/service-bus-messaging-overview.md)
-## Create logic app
+* A function app, which is a container for your functions. If you don't have a function app, [create your function app first](../azure-functions/functions-get-started.md), and make sure that you select .NET for the **Runtime stack** property.
-For this scenario, you have a function running each logic app that you want to trigger. First, create a logic app that starts with an HTTP request trigger. The function calls that endpoint whenever a queue message is received.
+* Basic knowledge about [how to create a Consumption logic app workflow](quickstart-create-first-logic-app-workflow.md).
-1. Sign in to the [Azure portal](https://portal.azure.com), and create blank logic app.
+## Create a logic app workflow
- If you're new to logic apps, review [Quickstart: Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+1. In the [Azure portal](https://portal.azure.com), create a blank Consumption logic app by selecting the **Blank Logic App** template.
-1. In the search box, enter `http request`. From the triggers list, select the **When a HTTP request is received** trigger.
+1. After the designer opens, under the designer search box, select **Built-in**. In the search box, enter **request**.
- ![Select trigger](./media/logic-apps-scenario-function-sb-trigger/when-http-request-received-trigger.png)
+1. From the triggers list, select the trigger named **When a HTTP request is received**.
- With the Request trigger, you can optionally enter a JSON schema to use with the queue message. JSON schemas help the Logic App Designer understand the structure for the input data, and make the outputs easier for you to use in your workflow.
+ :::image type="content" source="./media/logic-apps-scenario-function-sb-trigger/when-http-request-received-trigger.png" alt-text="Screenshot of the designer in the portal. The search box contains 'http request.' Under 'Triggers,' 'When a HTTP request is received' is highlighted.":::
-1. To specify a schema, enter the schema in the **Request Body JSON Schema** box, for example:
+ With the Request trigger, you can optionally enter a JSON schema to use with the queue message. JSON schemas help the designer understand the structure for the input data, and make the outputs easier for you to use in your workflow.
- ![Specify JSON schema](./media/logic-apps-scenario-function-sb-trigger/when-http-request-received-trigger-schema.png)
+1. To specify a schema, enter the schema in the **Request Body JSON Schema** box.
+
+ :::image type="content" source="./media/logic-apps-scenario-function-sb-trigger/when-http-request-received-trigger-schema.png" alt-text="Screenshot of the details of an HTTP request trigger. Some JSON code is visible in the 'Request Body JSON Schema' box.":::
If you don't have a schema, but you have a sample payload in JSON format, you can generate a schema from that payload.
For this scenario, you have a function running each logic app that you want to t
1. Under **Enter or paste a sample JSON payload**, enter your sample payload, and then select **Done**.
- ![Enter sample payload](./media/logic-apps-scenario-function-sb-trigger/enter-sample-payload.png)
-
- This sample payload generates this schema that appears in the trigger:
-
- ```json
- {
- "type": "object",
- "properties": {
- "address": {
- "type": "object",
- "properties": {
- "number": {
- "type": "integer"
- },
- "street": {
- "type": "string"
- },
- "city": {
- "type": "string"
- },
- "postalCode": {
- "type": "integer"
- },
- "country": {
- "type": "string"
+ :::image type="content" source="./media/logic-apps-scenario-function-sb-trigger/enter-sample-payload.png" alt-text="Screenshot of the details of an HTTP request trigger. Under 'Enter or paste a sample JSON payload,' some payload data is visible.":::
+
+ The sample payload that's pictured earlier generates the following schema, which appears in the trigger:
+
+ ```json
+ {
+ "type": "object",
+ "properties": {
+ "address": {
+ "type": "object",
+ "properties": {
+ "number": {
+ "type": "integer"
+ },
+ "street": {
+ "type": "string"
+ },
+ "city": {
+ "type": "string"
+ },
+ "postalCode": {
+ "type": "integer"
+ },
+ "country": {
+ "type": "string"
+ }
            }
        }
    }
- }
- ```
+ ```
-1. Add any other actions that you want to run after receiving the queue message.
+1. Under the trigger, add any other actions that you want to use to process the received message.
- For example, you can send an email with the Office 365 Outlook connector.
+ For example, you can add an action that sends email with the Office 365 Outlook connector.
-1. Save your logic app, which generates the callback URL for the trigger in this logic app. Later, you use this callback URL in the code for the Azure Service Bus Queue trigger.
+1. Save your logic app workflow.
- The callback URL appears in the **HTTP POST URL** property.
+ This step generates the callback URL for the Request trigger in your workflow. Later, you use this callback URL in the code for the Azure Service Bus Queue trigger. The callback URL appears in the **HTTP POST URL** property.
- ![Generated callback URL for trigger](./media/logic-apps-scenario-function-sb-trigger/callback-URL-for-trigger.png)
+ :::image type="content" source="./media/logic-apps-scenario-function-sb-trigger/callback-URL-for-trigger.png" alt-text="Screenshot of the details of an HTTP request trigger. Next to 'HTTP POST URL,' a URL is visible.":::
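Before you create the function, you can verify that the Request trigger endpoint works by calling the callback URL directly. The following is a minimal sketch in Python; the callback URL value is a placeholder for the one in your workflow's **HTTP POST URL** property, and the payload follows the sample address schema shown earlier:

```python
import requests

# Placeholder: paste the callback URL from the trigger's 'HTTP POST URL' property
callback_url = "https://prod-05.westus.logic.azure.com:443/workflows/<remaining-callback-URL>"

# A payload that matches the sample JSON schema shown earlier
payload = {
    "address": {
        "number": 1,
        "street": "Main Street",
        "city": "Seattle",
        "postalCode": 98101,
        "country": "USA"
    }
}

response = requests.post(callback_url, json=payload)
print(response.status_code)  # 202 (Accepted) indicates that a workflow run was triggered
```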
## Create a function
-Next, create the function that acts as the trigger and listens to the queue.
+Next, create the function that listens to the queue and calls the endpoint on the Request trigger when a message arrives.
-1. In the Azure portal, open and expand your function app, if not already open.
+1. In the [Azure portal](https://portal.azure.com), open your function app.
-1. Under your function app name, expand **Functions**. On the **Functions** pane, select **New function**.
+1. On the function app navigation menu, select **Functions**. On the **Functions** pane, select **Create**.
- ![Expand "Functions" and select "New function"](./media/logic-apps-scenario-function-sb-trigger/add-new-function-to-function-app.png)
+ :::image type="content" source="./media/logic-apps-scenario-function-sb-trigger/add-new-function-to-function-app.png" alt-text="Screenshot of a function app with 'Functions' highlighted on the function app menu. The 'Functions' page is opened, and 'Create' is highlighted.":::
-1. Select this template based on whether you created a new function app where you selected .NET as the runtime stack, or you're using an existing function app.
+1. Under **Select a template**, select the template named **Azure Service Bus Queue trigger**. After the **Template details** section appears, which shows different options based on your template selection, provide the following information:
- * For new function apps, select this template: **Service Bus Queue trigger**
+ | Property | Value | Description |
+ |-|-|-|
+ | **New Function** | <*function-name*> | Enter a name for your function. |
+ | **Service Bus connection** | <*Service-Bus-connection*> | Select **New** to set up the connection for your Service Bus queue, which uses the Service Bus SDK `OnMessageReceive()` listener. |
+ | **Queue name** | <*queue-name*> | Enter the name for your queue. |
- ![Select template for new function app](./media/logic-apps-scenario-function-sb-trigger/current-add-queue-trigger-template.png)
+ :::image type="content" source="./media/logic-apps-scenario-function-sb-trigger/current-add-queue-trigger-template.png" alt-text="Screenshot of the 'Create function' pane with 'Azure Service Bus Queue trigger' highlighted, and template example details entered.":::
- * For an existing function app, select this template: **Service Bus Queue trigger - C#**
+1. When you're done, select **Create**.
- ![Select template for existing function app](./media/logic-apps-scenario-function-sb-trigger/legacy-add-queue-trigger-template.png)
+ The Azure portal now shows the **Overview** page for your new Azure Service Bus Queue trigger function.
-1. On the **Azure Service Bus Queue trigger** pane, provide a name for your trigger, and set up the **Service Bus connection** for the queue, which uses the Azure Service Bus SDK `OnMessageReceive()` listener, and select **Create**.
+1. Now, write a basic function to call the endpoint for the logic app workflow that you created earlier. Before you write your function, review the following considerations:
-1. Write a basic function to call the previously created logic app endpoint by using the queue message as a trigger. Before you write your function, review these considerations:
+ * Trigger the function by using the queue message.
- * This example uses the `application/json` message content type, but you can change this type as necessary.
-
* Due to possible concurrently running functions, high volumes, or heavy loads, avoid instantiating the [HTTPClient class](/dotnet/api/system.net.http.httpclient) with the `using` statement and directly creating HTTPClient instances per request. For more information, see [Use HttpClientFactory to implement resilient HTTP requests](/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests#issues-with-the-original-httpclient-class-available-in-net-core).

* If possible, reuse the instance of HTTP clients. For more information, see [Manage connections in Azure Functions](../azure-functions/manage-connections.md).
- This example uses the [`Task.Run` method](/dotnet/api/system.threading.tasks.task.run) in [asynchronous](/dotnet/csharp/language-reference/keywords/async) mode. For more information, see [Asynchronous programming with async and await](/dotnet/csharp/programming-guide/concepts/async/).
+ The following example uses the [`Task.Run` method](/dotnet/api/system.threading.tasks.task.run) in [asynchronous](/dotnet/csharp/language-reference/keywords/async) mode. For more information, see [Asynchronous programming with async and await](/dotnet/csharp/programming-guide/concepts/async/). The example also uses the `application/json` message content type, but you can change this type as necessary.
```csharp
using System;
Next, create the function that acts as the trigger and listens to the queue.
using System.Net.Http;
using System.Text;
- // Can also fetch from App Settings or environment variable
+ // Set up the URI for the logic app workflow. You can also get this value on the logic app's 'Overview' pane, under the trigger history, or from an environment variable.
private static string logicAppUri = @"https://prod-05.westus.logic.azure.com:443/workflows/<remaining-callback-URL>";
- // Reuse the instance of HTTP clients if possible: https://learn.microsoft.com/azure/azure-functions/manage-connections
+ // Reuse the instance of HTTP clients if possible. For more information, see https://learn.microsoft.com/azure/azure-functions/manage-connections.
private static HttpClient httpClient = new HttpClient();

public static async Task Run(string myQueueItem, TraceWriter log)
Next, create the function that acts as the trigger and listens to the queue.
}
```
-1. To test the function, add a queue message by using a tool such as the [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer).
+## Test your logic app workflow
+
+For testing, add a message to your Service Bus queue by using the following steps or another tool:
+
+1. In the [Azure portal](https://portal.azure.com), open your Service Bus namespace.
+
+1. On the Service Bus namespace navigation menu, select **Queues**.
+
+ :::image type="content" source="./media/logic-apps-scenario-function-sb-trigger/service-bus-namespace-queues.png" alt-text="Screenshot of a Service Bus namespace. On the navigation menu, 'Queues' is highlighted.":::
+
+1. Select the Service Bus queue that you linked to your function earlier using a Service Bus connection.
+
+1. On the queue navigation menu, select **Service Bus Explorer**, and then on the toolbar, select **Send messages**.
+
+ :::image type="content" source="./media/logic-apps-scenario-function-sb-trigger/select-service-bus-explorer.png" alt-text="Screenshot of a Service Bus queue page in the portal, with 'Send messages' highlighted. On the navigation menu, 'Service Bus Explorer' is highlighted.":::
+
+1. On the **Send messages** pane, specify the message to send to your Service Bus queue.
- The logic app triggers immediately after the function receives the message.
+ This message triggers your logic app workflow.
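If you'd rather send the test message from code than from the portal, the following is a minimal sketch that uses the `azure-servicebus` Python package. The connection string and queue name are placeholders for your own values, and the message body follows the sample address schema from earlier:

```python
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholders: use your Service Bus namespace connection string and queue name
connection_string = "<Service-Bus-namespace-connection-string>"
queue_name = "<queue-name>"

# The message body should match the JSON schema that the Request trigger expects
body = json.dumps({
    "address": {
        "number": 1,
        "street": "Main Street",
        "city": "Seattle",
        "postalCode": 98101,
        "country": "USA"
    }
})

with ServiceBusClient.from_connection_string(connection_string) as client:
    with client.get_queue_sender(queue_name) as sender:
        sender.send_messages(ServiceBusMessage(body))
```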
## Next steps
-* [Call, trigger, or nest workflows by using HTTP endpoints](../logic-apps/logic-apps-http-endpoint.md)
+* [Call, trigger, or nest workflows by using HTTP endpoints](logic-apps-http-endpoint.md)
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-deploy-model-custom-output.md
Sometimes you need to execute inference having a higher control of what is being
In any of those cases, Batch Deployments allow you to take control of the jobs' output by writing directly to the output of the batch deployment job. In this tutorial, we'll see how to deploy a model to perform batch inference and write the outputs in `parquet` format by appending the predictions to the original input data.
-## Prerequisites
--
-* A model registered in the workspace. In this tutorial, we'll use an MLflow model. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
-* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `heart-classifier-batch`.
-* You must have a compute created where to deploy the deployment. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
-
## About this sample

This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence). The model has been trained using an `XGBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to `cli/endpoints/batch` if you are using the Azure CLI or to `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
### Follow along in Jupyter Notebooks

You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [custom-output-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/custom-output-batch.ipynb).
+## Prerequisites
++
+* A model registered in the workspace. In this tutorial, we'll use an MLflow model. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `heart-classifier-batch`.
+* You must have a compute cluster created where the deployment can run. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+
## Creating a batch deployment with a custom output

In this example, we are going to create a deployment that can write directly to the output folder of the batch deployment job. The deployment will use this feature to write custom parquet files.
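As a rough sketch of the idea, a scoring script for such a deployment can write wherever the job's output folder is, which batch deployments expose through the `AZUREML_BI_OUTPUT_PATH` environment variable. The scoring line below is a placeholder; the complete script lives in the sample repository:

```python
import os
from pathlib import Path

import pandas as pd

def init():
    global output_path
    # Batch deployment jobs expose their output folder through this environment variable
    output_path = os.environ["AZUREML_BI_OUTPUT_PATH"]

def run(mini_batch):
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        # ...score `data` with the model and append the predictions as a new column here...
        output_file = Path(output_path) / (Path(file_path).stem + ".parquet")
        data.to_parquet(output_file)
    # Return one item per processed file so the job can track progress
    return [str(path) for path in mini_batch]
```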
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-image-processing-batch.md
[!INCLUDE [ml v2](../../../includes/machine-learning-dev-v2.md)]
-Batch Endpoints can be used for processing tabular data, but also any other file type like images. Those deployments are supported in both MLflow and custom models. In this tutorial we will learn how to deploy a model that classifies images according to the ImageNet taxonomy.
+Batch Endpoints can be used for processing tabular data, but also any other file type like images. Those deployments are supported in both MLflow and custom models. In this tutorial, we will learn how to deploy a model that classifies images according to the ImageNet taxonomy.
-## Prerequisites
--
-* You must have an endpoint already created. If you don't please follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `imagenet-classifier-batch`.
-* You must have a compute created where to deploy the deployment. If you don't please follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+## About this sample
-## About the model used in the sample
-
-The model we are going to work with was built using TensorFlow along with the RestNet architecture ([Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)). This model has the following constrains that are important to keep in mind for deployment:
+The model we are going to work with was built using TensorFlow along with the ResNet architecture ([Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)). A sample of this model can be downloaded from `https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip`. The model has the following constraints that are important to keep in mind for deployment:
* It works with images of size 224x224 (tensors of `(224, 224, 3)`).
* It requires inputs to be scaled to the range `[0,1]`.
-A sample of this model can be downloaded from `https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip`.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to `cli/endpoints/batch` if you are using the Azure CLI or to `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
### Follow along in Jupyter Notebooks

You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [imagenet-classifier-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/imagenet-classifier-batch.ipynb).
+## Prerequisites
++
+* You must have a batch endpoint already created. This example assumes the endpoint is named `imagenet-classifier-batch`. If you don't have one, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+* You must have a compute cluster created where the deployment can run. This example assumes the name of the compute is `cpu-cluster`. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute).
+
## Image classification with batch deployments

In this example, we are going to learn how to deploy a deep learning model that can classify a given image according to the [taxonomy of ImageNet](https://image-net.org/).
Batch Endpoint can only deploy registered models so we need to register it. You
```python
import os
import requests
from zipfile import ZipFile

# Download the model archive and save it to a local file
file = 'model.zip'
response = requests.get('https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip', allow_redirects=True)
with open(file, 'wb') as f:
    f.write(response.content)

# Extract the model; extractall() returns None, so track the target path explicitly
os.makedirs("imagenet-classifier", exist_ok=True)
with ZipFile(file, 'r') as zip:
    zip.extractall(path="imagenet-classifier")
model_path = "imagenet-classifier"
```
Batch Endpoint can only deploy registered models so we need to register it. You
### Creating a scoring script
-We need to create a scoring script that can read the images provided by the batch deployment and return the scores of the model. The following script does the following:
+We need to create a scoring script that can read the images provided by the batch deployment and return the scores of the model. The following script:
> [!div class="checklist"]
> * Indicates an `init` function that loads the model using the `keras` module in `tensorflow`.
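For illustration only, a bare-bones sketch of that structure might look like the following; the full script in the sample repository also handles batching, preprocessing, and mapping predictions back to ImageNet class names:

```python
import os

import numpy as np
import tensorflow as tf

model = None

def init():
    global model
    # AZUREML_MODEL_DIR points at the root folder of the registered model
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = tf.keras.models.load_model(model_path)

def run(mini_batch):
    results = []
    for image_path in mini_batch:
        # Load each image at the size the model expects and scale it to the range [0,1]
        image = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
        array = tf.keras.preprocessing.image.img_to_array(image) / 255.0
        probabilities = model.predict(np.expand_dims(array, axis=0))
        results.append(int(np.argmax(probabilities, axis=-1)[0]))
    return results
```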
Once the scoring script is created, it's time to create a batch deployment for it
ml_client.batch_deployments.begin_create_or_update(deployment) ```
-1. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+1. Although you can invoke a specific deployment inside an endpoint, you will usually want to invoke the endpoint itself, and let the endpoint decide which deployment to use. Such a deployment is called the "default" deployment. This gives you the possibility of changing the default deployment (and hence changing the model serving the deployment) without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
# [Azure ML CLI](#tab/cli)
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
For no-code-deployment, Azure Machine Learning
> [!NOTE] > For more information about the supported file types in batch endpoints with MLflow, view [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
-## Prerequisites
--
-* You must have a MLflow model. If your model is not in MLflow format and you want to use this feature, you can [convert your custom ML model to MLflow format](../how-to-convert-custom-model-to-mlflow.md).
## About this example

This example shows how you can deploy an MLflow model to a batch endpoint to perform batch predictions. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence). The model has been trained using an `XGBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to `cli/endpoints/batch` if you are using the Azure CLI or to `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
### Follow along in Jupyter Notebooks

You can follow along with this sample in the following notebooks. In the cloned repository, open the notebook: [mlflow-for-batch-tabular.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/mlflow-for-batch-tabular.ipynb).
+## Prerequisites
++
+* You must have an MLflow model. If your model is not in MLflow format and you want to use this feature, you can [convert your custom ML model to MLflow format](../how-to-convert-custom-model-to-mlflow.md).
+
## Steps

Follow these steps to deploy an MLflow model to a batch endpoint for running batch inference over new data:
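In outline, the deployment step looks like the following sketch with the Azure ML SDK v2. It assumes an `ml_client` workspace handle, the endpoint and compute names from the prerequisites, and a hypothetical registered model name; because the model is in MLflow format, no scoring script is needed:

```python
from azure.ai.ml.entities import BatchDeployment

# Hypothetical registered model name and version; adjust to your workspace
model = ml_client.models.get(name="heart-classifier", version="1")

deployment = BatchDeployment(
    name="classifier-xgboost-mlflow",
    endpoint_name="heart-classifier-batch",
    model=model,
    compute="cpu-cluster",
    instance_count=2,
    mini_batch_size=10,
)
ml_client.batch_deployments.begin_create_or_update(deployment)
```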
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-nlp-processing-batch.md
Batch Endpoints can be used for processing tabular data, but also any other file type like text. Those deployments are supported in both MLflow and custom models. In this tutorial, we will learn how to deploy a model from HuggingFace that can perform text summarization of long sequences of text.
-## Prerequisites
--
-* You must have an endpoint already created. If you don't please follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `text-summarization-batch`.
-* You must have a compute created where to deploy the deployment. If you don't please follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
-
-## About the model used in the sample
+## About this sample
The model we are going to work with was built using the popular library transformers from HuggingFace along with [a pre-trained model from Facebook with the BART architecture](https://huggingface.co/facebook/bart-large-cnn). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation](https://arxiv.org/abs/1910.13461). This model has the following constraints that are important to keep in mind for deployment:
The model we are going to work with was built using the popular library transfor
* It is trained for summarization of text in English.
* We are going to use TensorFlow as a backend.
-Due to the size of the model, it hasn't been included in this repository. Instead, you can generate a local copy using:
-
-```python
-from transformers import pipeline
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to `cli/endpoints/batch` if you are using the Azure CLI or to `sdk/endpoints/batch` if you are using our SDK for Python.
-model = pipeline("summarization", model="facebook/bart-large-cnn")
-model_local_path = 'bart-text-summarization/model'
-summarizer.save_pretrained(model_local_path)
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
```
-A local copy of the model will be placed at `bart-text-summarization/model`. We will use it during the course of this tutorial.
-
### Follow along in Jupyter Notebooks

You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [text-summarization-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/text-summarization-batch.ipynb).
+## Prerequisites
++
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `text-summarization-batch`.
+* You must have a compute cluster created where the deployment can run. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+* Due to the size of the model, it hasn't been included in this repository. Instead, you can generate a local copy with the following code. A local copy of the model will be placed at `bart-text-summarization/model`. We will use it during the course of this tutorial.
+
+ ```python
+ from transformers import pipeline
+
+ summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
+ model_local_path = 'bart-text-summarization/model'
+ summarizer.save_pretrained(model_local_path)
+ ```
+ ## NLP tasks with batch deployments In this example, we are going to learn how to deploy a deep learning model based on the BART architecture that can perform text summarization over text in English. The text will be placed in CSV files for convenience.
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-batch-endpoint.md
In this article, you will learn how to use batch endpoints to do batch scoring.
> [!TIP] > We suggest you to read the Scenarios sections (see the navigation bar at the left) to find more about how to use Batch Endpoints in specific scenarios including NLP, computer vision, or how to integrate them with other Azure services.
-## Prerequisites
---
-### About this example
+## About this example
In this example, we are going to deploy a model to solve the classic MNIST ("Modified National Institute of Standards and Technology") digit recognition problem to perform batch inferencing over large amounts of data (image files). In the first section of this tutorial, we are going to create a batch deployment with a model created using Torch. That deployment will become our default one in the endpoint. In the second half, [we are going to see how we can create a second deployment](#adding-deployments-to-an-endpoint) using a model created with TensorFlow (Keras), test it out, and then switch the endpoint to start using the new deployment as default.
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli/endpoints/batch
```
+### Follow along in Jupyter Notebooks
+
+You can follow along with this sample in the following notebooks. In the cloned repository, open the notebook: [mnist-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/mnist-batch.ipynb).
+
+## Prerequisites
++
+### Connect to your workspace
+
+First, let's connect to the Azure Machine Learning workspace where we are going to work.
+
+# [Azure ML CLI](#tab/cli)
+
+```azurecli
+az account set --subscription <subscription>
+az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+1. Import the required libraries:
+
+```python
+from azure.ai.ml import MLClient, Input
+from azure.ai.ml.entities import BatchEndpoint, BatchDeployment, Model, AmlCompute, Data, BatchRetrySettings
+from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
+from azure.identity import DefaultAzureCredential
+```
+
+2. Configure workspace details and get a handle to the workspace:
+
+```python
+subscription_id = "<subscription>"
+resource_group = "<resource-group>"
+workspace = "<workspace>"
+
+ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+```
+
+# [studio](#tab/studio)
+
+Open the [Azure ML studio portal](https://ml.azure.com) and log in using your credentials.
+++

### Create compute

Batch endpoints run on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](../how-to-create-attach-compute-cluster.md) and [Kubernetes clusters](../how-to-attach-kubernetes-anywhere.md). Clusters are a shared resource, so one cluster can host one or many batch deployments (along with other workloads, if desired).
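As a minimal sketch, assuming the `ml_client` handle from the previous step, you can create such a cluster with the Python SDK; the cluster name, VM size, and instance limits below are illustrative:

```python
from azure.ai.ml.entities import AmlCompute

# Illustrative values; pick the VM size and autoscale limits that fit your workload
compute_cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_v2",
    min_instances=0,
    max_instances=4,
)
ml_client.compute.begin_create_or_update(compute_cluster)
```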
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
An unmanaged compute target is *not* managed by Azure Machine Learning. You crea
Azure Machine Learning supports the following unmanaged compute types:
-* Your local computer
* Remote virtual machines
* Azure HDInsight
-* Azure Batch
* Azure Databricks
* Azure Data Lake Analytics
-* Azure Container Instance
-* Kubernetes
+* [Azure Synapse Spark pool](how-to-link-synapse-ml-workspaces.md) (preview)
-For more information, see [set up compute targets for model training and deployment](how-to-attach-compute-targets.md)
+ > [!TIP]
+ > Currently this requires the Azure Machine Learning SDK v1.
+* [Kubernetes](how-to-attach-kubernetes-anywhere.md)
+
+For more information, see [Manage compute resources](how-to-create-attach-compute-studio.md).
## Next steps
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
best_run, fitted_model = local_run.get_output()
## Forecasting with best model
-Use the best model iteration to forecast values for data that wasn't used to train the model.
+Use the best model iteration to forecast values for data that wasn't used to train the model.
+
+### Evaluating model accuracy with a rolling forecast
+
+Before you put a model into production, you should evaluate its accuracy on a test set held out from the training data. A best practice procedure is a so-called rolling evaluation, which rolls the trained forecaster forward in time over the test set, averaging error metrics over several prediction windows to obtain statistically robust estimates for some set of chosen metrics. Ideally, the test set for the evaluation is long relative to the model's forecast horizon. Estimates of forecasting error may otherwise be statistically noisy and, therefore, less reliable.
+
+For example, suppose you train a model on daily sales to predict demand up to two weeks (14 days) into the future. If there is sufficient historic data available, you might reserve the final several months, or even a year, of the data for the test set. The rolling evaluation begins by generating a 14-day-ahead forecast for the first two weeks of the test set. Then, the forecaster is advanced by some number of days into the test set and you generate another 14-day-ahead forecast from the new position. The process continues until you get to the end of the test set.
+
+To do a rolling evaluation, you call the `rolling_forecast` method of the `fitted_model`, then compute desired metrics on the result. For example, assume you have test set features in a pandas DataFrame called `test_features_df` and the test set actual values of the target in a numpy array called `test_target`. A rolling evaluation using the mean squared error is shown in the following code sample:
+
+```python
+from sklearn.metrics import mean_squared_error
+rolling_forecast_df = fitted_model.rolling_forecast(
+ test_features_df, test_target, step=1)
+mse = mean_squared_error(
+ rolling_forecast_df[fitted_model.actual_column_name], rolling_forecast_df[fitted_model.forecast_column_name])
+```
+
+In the above sample, the step size for the rolling forecast is set to 1 which means that the forecaster is advanced 1 period, or 1 day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+
+### Prediction into the future
The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function lets you specify when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. By default, the forecast_quantiles() method generates a point forecast or a mean/median forecast, which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
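For instance, a short sketch of requesting distributional forecasts, reusing `test_features_df` from the earlier example (the quantile values here are illustrative):

```python
# Request the 5th, 50th, and 95th percentile forecasts instead of a single point forecast
fitted_model.quantiles = [0.05, 0.5, 0.95]
quantile_forecasts_df = fitted_model.forecast_quantiles(test_features_df)
```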
machine-learning How To Manage Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-synapse-spark-pool.md
The **Attach Synapse Spark pool (preview)** panel will open on the right side of
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-The Azure Machine Learning CLI provides the ability to attach and manage a Synapse Spark pool from the command line interface, using intuitive YAML syntax and commands.
+With the Azure Machine Learning CLI, we can attach and manage a Synapse Spark pool from the command line interface, using intuitive YAML syntax and commands.
To define an attached Synapse Spark pool using YAML syntax, the YAML file should cover these properties:
The YAML files above can be used in the `az ml compute attach` command as the `-
az ml compute attach --file <YAML_SPECIFICATION_FILE_NAME>.yaml --subscription <SUBSCRIPTION_ID> --resource-group <RESOURCE_GROUP> --workspace-name <AML_WORKSPACE_NAME>
```
-This shows the expected output of the above command:
+This sample shows the expected output of the above command:
```azurecli
Class SynapseSparkCompute: This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
To display details of an attached Synapse Spark pool, execute the `az ml compute
az ml compute show --name <ATTACHED_SPARK_POOL_NAME> --subscription <SUBSCRIPTION_ID> --resource-group <RESOURCE_GROUP> --workspace-name <AML_WORKSPACE_NAME>
```
-This shows the expected output of the above command:
+This sample shows the expected output of the above command:
```azurecli
<ATTACHED_SPARK_POOL_NAME>
To see a list of all computes, including the attached Synapse Spark pools in a w
az ml compute list --subscription <SUBSCRIPTION_ID> --resource-group <RESOURCE_GROUP> --workspace-name <AML_WORKSPACE_NAME>
```
-This shows the expected output of the above command:
+This sample shows the expected output of the above command:
```azurecli
[
Execute the `az ml compute update` command, with appropriate parameters, to upda
az ml compute update --identity SystemAssigned --subscription <SUBSCRIPTION_ID> --resource-group <RESOURCE_GROUP> --workspace-name <AML_WORKSPACE_NAME> --name <ATTACHED_SPARK_POOL_NAME>
```
-This shows the expected output of the above command:
+This sample shows the expected output of the above command:
```azurecli
Class SynapseSparkCompute: This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
az ml compute update --identity UserAssigned --user-assigned-identities /subscri
```
-This shows the expected output of the above command:
+This sample shows the expected output of the above command:
```azurecli
Class SynapseSparkCompute: This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
We might want to detach an attached Synapse Spark pool, to clean up a workspace.
# [Studio UI](#tab/studio-ui)
-The Azure Machine Learning studio UI also provides a way to detach an attached Synapse Spark pool. To do this:
+The Azure Machine Learning studio UI also provides a way to detach an attached Synapse Spark pool. Follow these steps:
1. Open the **Details** page for the Synapse Spark pool, in the Azure Machine Learning studio.
The Azure Machine Learning studio UI also provides a way to detach an attached S
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-An attached Synapse Spark pool can be detached by executing the `az ml compute detach` command with name of the pool passed using `--name` parameter as following:
+An attached Synapse Spark pool can be detached by executing the `az ml compute detach` command, with the name of the pool passed using the `--name` parameter, as shown here:
```azurecli
az ml compute detach --name <ATTACHED_SPARK_POOL_NAME> --subscription <SUBSCRIPT
```
-This shows the expected output of the above command:
+This sample shows the expected output of the above command:
```azurecli
Are you sure you want to perform this operation? (y/n): y
Are you sure you want to perform this operation? (y/n): y
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
- An `MLClient.compute.begin_delete()` function call will do this for us. Pass the `name` of the attached Synapse Spark pool, along with the action `Detach`, to the function. This code snippet detaches a Synapse Spark pool from an Azure Machine Learning workspace:
+ We will use an `MLClient.compute.begin_delete()` function call. Pass the `name` of the attached Synapse Spark pool, along with the action `Detach`, to the function. This code snippet detaches a Synapse Spark pool from an Azure Machine Learning workspace:
```python
# import required libraries
Some user scenarios may require access to a Synapse Spark Pool, during an Azure
## Next steps -- [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](/interactive-data-wrangling-with-apache-spark-azure-ml.md)
+- [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
-- [Submit Spark jobs in Azure Machine Learning (preview)](/how-to-submit-spark-jobs.md)
+- [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md)
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-Azure Machine Learning provides the ability to submit standalone machine learning jobs or creating a [machine learning pipeline](/concept-ml-pipelines.md) comprising multiple steps in a machine learning workflow. Azure Machine Learning supports creation of a standalone Spark job, and creation of a reusable Spark component that can be used in Azure Machine Learning pipelines. In this article you will learn how to submit Spark jobs using:
+Azure Machine Learning provides the ability to submit standalone machine learning jobs or create a [machine learning pipeline](./concept-ml-pipelines.md) comprising multiple steps in a machine learning workflow. Azure Machine Learning supports creation of a standalone Spark job, and creation of a reusable Spark component that can be used in Azure Machine Learning pipelines. In this article, you will learn how to submit Spark jobs using:
- Azure Machine Learning studio UI
- Azure Machine Learning CLI
- Azure Machine Learning SDK
Azure Machine Learning provides the ability to submit standalone machine learnin
## Prerequisites

- An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin
- An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md)
-
+- [An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
- [Configure your development environment](./how-to-configure-environment.md), or [create an Azure Machine Learning compute instance](./concept-compute-instance.md#create)
- [Install the Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/installv2)
- [Install Azure Machine Learning CLI](./how-to-configure-cli.md?tabs=public)

## Ensuring resource access for Spark jobs
-Spark jobs can use either user identity passthrough or a managed identity to access data and other resource. Different mechanisms for accessing resources while using attached Synapse Spark pool and Managed (Automatic) Spark compute are summarized in the following table.
+Spark jobs can use either user identity passthrough or a managed identity to access data and other resources. Different mechanisms for accessing resources while using an attached Synapse Spark pool or Managed (Automatic) Spark compute are summarized in the following table.
|Spark pool|Supported identities|Default identity|
| - | -- | - |
armclient PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/res
> To ensure successful execution of the Spark job, the identity being used for the Spark job should be assigned **Contributor** and **Storage Blob Data Contributor** roles on the Azure storage account used for data input and output.

## Submit a standalone Spark job
-Once a Python script is developed by [interactive data wrangling](/interactive-data-wrangling-with-apache-spark-azure-ml.md), it can be used for submitting a batch job to process a larger volume of data after making necessary changes for parameterization of the Python script. A simple data wrangling batch job can be submitted as a standalone Spark job.
+Once a Python script is developed by [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md), it can be used for submitting a batch job to process a larger volume of data after making necessary changes for parameterization of the Python script. A simple data wrangling batch job can be submitted as a standalone Spark job.
-A Spark job requires a Python script that takes arguments, which can be developed by modifying the Python code developed from [interactive data wrangling](/interactive-data-wrangling-with-apache-spark-azure-ml.md). A sample Python script is shown here.
+A Spark job requires a Python script that takes arguments, which can be developed by modifying the Python code developed from [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md). A sample Python script is shown here.
```python
A standalone Spark job can be defined as a YAML specification file, which can be
- `spark.dynamicAllocation.maxExecutors` - the maximum number of Spark executor instances, for dynamic allocation.
- If dynamic allocation of executors is disabled, define this property:
  - `spark.executor.instances` - the number of Spark executor instances.
-- `environment` - an [Azure Machine Learning environment](/reference-yaml-environment) to run the job.
+- `environment` - an [Azure Machine Learning environment](./reference-yaml-environment.md) to run the job.
- `args` - the command line arguments that should be passed to the job entry point Python script or class. See the YAML specification file provided below for an example.
- `compute` - this property defines the name of an attached Synapse Spark pool, as shown in this example:

  ```yaml
To submit a standalone Spark job using the Azure Machine Learning studio UI:
1. Select **Create** to submit the standalone Spark job.

## Spark component in a pipeline job
-A Spark component allows the flexibility to use the same component in multiple [Azure Machine Learning pipelines](/concept-ml-pipelines) as a pipeline step.
+A Spark component allows the flexibility to use the same component in multiple [Azure Machine Learning pipelines](./concept-ml-pipelines.md) as a pipeline step.
# [Azure CLI](#tab/cli)
conf:
```
-The Spark component defined in the above YAML specification file can be used in an Azure Machine Learning pipeline job. See [pipeline job YAML schema](/reference-yaml-job-pipeline.md) to learn more about the YAML syntax that defines a pipeline job. This is an example YAML specification file for a pipeline job, with a Spark component:
+The Spark component defined in the above YAML specification file can be used in an Azure Machine Learning pipeline job. See [pipeline job YAML schema](./reference-yaml-job-pipeline.md) to learn more about the YAML syntax that defines a pipeline job. This is an example YAML specification file for a pipeline job, with a Spark component:
```yaml
machine-learning Interactive Data Wrangling With Apache Spark Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md
df.head()
- [Code samples for interactive data wrangling with Apache Spark in Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/sdk/python/data-wrangling)
- [Optimize Apache Spark jobs in Azure Synapse Analytics](../synapse-analytics/spark/apache-spark-performance.md)
-- [What are Azure Machine Learning pipelines?](/concept-ml-pipelines.md)
-- [Submit Spark jobs in Azure Machine Learning (preview)](/how-to-submit-spark-jobs.md)
+- [What are Azure Machine Learning pipelines?](./concept-ml-pipelines.md)
+- [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md)
purview Catalog Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-asset-details.md
Microsoft Purview makes it easy to work with useful data you find the data catal
If you're a data curator on the collection containing an asset, you can delete an asset by selecting the delete icon under the name of the asset.
+> [!IMPORTANT]
+> You cannot delete an asset that has child assets.
+>
+> Currently, Microsoft Purview doesn't support cascaded deletes. For example, if you attempt to delete a storage account asset in your catalog, the containers, folders, and files within them will still exist in the data map, and the storage account asset will still exist in relation to them.
+ Any asset you delete using the delete button is permanently deleted in Microsoft Purview. However, if you run a **full scan** on the source from which the asset was ingested into the catalog, the asset is reingested and you can discover it using the Microsoft Purview catalog. If you have a scheduled scan (weekly or monthly) on the source, the **deleted asset won't get reingested** into the catalog unless the asset has been modified by an end user since the previous run of the scan. For example, say you manually delete a SQL table from the Microsoft Purview Data Map. Later, a data engineer adds a new column to the source table. When Microsoft Purview scans the database, the table is reingested into the data map and becomes discoverable in the data catalog.
-If you delete an asset, only that asset is deleted. Microsoft Purview doesn't currently support cascaded deletes. For example, if you delete a storage account asset in your catalog - the containers, folders and files within them will still exist in the data map and be discoverable in the data catalog.
## Next steps
purview How To Share Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-share-data.md
This registration is only needed the first time when sharing or receiving data i
:::image type="content" source="./media/how-to-share-data/create-share-edit-asset-name.png" alt-text="Screenshot showing the add assets second page, with the asset paths listed and the display name bars available to edit." border="true":::
-1. Select **Add Recipient**. Enter the Azure log in email address of who you want to share data with. Select **Create and Share**. Optionally, you can specify an **Expiration date** for when to terminate the share. You can share the same data with multiple recipients by clicking on **Add Recipient** multiple times.
+1. Select **Add Recipient** and select **User**. Enter the Azure login email address of the person you want to share data with. By default, the option to enter a user's email address is shown.
- > [!NOTE]
- > In Microsoft Purview governance portal, you can only use user's Azure login email address as recipient. In Microsoft Purview SDK or API, you can use object ID of the user or service principal as a recipient, and you can also optionally specify a target tenant ID (i.e. the Azure tenant recipient can receive the share into).
+
+Select **Add Recipient** and select **App** if you want to share data with a service principal. Enter the object ID and tenant ID of the recipient you want to share data with.
+
- :::image type="content" source="./media/how-to-share-data/create-share-add-recipient.png" alt-text="Screenshot showing the add recipients page, with the add recipient button highlighted, two users added." border="true":::
+Select **Create and Share**. Optionally, you can specify an **Expiration date** for when to terminate the share. You can share the same data with multiple recipients by selecting **Add Recipient** multiple times.
You've now created your share. The recipients of your share will receive an invitation and they can view the pending share in their Microsoft Purview account.
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|| [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No | No |
|| [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | No | [Yes](register-scan-snowflake.md#lineage) | No | No |
|| [SQL Server](register-scan-on-premises-sql-server.md)| [Yes](register-scan-on-premises-sql-server.md#register) |[Yes](register-scan-on-premises-sql-server.md#scan) | No* | No| No |
-|| **SQL Server on Azure-Arc**| No |No | No |Preview: [1.DevOps policies](how-to-policies-devops-arc-sql-server.md) [2.Data Owner](how-to-policies-data-owner-arc-sql-server.md) | No |
+|| [SQL Server on Azure-Arc](register-scan-azure-arc-enabled-sql-server.md)| [Yes](register-scan-azure-arc-enabled-sql-server.md#register) | [Yes](register-scan-azure-arc-enabled-sql-server.md#scan) | No* |Preview: [1.DevOps policies](how-to-policies-devops-arc-sql-server.md) [2.Data Owner](how-to-policies-data-owner-arc-sql-server.md) | No |
|| [Teradata](register-scan-teradata-source.md)| [Yes](register-scan-teradata-source.md#register)| [Yes](register-scan-teradata-source.md#scan)| [Yes*](register-scan-teradata-source.md#lineage) | No| No |
|File|[Amazon S3](register-scan-amazon-s3.md)|[Yes](register-scan-amazon-s3.md)| [Yes](register-scan-amazon-s3.md)| Limited* | No| No |
||[HDFS](register-scan-hdfs.md)|[Yes](register-scan-hdfs.md)| [Yes](register-scan-hdfs.md)| No | No| No |
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
+
+ Title: Connect to and manage Azure Arc-enabled SQL Server instances
+description: This guide describes how to connect to Azure Arc-enabled SQL Server in Microsoft Purview, and use Microsoft Purview's features to scan and manage your Azure Arc-enabled SQL Server source.
+++++ Last updated : 11/07/2022+++
+# Connect to and manage an Azure Arc-enabled SQL Server instance in Microsoft Purview (Public preview)
++
+This article outlines how to register Azure Arc-enabled SQL Server instances, and how to authenticate and interact with an Azure Arc-enabled SQL Server instance in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
+
+## Supported capabilities
+
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
+|---|---|---|---|---|---|---|---|
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [1.DevOps policies](how-to-policies-devops-arc-sql-server.md) [2.Data Owner](how-to-policies-data-owner-arc-sql-server.md) | Limited** | No |
+
+\** Lineage is supported if the dataset is used as a source/sink in a [Data Factory Copy activity](how-to-link-azure-data-factory.md).
+
+The supported SQL Server versions are 2012 and above. SQL Server Express LocalDB is not supported.
+
+When scanning Azure Arc-enabled SQL Server, Microsoft Purview supports:
+
+- Extracting technical metadata including:
+
+ - Instance
+ - Databases
+ - Schemas
+ - Tables including the columns
+ - Views including the columns
+
+When setting up a scan, you can choose to specify the database name to scan one database, and you can further scope the scan by selecting tables and views as needed. The whole Azure Arc-enabled SQL Server instance will be scanned if a database name is not provided.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* An active [Microsoft Purview account](create-catalog-portal.md).
+
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+
+* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
+
+## Register
+
+This section describes how to register an Azure Arc-enabled SQL Server instance in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
+
+### Authentication for registration
+
+There are two ways to set up authentication for scanning Azure Arc-enabled SQL Server with a self-hosted integration runtime:
+
+- SQL Authentication
+- Windows Authentication
+
+#### Set up SQL Server authentication
+
+If SQL Authentication is applied, ensure the SQL Server deployment is configured to allow SQL Server and Windows Authentication.
+
+To enable this, within SQL Server Management Studio (SSMS), navigate to "Server Properties" and change from "Windows Authentication Mode" to "SQL Server and Windows Authentication mode".
++
+If Windows Authentication is applied, configure the SQL Server deployment to use Windows Authentication mode.
+
+A change to the Server Authentication will require a restart of the SQL Server instance and SQL Server Agent. This can be triggered within SSMS by navigating to the SQL Server instance and selecting "Restart" within the right-click options pane.
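To verify which mode is currently in effect, you can query the standard `SERVERPROPERTY` function (a minimal T-SQL sketch):

```sql
-- Returns 1 when only Windows Authentication is enabled,
-- 0 when SQL Server and Windows Authentication mode (mixed mode) is enabled
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;
```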
+
+##### Creating a new login and user
+
+If you would like to create a new login and user to be able to scan your SQL Server instance, follow the steps below:
+
+The account must have access to the **master** database, because the `sys.databases` catalog view is in the master database. The Microsoft Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
+
+> [!Note]
+> All the steps below can be executed using the code provided [here](https://github.com/Azure/Purview-Samples/blob/master/TSQL-Code-Permissions/grant-access-to-on-prem-sql-databases.sql)
+
+1. Navigate to SQL Server Management Studio (SSMS), connect to the server, navigate to **Security**, select and hold (or right-click) **Logins**, and create a new login. If Windows Authentication is applied, select "Windows authentication". If SQL Authentication is applied, make sure to select "SQL authentication".
+
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/create-new-login-user.png" alt-text="Screenshot that shows how to create a new login and user.":::
+
+1. Select Server roles on the left navigation and ensure that public role is assigned.
+
+1. Select User mapping on the left navigation, select all the databases in the map and select the Database role: **db_datareader**.
+
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/user-mapping.png" alt-text="Screenshot that shows user mapping.":::
+
+1. Select OK to save.
+
+1. If SQL Authentication is applied, navigate again to the user you created by selecting and holding (or right-clicking) it and selecting **Properties**. Enter a new password and confirm it. Select the 'Specify old password' option and enter the old password. **You must change your password as soon as you create a new login.**
+
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/change-password.png" alt-text="Screenshot that shows how to change a password.":::
+
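For reference, here is a minimal T-SQL sketch of what the steps above produce; the login name is an illustrative assumption, and the linked sample script remains the authoritative version:

```sql
-- Create a login for scanning (assumed name; use a strong password)
CREATE LOGIN [purview-scanner] WITH PASSWORD = '<strong password>';
GO

-- The scanner enumerates sys.databases, so grant read access in master
USE master;
CREATE USER [purview-scanner] FOR LOGIN [purview-scanner];
ALTER ROLE db_datareader ADD MEMBER [purview-scanner];
GO

-- Repeat CREATE USER and ALTER ROLE in each database that will be scanned
```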
+##### Storing your SQL login password in a key vault and creating a credential in Microsoft Purview
+
+1. Navigate to your key vault in the Azure portal
+1. Select **Settings > Secrets**
+1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *password* from your SQL server login
+1. Select **Create** to complete
+1. If your key vault is not connected to Microsoft Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the **username** and **password** to set up your scan. Make sure the right authentication method is selected when creating a new credential. If SQL Authentication is applied, select "SQL authentication" as the authentication method. If Windows Authentication is applied, then select "Windows authentication".
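If you prefer the CLI for the secret steps, here is a hedged equivalent of steps 2 and 3; the vault and secret names are assumptions:

```azurecli
az keyvault secret set --vault-name <key-vault-name> --name sql-scan-password --value "<password>"
```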
+
+### Steps to register
+
+1. Navigate to your Microsoft Purview account
+
+1. Under Sources and scanning in the left navigation, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it is not set up, follow the steps mentioned [here](manage-integration-runtimes.md) to create a self-hosted integration runtime for scanning on an on-premises or Azure VM that has access to your on-premises network.
+
+1. Select **Data Map** on the left navigation.
+
+1. Select **Register**
+
+1. Select **SQL server** and then **Continue**
+
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/set-up-azure-arc-enabled-sql-data-source.png" alt-text="Screenshot that shows how to set up the SQL data source.":::
+
+1. Provide a friendly name, which will be a short name you can use to identify your server, and the server endpoint.
+
+1. Select **Finish** to register the data source.
+
+## Scan
+
+Follow the steps below to scan Azure Arc-enabled SQL Server instances to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+
+### Create and run scan
+
+To create and run a new scan, do the following:
+
+1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
+
+1. Select the SQL Server source that you registered.
+
+1. Select **New scan**
+
+1. Select the credential to connect to your data source. The credentials are grouped and listed under different authentication methods.
+
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/azure-arc-enabled-sql-set-up-scan-win-auth.png" alt-text="Screenshot that shows how to set up a scan.":::
+
+1. You can scope your scan to specific tables by choosing the appropriate items in the list.
+
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/azure-arc-enabled-sql-scope-your-scan.png" alt-text="Screenshot that shows how to scope your scan.":::
+
+1. Then select a scan rule set. You can choose between the system default, existing custom rule sets, or create a new rule set inline.
+
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/azure-arc-enabled-sql-scan-rule-set.png" alt-text="Screenshot that shows the scan rule set.":::
+
+1. Choose your scan trigger. You can set up a schedule or run the scan once.
+
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/trigger-scan.png" alt-text="Screenshot that shows how to choose a trigger.":::
+
+1. Review your scan and select **Save and run**.
++
+## Next steps
+
+Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
+- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
+- [Search Data Catalog](how-to-search-catalog.md)
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [AcrPush](#acrpush) | Push artifacts to or pull artifacts from a container registry. | 8311e382-0749-4cb8-b61a-304f252e45ec |
> | [AcrQuarantineReader](#acrquarantinereader) | Pull quarantined images from a container registry. | cdda3590-29a3-44f6-95f2-9f980659eb04 |
> | [AcrQuarantineWriter](#acrquarantinewriter) | Push quarantined images to or pull quarantined images from a container registry. | c8d4ff99-41c3-41a8-9f60-21dfdad59608 |
-> | [Azure Kubernetes Fleet Manager RBAC Admin](#azure-kubernetes-fleet-manager-rbac-admin) | This role grants admin access - provides write permissions on most objects within a a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces. | 434fb43a-c01c-447e-9f67-c3ad923cfaba |
+> | [Azure Kubernetes Fleet Manager RBAC Admin](#azure-kubernetes-fleet-manager-rbac-admin) | This role grants admin access - provides write permissions on most objects within a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces. | 434fb43a-c01c-447e-9f67-c3ad923cfaba |
> | [Azure Kubernetes Fleet Manager RBAC Cluster Admin](#azure-kubernetes-fleet-manager-rbac-cluster-admin) | Lets you manage all resources in the fleet manager cluster. | 18ab4d3d-a1bf-4477-8ad9-8359bc988f69 |
> | [Azure Kubernetes Fleet Manager RBAC Reader](#azure-kubernetes-fleet-manager-rbac-reader) | Allows read-only access to see most objects in a namespace. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces. | 30b27cfc-9c84-438e-b0ce-70e35255df80 |
-> | [Azure Kubernetes Fleet Manager RBAC Writer](#azure-kubernetes-fleet-manager-rbac-writer) | Allows read/write access to most objects in a namespace.This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces. | 5af6afb3-c06c-4fa4-8848-71a8aee05683 |
+> | [Azure Kubernetes Fleet Manager RBAC Writer](#azure-kubernetes-fleet-manager-rbac-writer) | Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces. | 5af6afb3-c06c-4fa4-8848-71a8aee05683 |
> | [Azure Kubernetes Service Cluster Admin Role](#azure-kubernetes-service-cluster-admin-role) | List cluster admin credential action. | 0ab0b1a8-8aac-4efd-b8c2-3ee1fb270be8 |
> | [Azure Kubernetes Service Cluster User Role](#azure-kubernetes-service-cluster-user-role) | List cluster user credential action. | 4abbcc35-e782-43d8-92c5-2d3f1bd2253f |
> | [Azure Kubernetes Service Contributor Role](#azure-kubernetes-service-contributor-role) | Grants access to read and write Azure Kubernetes Service clusters | ed7f3fbd-7b88-4dd4-9017-9adb7ce333f8 |
> | [Azure Kubernetes Service RBAC Admin](#azure-kubernetes-service-rbac-admin) | Lets you manage all resources under cluster/namespace, except update or delete resource quotas and namespaces. | 3498e952-d568-435e-9b2c-8d77e338d7f7 |
> | [Azure Kubernetes Service RBAC Cluster Admin](#azure-kubernetes-service-rbac-cluster-admin) | Lets you manage all resources in the cluster. | b1ff04bb-8a4e-4dc4-8eb5-8693973ce19b |
> | [Azure Kubernetes Service RBAC Reader](#azure-kubernetes-service-rbac-reader) | Allows read-only access to see most objects in a namespace. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces. | 7f6c6a51-bcf8-42ba-9220-52d62157d7db |
-> | [Azure Kubernetes Service RBAC Writer](#azure-kubernetes-service-rbac-writer) | Allows read/write access to most objects in a namespace.This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces. | a7ffa36f-339b-4b5c-8bdf-e2c188b2c0eb |
+> | [Azure Kubernetes Service RBAC Writer](#azure-kubernetes-service-rbac-writer) | Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces. | a7ffa36f-339b-4b5c-8bdf-e2c188b2c0eb |
> | **Databases** | | |
> | [Azure Connected SQL Server Onboarding](#azure-connected-sql-server-onboarding) | Allows for read and write access to Azure resources for SQL Server on Arc-enabled servers. | e8113dce-c529-4d33-91fa-e9b972617508 |
> | [Cosmos DB Account Reader Role](#cosmos-db-account-reader-role) | Can read Azure Cosmos DB account data. See [DocumentDB Account Contributor](#documentdb-account-contributor) for managing Azure Cosmos DB accounts. | fbdf93bf-df7d-467e-a4d2-9458aa1360c8 |
The following table provides a brief description of each built-in role. Click th
> | [Workbook Contributor](#workbook-contributor) | Can save shared workbooks. | e8ddcd69-c73f-4f9f-9844-4100522f16ad |
> | [Workbook Reader](#workbook-reader) | Can read workbooks. | b279062a-9be3-42a0-92ae-8b3cf002ec4d |
> | **Management and governance** | | |
-> | [Automation Contributor](#automation-contributor) | Manage azure automation resources and other resources using azure automation. | f353d9bd-d4a6-484e-a77a-8050b599b867 |
+> | [Automation Contributor](#automation-contributor) | Manage Azure Automation resources and other resources using Azure Automation. | f353d9bd-d4a6-484e-a77a-8050b599b867 |
> | [Automation Job Operator](#automation-job-operator) | Create and Manage Jobs using Automation Runbooks. | 4fe576fe-1146-4730-92eb-48519fa6bf9f |
> | [Automation Operator](#automation-operator) | Automation Operators are able to start, stop, suspend, and resume jobs | d3881f73-407a-4167-8283-e981cbba0404 |
> | [Automation Runbook Operator](#automation-runbook-operator) | Read Runbook properties - to be able to create Jobs of the runbook. | 5fb5aef8-1081-4b8e-bb16-9d5d0385bab5 |
The following table provides a brief description of each built-in role. Click th
> | [Azure Digital Twins Data Owner](#azure-digital-twins-data-owner) | Full access role for Digital Twins data-plane | bcd981a7-7f74-457b-83e1-cceb9e632ffe |
> | [Azure Digital Twins Data Reader](#azure-digital-twins-data-reader) | Read-only role for Digital Twins data-plane properties | d57506d4-4c8d-48b1-8587-93c323f6a5a3 |
> | [BizTalk Contributor](#biztalk-contributor) | Lets you manage BizTalk services, but not access to them. | 5e3c6656-6cfa-4708-81fe-0de47ac73342 |
+> | [Load Test Contributor](#load-test-contributor) | View, create, update, delete and execute load tests. View and list load test resources but can not make any changes. | 749a398d-560b-491b-bb21-08924219302e |
+> | [Load Test Owner](#load-test-owner) | Execute all operations on load test resources and load tests. | 45bb0b16-2f0c-4e78-afaa-a07599b003f6 |
+> | [Load Test Reader](#load-test-reader) | View and list all load tests and load test resources but can not make any changes. | 3ae3fb29-0000-4ccd-bf80-542e7b26e081 |
> | [Scheduler Job Collections Contributor](#scheduler-job-collections-contributor) | Lets you manage Scheduler job collections, but not access to them. | 188a0f2f-5c9e-469b-ae67-2aa5ce574b94 |
> | [Services Hub Operator](#services-hub-operator) | Services Hub Operator allows you to perform all read, write, and deletion operations related to Services Hub Connectors. | 82200a5b-e217-47a5-b665-6d8765ee745b |
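The full definition of any of these roles can also be retrieved with the Azure CLI, for example (a hedged sketch):

```azurecli
az role definition list --name "Load Test Contributor" --output json
```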
Push quarantined images to or pull quarantined images from a container registry.
### Azure Kubernetes Fleet Manager RBAC Admin
-This role grants admin access - provides write permissions on most objects within a a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.
+This role grants admin access - provides write permissions on most objects within a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.
> [!div class="mx-tableFixed"]
> | Actions | Description |
This role grants admin access - provides write permissions on most objects withi
"assignableScopes": [ "/" ],
- "description": "This role grants admin access - provides write permissions on most objects within a a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.",
+ "description": "This role grants admin access - provides write permissions on most objects within a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/434fb43a-c01c-447e-9f67-c3ad923cfaba", "name": "434fb43a-c01c-447e-9f67-c3ad923cfaba", "permissions": [
Allows read-only access to see most objects in a namespace. It does not allow vi
### Azure Kubernetes Fleet Manager RBAC Writer
-Allows read/write access to most objects in a namespace.This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.
+Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.
> [!div class="mx-tableFixed"]
> | Actions | Description |
Allows read/write access to most objects in a namespace.This role does not allow
"assignableScopes": [ "/" ],
- "description": "Allows read/write access to most objects in a namespace.This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.",
+ "description": "Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/5af6afb3-c06c-4fa4-8848-71a8aee05683", "name": "5af6afb3-c06c-4fa4-8848-71a8aee05683", "permissions": [
Allows read-only access to see most objects in a namespace. It does not allow vi
### Azure Kubernetes Service RBAC Writer
-Allows read/write access to most objects in a namespace.This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces. [Learn more](../aks/manage-azure-rbac.md)
+Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces. [Learn more](../aks/manage-azure-rbac.md)
> [!div class="mx-tableFixed"]
> | Actions | Description |
Allows read/write access to most objects in a namespace.This role does not allow
"assignableScopes": [ "/" ],
- "description": "Allows read/write access to most objects in a namespace.This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.",
+ "description": "Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/a7ffa36f-339b-4b5c-8bdf-e2c188b2c0eb", "name": "a7ffa36f-339b-4b5c-8bdf-e2c188b2c0eb", "permissions": [
Lets you manage BizTalk services, but not access to them.
}
```
+### Load Test Contributor
+
+View, create, update, delete and execute load tests. View and list load test resources but can not make any changes.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | Microsoft.LoadTestService/*/read | Read load testing resources |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | Microsoft.LoadTestService/loadtests/* | Create and manage load tests |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/749a398d-560b-491b-bb21-08924219302e",
+ "properties": {
+ "roleName": "Load Test Contributor",
+ "description": "View, create, update, delete and execute load tests. View and list load test resources but can not make any changes.",
+ "assignableScopes": [
+ "/"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.LoadTestService/*/read",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Insights/alertRules/*"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.LoadTestService/loadtests/*"
+ ],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+
+### Load Test Owner
+
+Execute all operations on load test resources and load tests.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | Microsoft.LoadTestService/* | Create and manage load testing resources |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | Microsoft.LoadTestService/loadtests/* | Create and manage load tests |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/45bb0b16-2f0c-4e78-afaa-a07599b003f6",
+ "properties": {
+ "roleName": "Load Test Owner",
+ "description": "Execute all operations on load test resources and load tests",
+ "assignableScopes": [
+ "/"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.LoadTestService/*",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Insights/alertRules/*"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.LoadTestService/*"
+ ],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+
+### Load Test Reader
+
+View and list all load tests and load test resources but can not make any changes.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | Microsoft.LoadTestService/*/Read | Read load testing resources |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | Microsoft.LoadTestService/loadtests/readTest/action | Read load tests |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/3ae3fb29-0000-4ccd-bf80-542e7b26e081",
+ "properties": {
+ "roleName": "Load Test Reader",
+ "description": "View and list all load tests and load test resources but can not make any changes",
+ "assignableScopes": [
+ "/"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.LoadTestService/*/read",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Insights/alertRules/*"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.LoadTestService/loadtests/readTest/action"
+ ],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
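As a usage sketch, one of these roles could be assigned at the scope of a load testing resource with the Azure CLI; all identifiers below are placeholders:

```azurecli
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Load Test Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.LoadTestService/loadTests/<load-test-resource>"
```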
+
### Scheduler Job Collections Contributor

Lets you manage Scheduler job collections, but not access to them.
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
After selecting **Save** you'll see an Object ID that has been assigned to your
The SharePoint indexer supports both [delegated and application](/graph/auth/auth-concepts#delegated-and-application-permissions) permissions. Choose which permissions you want to use based on your scenario:
-+ Delegated permissions, where the indexer runs under the identity of the user or app sending the request. Data access is limited to the sites and files to which the user has access. To support deleted permissions, the indexer requires a [device code prompt](../active-directory/develop/v2-oauth2-device-code.md) to sign in on behalf of the user.
++ Delegated permissions, where the indexer runs under the identity of the user or app sending the request. Data access is limited to the sites and files to which the user has access. To support delegated permissions, the indexer requires a [device code prompt](../active-directory/develop/v2-oauth2-device-code.md) to sign in on behalf of the user.
+ Application permissions, where the indexer runs under the identity of the SharePoint tenant with access to all sites and files within the SharePoint tenant. The indexer requires a [client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) to access the SharePoint tenant. The indexer will also require [tenant admin approval](../active-directory/manage-apps/grant-admin-consent.md) before it can index any content.
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Last updated 04/12/2022
This article introduces you to the process of deploying the Microsoft Sentinel Solution for SAP. The full process is detailed in a whole set of articles linked under [Deployment milestones](#deployment-milestones) below.
+> [!NOTE]
+> If needed, you can [update an existing Microsoft Sentinel for SAP data connector](update-sap-data-connector.md) to its latest version.
+
## Overview

**Microsoft Sentinel Solution for SAP** is a [Microsoft Sentinel solution](../sentinel-solutions.md) that you can use to monitor your SAP systems and detect sophisticated threats throughout the business logic and application layers. The solution includes the following components:
The Microsoft Sentinel for SAP data connector is an agent, installed on a VM or
## Deployment milestones
-Follow your deployment journey through this series of articles, in which you'll learn how to navigate each of the following steps:
+Follow your deployment journey through this series of articles, in which you'll learn how to navigate each of the following steps.
+
+> [!NOTE]
+> If needed, you can [update an existing Microsoft Sentinel for SAP data connector](update-sap-data-connector.md) to its latest version.
| Milestone | Article |
| | - |
Follow your deployment journey through this series of articles, in which you'll
| **3. Prepare SAP environment** | [Deploying SAP CRs and configuring authorization](preparing-sap.md) |
| **4. Deploy data connector agent** | [Deploy and configure the container hosting the data connector agent](deploy-data-connector-agent-container.md) |
| **5. Deploy SAP security content** | [Deploy SAP security content](deploy-sap-security-content.md)
-| **6. Microsoft Sentinel Solution for SAP** | [Configure Microsoft Sentinel Solution for SAP](deployment-solution-configuration.md)
+| **6. Microsoft Sentinel Solution for SAP** | [Configure Microsoft Sentinel Solution for SAP](deployment-solution-configuration.md) |
| **7. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure Microsoft Sentinel for SAP data connector to use SNC](configure-snc.md)<br>- [Configure audit log monitoring rules](configure-audit-log-rules.md)

## Next steps
sentinel Update Sap Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/update-sap-data-connector.md
description: This article shows you how to update an already existing SAP data c
Previously updated : 03/02/2022 Last updated : 11/07/2022

# Update Microsoft Sentinel's SAP data connector agent
site-recovery Azure To Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-architecture.md
Last updated 4/28/2022 +
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
Title: Common questions about Azure VM disaster recovery with Azure Site Recovery description: This article answers common questions about Azure VM disaster recovery when you use Azure Site Recovery.+ Last updated 04/28/2022
site-recovery Azure To Azure Enable Global Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-enable-global-disaster-recovery.md
Last updated 08/09/2021 + # Enable global disaster recovery using Azure Site Recovery
site-recovery Azure To Azure Enable Replication Added Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-enable-replication-added-disk.md
Title: Enable replication for an added Azure VM disk in Azure Site Recovery description: This article describes how to enable replication for a disk added to an Azure VM that's enabled for disaster recovery with Azure Site Recovery+
site-recovery Azure To Azure Exclude Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-exclude-disks.md
Title: Exclude Azure VM disks from replication with Azure Site Recovery and Azure PowerShell description: Learn how to exclude disks of Azure virtual machines during Azure Site Recovery by using Azure PowerShell.+
site-recovery Azure To Azure How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md
additional fully qualified domain names are added to the same private endpoint.
The five domain names are formatted with the following pattern:
-`{Vault-ID}-asr-pod01-{type}-.{target-geo-code}.siterecovery.windowsazure.com`
+`{Vault-ID}-asr-pod01-{type}-.{target-geo-code}.privatelink.siterecovery.windowsazure.com`
## Approve private endpoints for Site Recovery
domain names to private IPs.
private DNS zone. These fully qualified domain names match the pattern:
- `{Vault-ID}-asr-pod01-{type}-.{target-geo-code}.siterecovery.windowsazure.com`
+ `{Vault-ID}-asr-pod01-{type}-.{target-geo-code}.privatelink.siterecovery.windowsazure.com`
:::image type="content" source="./media/azure-to-azure-how-to-enable-replication-private-endpoints/add-record-set.png" alt-text="Shows the page to add a DNS A type record for the fully qualified domain name to the private endpoint in the Azure portal.":::
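As a hedged CLI alternative to the portal step, an A record could be added to the private DNS zone as follows; the record name instantiates the pattern above and, like the IP address, is an assumption:

```azurecli
az network private-dns record-set a add-record \
  --resource-group <resource-group> \
  --zone-name "privatelink.siterecovery.windowsazure.com" \
  --record-set-name "<vault-id>-asr-pod01-id-<target-geo-code>" \
  --ipv4-address 10.0.0.5
```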
Now that you've enabled private endpoints for your virtual machine replication,
pages for additional and related information:

- [Replicate Azure VMs to another Azure region](./azure-to-azure-how-to-enable-replication.md)
-- [Tutorial: Set up disaster recovery for Azure VMs](./azure-to-azure-tutorial-enable-replication.md)
+- [Tutorial: Set up disaster recovery for Azure VMs](./azure-to-azure-tutorial-enable-replication.md)
site-recovery Azure To Azure How To Enable Replication S2d Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-s2d-vms.md
Title: Replicate Azure VMs running Storage Spaces Direct with Azure Site Recovery description: Learn how to replicate Azure VMs running Storage Spaces Direct using Azure Site Recovery.+
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Title: Configure replication for Azure VMs in Azure Site Recovery description: Learn how to configure replication to another region for Azure VMs, using Site Recovery.+
site-recovery Azure To Azure Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-quickstart.md
Last updated 05/02/2022 + # Quickstart: Set up disaster recovery to a secondary Azure region for an Azure VM
site-recovery Azure To Azure Replicate After Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-replicate-after-migration.md
Last updated 11/14/2019 + # Set up disaster recovery for Azure VMs after migration to Azure
site-recovery Azure To Azure Tutorial Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-failback.md
description: Tutorial to learn about failing back Azure VMs to a primary region
Last updated 11/05/2020 ++ #Customer intent: As an Azure admin, I want to fail back VMs to the primary region after running a failover to a secondary region.
site-recovery Azure To Azure Tutorial Failover Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-failover-failback.md
description: Tutorial to learn how to fail over and reprotect Azure VMs replicat
Last updated 11/05/2020 ++ #Customer intent: As an Azure admin, I want to run a production failover of Azure VMs to a secondary Azure region.
site-recovery Concepts Types Of Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-types-of-failback.md
description: This article provides an overview of various types of failback and
Last updated 08/07/2019++
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
description: This article describes support and requirements when deploying the
Last updated 09/21/2022++ # Deploy Azure Site Recovery replication appliance - Modernized
site-recovery Exclude Disks Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/exclude-disks-replication.md
Title: Exclude disks from replication with Azure Site Recovery
description: How to exclude disks from replication to Azure with Azure Site Recovery. Last updated 12/17/2019-++ # Exclude disks from disaster recovery
site-recovery Failover Failback Overview Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/failover-failback-overview-modernized.md
Title: About failover and failback in Azure Site Recovery - Modernized
description: Learn about failover and failback in Azure Site Recovery - Modernized Last updated 09/21/2022-++ # About on-premises disaster recovery failover/failback - Modernized
site-recovery Failover Failback Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/failover-failback-overview.md
Title: About failover and failback in Azure Site - Classic
description: Learn about failover and failback in Azure Site Recovery - Classic Last updated 06/30/2021-++ # About on-premises disaster recovery failover/failback - Classic
site-recovery How To Enable Replication Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
Title: Replicate Azure VMs running in a proximity placement group description: Learn how to replicate Azure VMs running in proximity placement groups by using Azure Site Recovery.+
site-recovery How To Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-move-from-classic-to-modernized-vmware-disaster-recovery.md
Title: How to move from classic to modernized VMware disaster recovery? description: This article describes how to move from classic to modernized VMware disaster recovery.+
site-recovery Hyper V Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-architecture.md
description: This article provides an overview of components and architecture us
Last updated 11/14/2019-++
site-recovery Hyper V Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-common-questions.md
Title: Common questions for Hyper-V disaster recovery with Azure Site Recovery
description: This article summarizes common questions about setting up disaster recovery for on-premises Hyper-V VMs to Azure using the Azure Site Recovery site. Last updated 11/12/2019 -++ # Common questions - Hyper-V to Azure disaster recovery
site-recovery Hyper V Azure Failover Failback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-failover-failback-tutorial.md
Last updated 12/16/2019 ++ # Fail over Hyper-V VMs to Azure
site-recovery Hyper V Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md
Last updated 11/12/2019 ++ # Set up disaster recovery of on-premises Hyper-V VMs to Azure
site-recovery Hyper V Prepare On Premises Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-prepare-on-premises-tutorial.md
Last updated 11/12/2019 ++
site-recovery Hyper V Vmm Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-architecture.md
description: This article provides an overview of the architecture for disaster
Last updated 11/12/2019++ # Architecture - Hyper-V replication to a secondary site
site-recovery Hyper V Vmm Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-azure-tutorial.md
description: Learn how to set up disaster recovery of on-premises Hyper-V VMs in
Last updated 03/19/2020 ++ # Set up disaster recovery of on-premises Hyper-V VMs in VMM clouds to Azure
site-recovery Hyper V Vmm Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-disaster-recovery.md
description: Learn how to set up disaster recovery for Hyper-V VMs between your
Last updated 11/14/2019-++ # Set up disaster recovery for Hyper-V VMs to a secondary on-premises site
site-recovery Hyper V Vmm Failover Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-failover-failback.md
Last updated 11/14/2019-++ # Fail over and fail back Hyper-V VMs replicated to your secondary on-premises site
site-recovery Hyper V Vmm Network Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-network-mapping.md
description: Describes how to set up network mapping for disaster recovery of Hy
Last updated 11/14/2019-++
site-recovery Hyper V Vmm Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-networking.md
description: Describes how to set up IP addressing for connecting to VMs in a se
Last updated 11/12/2019-++ # Set up IP addressing to connect to a secondary on-premises site after failover
site-recovery Hyper V Vmm Secondary Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-secondary-support-matrix.md
description: Summarizes support for Hyper-V VM replication in VMM clouds to a se
Last updated 11/06/2019++ # Support matrix for disaster recovery of Hyper-V VMs to a secondary site
site-recovery Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/migrate-overview.md
Last updated 08/06/2020-++ # Migrating to Azure
site-recovery Migrate Tutorial Aws Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/migrate-tutorial-aws-azure.md
Last updated 07/27/2019 -++ # Migrate Amazon Web Services (AWS) VMs to Azure
site-recovery Migrate Tutorial On Premises Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/migrate-tutorial-on-premises-azure.md
description: This article summarizes how to migrate on-premises machines to Azur
Last updated 07/27/2020-++ # Migrate on-premises machines to Azure
site-recovery Migrate Tutorial Windows Server 2008 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/migrate-tutorial-windows-server-2008.md
Last updated 07/27/2020 ++
site-recovery Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-log-analytics.md
description: Learn how to monitor Azure Site Recovery with Azure Monitor Logs (L
Last updated 11/15/2019-++ # Monitor Site Recovery with Azure Monitor Logs
site-recovery Monitoring Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitoring-common-questions.md
Last updated 07/31/2019 ++ # Common questions about Site Recovery monitoring
site-recovery Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-from-classic-to-modernized-vmware-disaster-recovery.md
Title: Move from classic to modernized VMware disaster recovery.
description: Learn about the architecture, necessary infrastructure, and FAQs about moving your VMware replications from classic to modernized protection architecture. Previously updated : 07/15/2022 Last updated : 11/04/2022+ # Move from classic to modernized VMware disaster recovery  
The components involved in the migration of replicated items of a VMware machine
|Component|Requirement|
||-|
-|Replicated items in a classic Recovery Services vault|One or more replicated items that are protected using the classic architecture and a healthy configuration server.<br></br>The replicated item should be in a non-critical state and must be replicated from on-premises to Azure with the mobility agent running on version 9.50 or later.|
+|Replicated items in a classic Recovery Services vault| One or more replicated items that are protected using the classic architecture and a healthy configuration server.<br></br>The replicated item should be in a non-critical state and must be replicated from on-premises to Azure with the mobility agent running on version 9.50 or later.|
|Configuration server used by the replicated items|The configuration server, used by the replicated items, should be in a non-critical state and its components should be upgraded to the latest version (9.50 or later).|
|A Recovery Services vault with modernized experience|A Recovery Services vault with modernized experience.|
|A healthy Azure Site Recovery replication appliance|A non-critical Azure Site Recovery replication appliance, which can discover on-premises machines, with all its components upgraded to the latest version (9.50 or later). The exact required versions are as follows:<br></br>Process server: 9.50<br>Proxy server: 1.35.8419.34591<br>Recovery services agent: 2.0.9249.0<br>Replication service: 1.35.8433.24227|
Ensure the following for the replicated items you are planning to move:
- The Recovery Services vault does not have MSI enabled on it.
- The replicated item is a VMware machine replicating via a configuration server.
-- Replication is not happening to an unmanaged storage account but rather to managed disk.
+- Replication is not happening to an un-managed storage account but rather to managed disk.
- Replication is happening from on-premises to Azure and the replicated item is not in a failed-over or failed-back state.
- The replicated item is not replicating the data from Azure to on-premises.
- The initial replication is not in progress and has already been completed.
Site Recovery will start charging license fee on replicated items in the moderni
Ultimately, the classic architecture will be deprecated, so you must ensure that you are using the latest modernized architecture. The table below shows a comparison of the two architectures to help you select the correct option for enabling disaster recovery for your machines:
-|Classic architecture|Modernized architecture [New]|
+|Classic architecture| Modernized architecture [New]|
||--|
|Multiple setups required for discovering on-premises data.|**Central discovery** of on-premises data center using discovery service.|
|Extensive number of steps required for initial onboarding.|**Simplified the onboarding experience** by automating artifact creation and introduced defaults to reduce required inputs.|
site-recovery Physical Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-azure-architecture.md
Title: Physical server disaster recovery architecture in Azure Site Recovery
description: This article provides an overview of components and architecture used during disaster recovery of on-premises physical servers to Azure with the Azure Site Recovery service. Last updated 02/11/2020++ # Physical server to Azure disaster recovery architecture
site-recovery Physical Azure Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-azure-disaster-recovery.md
description: Learn how to set up disaster recovery to Azure for on-premises Wind
Last updated 05/02/2022++
site-recovery Physical Server Azure Architecture Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-server-azure-architecture-modernized.md
description: This article provides an overview of components and architecture us
Last updated 09/21/2022++ # Physical server to Azure disaster recovery architecture ΓÇô Modernized
site-recovery Physical To Azure Failover Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-to-azure-failover-failback.md
Last updated 12/17/2019++ # Fail over and fail back physical servers replicated to Azure
site-recovery Quickstart Create Vault Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-create-vault-template.md
description: In this quickstart, you learn how to create an Azure Recovery Servi
Last updated 09/21/2022 ++ # Quickstart: Create a Recovery Services vault using an ARM template
site-recovery Recovery Plan Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/recovery-plan-overview.md
Title: About recovery plans in Azure Site Recovery
description: Learn about recovery plans in Azure Site Recovery. Last updated 01/23/2020-++ # About recovery plans
site-recovery Site Recovery Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-capacity-planner.md
Last updated 11/12/2019-++
site-recovery Site Recovery Create Recovery Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-create-recovery-plans.md
Title: Create/customize recovery plans in Azure Site Recovery
description: Learn how to create and customize recovery plans for disaster recovery using the Azure Site Recovery service. Last updated 01/23/2020++ # Create and customize recovery plans
site-recovery Site Recovery Dynamicsax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-dynamicsax.md
Title: Disaster recovery of Dynamics AX with Azure Site Recovery description: Learn how to set up disaster recovery for Dynamics AX with Azure Site Recovery+
site-recovery Site Recovery Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-failover.md
description: How to fail over VMs/physical servers to Azure with Azure Site Reco
Last updated 12/10/2019-++ # Run a failover from on-premises to Azure
site-recovery Site Recovery Monitor And Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-monitor-and-troubleshoot.md
description: Monitor and troubleshoot Azure Site Recovery replication issues and
Last updated 07/30/2019-++ # Monitor Site Recovery
site-recovery Site Recovery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-overview.md
description: Provides an overview of the Azure Site Recovery service, and summar
Last updated 09/21/2022 ++ # About Site Recovery
site-recovery Site Recovery Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-sap.md
Title: Set up SAP NetWeaver disaster recovery with Azure Site Recovery description: Learn how to set up disaster recovery for SAP NetWeaver with Azure Site Recovery.+
site-recovery Site Recovery Test Failover To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-test-failover-to-azure.md
description: Learn about running a test failover from on-premises to Azure, usin
Last updated 11/14/2019-++ # Run a test failover (disaster recovery drill) to Azure
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
Title: What's new in Azure Site Recovery description: Provides a summary of new features and the latest updates in the Azure Site Recovery service. -++ Last updated 07/28/2021
site-recovery Site Recovery Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-workload.md
Title: About disaster recovery for on-premises apps with Azure Site Recovery
description: Describes the workloads that can be protected using disaster recovery with the Azure Site Recovery service. Last updated 03/18/2020++ # About disaster recovery for on-premises apps
site-recovery Switch Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/switch-replication-appliance-modernized.md
description: This article describes how to switch between different replication
Last updated 09/21/2022++ # Switch Azure Site Recovery replication appliance
site-recovery Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/transport-layer-security.md
Title: Transport Layer Security in Azure Site Recovery
description: Learn how to enable Azure Site Recovery to use the encryption protocol Transport Layer Security (TLS) to keep data secure when being transferred over a network. Last updated 11/01/2020++ # Transport Layer Security in Azure Site Recovery
site-recovery Tutorial Dr Drill Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-dr-drill-azure.md
Last updated 11/12/2019 ++ # Run a disaster recovery drill to Azure
site-recovery Tutorial Prepare Azure For Hyperv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-prepare-azure-for-hyperv.md
Last updated 11/14/2019 -++ # Prepare Azure resources for Hyper-V disaster recovery
site-recovery Tutorial Prepare Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-prepare-azure.md
Last updated 09/09/2019 -++ # Prepare Azure for on-premises disaster recovery to Azure
site-recovery Unregister Vmm Server Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/unregister-vmm-server-script.md
Last updated 03/25/2021++ # Cleanup script on a VMM server
site-recovery Upgrade Mobility Service Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/upgrade-mobility-service-modernized.md
description: This article describes automatic updates for mobility agent a
Last updated 09/21/2022++
site-recovery Vmware Azure About Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-about-disaster-recovery.md
description: This article provides an overview of disaster recovery of VMware VM
Last updated 08/19/2021++ # About disaster recovery of VMware VMs to Azure
site-recovery Vmware Azure Architecture Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture-modernized.md
description: This article provides an overview of components and architecture us
Last updated 09/21/2022++ # VMware to Azure disaster recovery architecture - Modernized
site-recovery Vmware Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture.md
description: This article provides an overview of components and architecture us
Last updated 08/19/2021++ # VMware to Azure disaster recovery architecture - Classic
site-recovery Vmware Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-common-questions.md
Title: Common questions about VMware disaster recovery with Azure Site Recovery
description: Get answers to common questions about disaster recovery of on-premises VMware VMs to Azure by using Azure Site Recovery. Last updated 11/14/2019 ++ # Common questions about VMware to Azure replication
site-recovery Vmware Azure Configuration Server Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-configuration-server-requirements.md
description: This article describes support and requirements when deploying the
Last updated 08/19/2021++ # Configuration server requirements for VMware disaster recovery to Azure
site-recovery Vmware Azure Multi Tenant Csp Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-multi-tenant-csp-disaster-recovery.md
Title: Set up VMware disaster recovery to Azure in a multi-tenancy environment using Site Recovery and the Cloud Solution Provider (CSP) program | Microsoft Docs description: Describes how to set up VMware disaster recovery in a multi-tenant environment with Azure Site Recovery.+ Last updated 11/27/2018-
site-recovery Vmware Azure Prepare Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-prepare-failback.md
Title: Prepare VMware VMs for reprotection and failback with Azure Site Recovery
description: Prepare for fail back of VMware VMs after failover with Azure Site Recovery Last updated 12/24/2019++ # Prepare for reprotection and failback of VMware VMs
site-recovery Vmware Azure Set Up Replication Tutorial Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication-tutorial-modernized.md
Last updated 09/21/2022 -++ # Set up disaster recovery to Azure for on-premises VMware VMs - Modernized
site-recovery Vmware Azure Troubleshoot Upgrade Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-upgrade-failures.md
description: Resolve common issues that occur when upgrading the Microsoft Azure
Last updated 11/10/2019++ # Troubleshoot Microsoft Azure Site Recovery Provider upgrade failures
site-recovery Vmware Azure Tutorial Failover Failback Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial-failover-failback-modernized.md
Last updated 08/19/2021 ++ # Fail over VMware VMs - Modernized
site-recovery Vmware Azure Tutorial Failover Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial-failover-failback.md
Last updated 08/19/2021 ++ # Fail over VMware VMs - Classic
site-recovery Vmware Azure Tutorial Prepare On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial-prepare-on-premises.md
Last updated 11/12/2019 -++ # Prepare on-premises VMware servers for disaster recovery to Azure
site-recovery Vmware Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial.md
Last updated 02/05/2022 -++ # Set up disaster recovery to Azure for on-premises VMware VMs - Classic
site-recovery Vmware Physical Azure Config Process Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-config-process-server-overview.md
Title: About Azure Site Recovery configuration/process/master target servers
description: This article provides an overview of the configuration, process, and master target servers used when setting up disaster recovery of on-premises VMware VMs to Azure with Azure Site Recovery Last updated 08/19/2021++ # About Site Recovery components (configuration, process, master target)
site-recovery Vmware Physical Azure Monitor Process Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-monitor-process-server.md
description: This article describes how to monitor Azure Site Recovery process s
Last updated 11/14/2019-++ # Monitor the process server
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Title: Support matrix for VMware/physical disaster recovery in Azure Site Recove
description: Summarizes support for disaster recovery of VMware VMs and physical servers to Azure using Azure Site Recovery. Last updated 09/21/2022++ # Support matrix for disaster recovery of VMware VMs and physical servers to Azure
site-recovery Vmware Physical Azure Troubleshoot Process Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-troubleshoot-process-server.md
description: This article describes how to troubleshoot issues with the Azure Si
Last updated 09/09/2019-++ # Troubleshoot the process server
site-recovery Vmware Physical Large Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-large-deployment.md
description: Learn how to set up disaster recovery to Azure for large numbers of
Last updated 11/14/2019-++ # Set up disaster recovery at scale for VMware VMs/physical servers
site-recovery Vmware Physical Secondary Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-secondary-architecture.md
Last updated 11/12/2019 + # Architecture for VMware/physical server replication to a secondary on-premises site
site-recovery Vmware Physical Secondary Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-secondary-disaster-recovery.md
Last updated 11/05/2019 + # Set up disaster recovery of on-premises VMware virtual machines or physical servers to a secondary site
site-recovery Vmware Physical Secondary Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-secondary-support-matrix.md
Last updated 11/14/2019 + # Support matrix for disaster recovery of VMware VMs and physical servers to a secondary site
static-web-apps Languages Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/languages-runtimes.md
The following versions are supported for managed functions in Static Web Apps. I
## Deprecations
-The following runtimes are deprecated in Azure Static Web Apps. For more information about changing your runtime, see [Specify API language runtime version in Azure Static Web Apps](https://azure.microsoft.com/updates/generally-available-specify-api-language-runtime-version-in-azure-static-web-apps/) and [Azure Functions runtime versions overview](../azure-functions/functions-versions.md?pivots=programming-language-csharp&tabs=azure-powershell%2cin-process%2cv4#upgrade-your-local-project).
+The following runtimes are deprecated in Azure Static Web Apps. For more information about changing your runtime, see [Specify API language runtime version in Azure Static Web Apps](https://azure.microsoft.com/updates/generally-available-specify-api-language-runtime-version-in-azure-static-web-apps/) and [Migrate apps from Azure Functions version 3.x to version 4.x](../azure-functions/migrate-version-3-version-4.md).
- .NET Core 3.1 - Node.js 12.x
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
A per-transaction charge applies to all tiers and increases as the tier gets coo
### Geo-replication data transfer costs
-This charge only applies to accounts with geo-replication configured, including GRS and RA-GRS. Geo-replication data transfer incurs a per-gigabyte charge.
+This charge only applies to accounts with geo-replication configured, including GRS, RA-GRS, and GZRS. Geo-replication data transfer incurs a per-gigabyte charge.
### Outbound data transfer costs
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
The following table shows the behavior of a blob copy operation, depending on th
If you've configured your storage account to use read-access geo-redundant storage (RA-GRS), then you can use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to rehydrate blobs in the secondary region to another storage account that is located in that same secondary region. See [Rehydrate from a secondary region](archive-rehydrate-to-online-tier.md#rehydrate-from-a-secondary-region).
-To learn more about obtaining read access to secondary regions, see [Read access to data in the secondary region](../common/storage-redundancy.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#read-access-to-data-in-the-secondary-region).
+To learn more about obtaining read access to secondary regions, see [Read access to data in the secondary region](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json#read-access-to-data-in-the-secondary-region).
## Change a blob's access tier to an online tier
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
If you've configured your storage account to use read-access geo-redundant stora
To rehydrate from a secondary region, use the same guidance that is presented in the previous section ([Rehydrate a blob to a different storage account in the same region](#rehydrate-a-blob-to-a-different-storage-account-in-the-same-region)). Append the suffix `-secondary` to the account name of the source endpoint. For example, if your primary endpoint for Blob storage is `myaccount.blob.core.windows.net`, then the secondary endpoint is `myaccount-secondary.blob.core.windows.net`. The account access keys for your storage account are the same for both the primary and secondary endpoints.
-To learn more about obtaining read access to secondary regions, see [Read access to data in the secondary region](../common/storage-redundancy.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#read-access-to-data-in-the-secondary-region).
+To learn more about obtaining read access to secondary regions, see [Read access to data in the secondary region](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json#read-access-to-data-in-the-secondary-region).
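To make the endpoint arithmetic concrete, here's a minimal Python sketch (not the article's own sample; account, container, and blob names plus the SAS token are placeholders) that issues a rehydrating copy from the `-secondary` endpoint with the `azure-storage-blob` v12 library:

```python
# Minimal sketch: rehydrate an archived blob from the RA-GRS secondary endpoint
# by copying it to a storage account located in that same secondary region.
from azure.storage.blob import BlobClient, StandardBlobTier

source_account = "myaccount"
# The secondary endpoint is the account name plus the `-secondary` suffix.
source_url = (
    f"https://{source_account}-secondary.blob.core.windows.net"
    "/source-container/archived-blob?<sas-token>"
)

destination = BlobClient.from_connection_string(
    "<destination-connection-string>",  # account in the same secondary region
    container_name="destination-container",
    blob_name="rehydrated-blob",
)

# Copy Blob rehydrates when the destination tier is an online tier.
destination.start_copy_from_url(
    source_url,
    standard_blob_tier=StandardBlobTier.HOT,
    rehydrate_priority="High",
)
```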
## Rehydrate a blob by changing its tier
storage Blob Containers Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-cli.md
In this how-to article, you learn to use the Azure CLI with Bash to work with co
You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using the storage account access key. Using Azure AD credentials is recommended, and this article's examples use Azure AD exclusively.
-Azure CLI commands for data operations against Blob storage support the `--auth-mode` parameter, which enables you to specify how to authorize a given operation. Set the `--auth-mode` parameter to `login` to authorize with Azure AD credentials. For more information, see [Authorize access to blob or queue data with Azure CLI](./authorize-data-operations-cli.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+Azure CLI commands for data operations against Blob storage support the `--auth-mode` parameter, which enables you to specify how to authorize a given operation. Set the `--auth-mode` parameter to `login` to authorize with Azure AD credentials. For more information, see [Authorize access to blob or queue data with Azure CLI](./authorize-data-operations-cli.md?toc=/azure/storage/blobs/toc.json).
Run the `login` command to open a browser and connect to your Azure subscription.
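Outside the CLI, the same Azure AD-based authorization pattern applies to the SDKs. As an illustrative Python sketch (the account URL is a placeholder), `DefaultAzureCredential` plays the role of `--auth-mode login`:

```python
# Illustrative SDK analogue of the CLI's `--auth-mode login`: authorize blob
# data operations with Azure AD credentials instead of an account key.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Picks up `az login`, environment variables, or a managed identity.
credential = DefaultAzureCredential()

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=credential,
)

# Requires an RBAC data role such as Storage Blob Data Reader.
for container in service.list_containers():
    print(container.name)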
In this how-to article, you learned how to manage containers in Blob Storage. To
> [Manage block blobs with Azure CLI](blob-cli.md)
> [!div class="nextstepaction"]
-> [Azure CLI samples for Blob storage](storage-samples-blobs-cli.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+> [Azure CLI samples for Blob storage](storage-samples-blobs-cli.md?toc=/azure/storage/blobs/toc.json)
storage Blob Containers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-portal.md
You can restore a soft-deleted container and its contents within the retention p
## See also -- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=/azure/storage/blobs/toc.json)
- [Manage blob containers using PowerShell](blob-containers-powershell.md) <!--Point-in-time restore: /azure/storage/blobs/point-in-time-restore-manage?tabs=portal-->
storage Blob Containers Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-powershell.md
loop-container4
## See also - [Run PowerShell commands with Azure AD credentials to access blob data](./authorize-data-operations-powershell.md)-- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=/azure/storage/blobs/toc.json)
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-powershell.md
else
## Next steps - [Run PowerShell commands with Azure AD credentials to access blob data](./authorize-data-operations-powershell.md)-- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=/azure/storage/blobs/toc.json)
- [Manage blob containers using PowerShell](blob-containers-powershell.md)
storage Blob Storage Monitoring Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-storage-monitoring-scenarios.md
This article features a collection of common storage monitoring scenarios, and p
## Identify storage accounts with no or low use
-Storage Insights is a dashboard on top of Azure Storage metrics and logs. You can use Storage Insights to examine the transaction volume and used capacity of all your accounts. That information can help you decide which accounts you might want to retire. To configure Storage Insights, see [Monitoring your storage service with Azure Monitor Storage insights](../common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+Storage Insights is a dashboard on top of Azure Storage metrics and logs. You can use Storage Insights to examine the transaction volume and used capacity of all your accounts. That information can help you decide which accounts you might want to retire. To configure Storage Insights, see [Monitoring your storage service with Azure Monitor Storage insights](../common/storage-insights-overview.md?toc=/azure/azure-monitor/toc.json).
### Analyze transaction volume
-From the [Storage Insights view in Azure monitor](../common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json#view-from-azure-monitor), sort your accounts in ascending order by using the **Transactions** column. The following image shows an account with low transaction volume over the specified period.
+From the [Storage Insights view in Azure monitor](../common/storage-insights-overview.md?toc=/azure/azure-monitor/toc.json#view-from-azure-monitor), sort your accounts in ascending order by using the **Transactions** column. The following image shows an account with low transaction volume over the specified period.
> [!div class="mx-imgBorder"]
> ![transaction volume in Storage Insights](./media/blob-storage-monitoring-scenarios/storage-insights-transaction-volume.png)
```kusto
StorageBlobLogs
| project TimeGenerated, AuthenticationType, RequesterObjectId, OperationName, Uri
```
-Shared Key and SAS authentication provide no means of auditing individual identities. Therefore, if you want to improve your ability to audit based on identity, we recommended that you transition to Azure AD, and prevent shared key and SAS authentication. To learn how to prevent Shared Key and SAS authentication, see [Prevent Shared Key authorization for an Azure Storage account](../common/shared-key-authorization-prevent.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=portal). To get started with Azure AD, see [Authorize access to blobs using Azure Active Directory](authorize-access-azure-active-directory.md).
+Shared Key and SAS authentication provide no means of auditing individual identities. Therefore, if you want to improve your ability to audit based on identity, we recommend that you transition to Azure AD, and prevent shared key and SAS authentication. To learn how to prevent Shared Key and SAS authentication, see [Prevent Shared Key authorization for an Azure Storage account](../common/shared-key-authorization-prevent.md?toc=/azure/storage/blobs/toc.json&tabs=portal). To get started with Azure AD, see [Authorize access to blobs using Azure Active Directory](authorize-access-azure-active-directory.md).
#### Identifying the SAS token used to authorize a request
storage Blobfuse2 What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-what-is.md
Previously updated : 10/31/2022 Last updated : 11/07/2022
-# What is BlobFuse2 (preview)?
+# What is BlobFuse? - BlobFuse2 (preview)
BlobFuse is a virtual file system driver for Azure Blob Storage. Use BlobFuse to access your existing Azure block blob data through the Linux file system.
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md
After you update your code to use client-side encryption v2, make sure that you
### [Python v12 SDK](#tab/python)
-To use client-side encryption from your Python code, reference the [Blob Storage client library](/python/api/overview/azure/storage-blob-readme). Make sure that you are using version 12.13.0 or later. If you need to migrate from an earlier version of the Java client library, see the [Blob Storage migration guide for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/storage/azure-storage-blob/migration_guide.md).
+To use client-side encryption from your Python code, reference the [Blob Storage client library](/python/api/overview/azure/storage-blob-readme). Make sure that you are using version 12.13.0 or later. If you need to migrate from an earlier version of the Python client library, see the [Blob Storage migration guide for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/storage/azure-storage-blob/migration_guide.md).
The following example shows how to use client-side encryption v2 from Python:
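The sample itself isn't captured in this digest. As a rough, illustrative sketch (assuming the attribute-based encryption settings of `azure-storage-blob` 12.13.0+; `DemoKeyWrapper` below is a toy placeholder, not a real key-encryption key), an upload with client-side encryption v2 might look like:

```python
# Illustrative only: client-side encryption v2 with azure-storage-blob >= 12.13.0.
from azure.storage.blob import BlobServiceClient


class DemoKeyWrapper:
    """Toy key-encryption-key for demonstration. A real implementation would
    wrap the content key with a key held in, for example, Azure Key Vault."""

    kid = "local:demo-kek"

    def get_kid(self):
        return self.kid

    def get_key_wrap_algorithm(self):
        return "demo-no-op"

    def wrap_key(self, key):
        return key  # no real protection; demo only

    def unwrap_key(self, key, algorithm):
        return key


service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="sample-container", blob="sample-blob")

blob.require_encryption = True            # refuse unencrypted uploads/downloads
blob.key_encryption_key = DemoKeyWrapper()
blob.encryption_version = "2.0"           # opt in to client-side encryption v2

blob.upload_blob(b"hello, encrypted world", overwrite=True)
```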
storage Create Data Lake Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/create-data-lake-storage-account.md
To use Data Lake Storage Gen2 capabilities, create a storage account that has a hierarchical namespace.
-For step-by-step guidance, see [Create a storage account](../common/storage-account-create.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json).
+For step-by-step guidance, see [Create a storage account](../common/storage-account-create.md?toc=/azure/storage/blobs/toc.json).
As you create the account, make sure to select the options described in this article.
Data Lake Storage capabilities are supported in the following types of storage a
- Standard general-purpose v2 - Premium block blob
-For information about how to choose between them, see [storage account overview](../common/storage-account-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json).
+For information about how to choose between them, see [storage account overview](../common/storage-account-overview.md?toc=/azure/storage/blobs/toc.json).
You can choose between these two types of accounts in the **Basics** tab of the **Create a storage account** page.
storage Data Lake Storage Access Control Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control-model.md
By using groups, you're less likely to exceed the maximum number of role assignm
## Shared Key and Shared Access Signature (SAS) authorization
-Azure Data Lake Storage Gen2 also supports [Shared Key](/rest/api/storageservices/authorize-with-shared-key) and [SAS](../common/storage-sas-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) methods for authentication. A characteristic of these authentication methods is that no identity is associated with the caller and therefore security principal permission-based authorization cannot be performed.
+Azure Data Lake Storage Gen2 also supports [Shared Key](/rest/api/storageservices/authorize-with-shared-key) and [SAS](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json) methods for authentication. A characteristic of these authentication methods is that no identity is associated with the caller and therefore security principal permission-based authorization cannot be performed.
In the case of Shared Key, the caller effectively gains 'super-user' access, meaning full access to all operations on all resources including data, setting owner, and changing ACLs.
storage Data Lake Storage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-best-practices.md
This article provides best practice guidelines that help you optimize performanc
For general suggestions around structuring a data lake, see these articles: -- [Overview of Azure Data Lake Storage for the data management and analytics scenario](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-overview?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)-- [Provision three Azure Data Lake Storage Gen2 accounts for each data landing zone](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-services?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Overview of Azure Data Lake Storage for the data management and analytics scenario](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-overview?toc=/azure/storage/blobs/toc.json)
+- [Provision three Azure Data Lake Storage Gen2 accounts for each data landing zone](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-services?toc=/azure/storage/blobs/toc.json)
## Find documentation
storage Data Lake Storage Explorer Acl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer-acl.md
After you successfully sign in with an Azure account, the account and the Azure
:::image type="content" alt-text="Screenshot that shows Microsoft Azure Storage Explorer, and highlights the Account Management pane and Open Explorer button." source="./media/data-lake-storage-explorer-acl/storage-explorer-account-panel-sml.png" lightbox="./media/data-lake-storage-explorer-acl/storage-explorer-account-panel-sml.png":::
-When it completes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight to all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
+When it finishes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight into all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=/azure/storage/blobs/toc.json) or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=/azure/storage/blobs/toc.json) environments.
:::image type="content" alt-text="Microsoft Azure Storage Explorer - Connect window" source="./media/data-lake-storage-explorer-acl/storage-explorer-main-page-sml.png" lightbox="./media/data-lake-storage-explorer-acl/storage-explorer-main-page-lrg.png":::
storage Data Lake Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer.md
After you successfully sign in with an Azure account, the account and the Azure
:::image type="content" alt-text="Screenshot that shows Microsoft Azure Storage Explorer, and highlights the Account Management pane and Open Explorer button." source="./media/data-lake-storage-explorer/storage-explorer-account-panel-sml.png" lightbox="./media/data-lake-storage-explorer-acl/storage-explorer-account-panel-sml.png":::
-When it completes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight to all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
+When it finishes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight into all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=/azure/storage/blobs/toc.json) or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=/azure/storage/blobs/toc.json) environments.
:::image type="content" alt-text="Microsoft Azure Storage Explorer - Connect window" source="./media/data-lake-storage-explorer/storage-explorer-main-page-sml.png" lightbox="./media/data-lake-storage-explorer-acl/storage-explorer-main-page-lrg.png":::
storage Data Lake Storage Introduction Abfs Uri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-introduction-abfs-uri.md
However, if the account you wish to address is set as the default file system du
## Next steps -- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json)
storage Data Lake Storage Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-known-issues.md
If [anonymous read access](./anonymous-read-access-configure.md) has been grante
## AzCopy
-Use only the latest version of AzCopy ([AzCopy v10](../common/storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json)). Earlier versions of AzCopy such as AzCopy v8.1, are not supported.
+Use only the latest version of AzCopy ([AzCopy v10](../common/storage-use-azcopy-v10.md?toc=/azure/storage/tables/toc.json)). Earlier versions of AzCopy such as AzCopy v8.1, are not supported.
<a id="storage-explorer"></a>
storage Data Lake Storage Migrate Gen1 To Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md
This table compares the capabilities of Gen1 to that of Gen2.
|Geo-redundancy| [LRS](../common/storage-redundancy.md#locally-redundant-storage)| [LRS](../common/storage-redundancy.md#locally-redundant-storage), [ZRS](../common/storage-redundancy.md#zone-redundant-storage), [GRS](../common/storage-redundancy.md#geo-redundant-storage), [RA-GRS](../common/storage-redundancy.md#read-access-to-data-in-the-secondary-region) |
|Authentication|[Azure Active Directory (Azure AD) managed identity](../../active-directory/managed-identities-azure-resources/overview.md)<br>[Service principals](../../active-directory/develop/app-objects-and-service-principals.md)|[Azure AD managed identity](../../active-directory/managed-identities-azure-resources/overview.md)<br>[Service principals](../../active-directory/develop/app-objects-and-service-principals.md)<br>[Shared Access Key](/rest/api/storageservices/authorize-with-shared-key)|
|Authorization|Management - [Azure RBAC](../../role-based-access-control/overview.md)<br>Data - [ACLs](data-lake-storage-access-control.md)|Management - [Azure RBAC](../../role-based-access-control/overview.md)<br>Data - [ACLs](data-lake-storage-access-control.md), [Azure RBAC](../../role-based-access-control/overview.md) |
-|Encryption - Data at rest|Server side - with [Microsoft-managed](../common/storage-service-encryption.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [customer-managed](../common/customer-managed-keys-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) keys|Server side - with [Microsoft-managed](../common/storage-service-encryption.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [customer-managed](../common/customer-managed-keys-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) keys|
-|VNET Support|[VNET Integration](../../data-lake-store/data-lake-store-network-security.md)|[Service Endpoints](../common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [Private Endpoints](../common/storage-private-endpoints.md)|
+|Encryption - Data at rest|Server side - with [Microsoft-managed](../common/storage-service-encryption.md?toc=/azure/storage/blobs/toc.json) or [customer-managed](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) keys|Server side - with [Microsoft-managed](../common/storage-service-encryption.md?toc=/azure/storage/blobs/toc.json) or [customer-managed](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) keys|
+|VNET Support|[VNET Integration](../../data-lake-store/data-lake-store-network-security.md)|[Service Endpoints](../common/storage-network-security.md?toc=/azure/storage/blobs/toc.json), [Private Endpoints](../common/storage-private-endpoints.md)|
|Developer experience|[REST](../../data-lake-store/data-lake-store-data-operations-rest-api.md), [.NET](../../data-lake-store/data-lake-store-data-operations-net-sdk.md), [Java](../../data-lake-store/data-lake-store-get-started-java-sdk.md), [Python](../../data-lake-store/data-lake-store-data-operations-python.md), [PowerShell](../../data-lake-store/data-lake-store-get-started-powershell.md), [Azure CLI](../../data-lake-store/data-lake-store-get-started-cli-2.0.md)|Generally available - [REST](/rest/api/storageservices/data-lake-storage-gen2), [.NET](data-lake-storage-directory-file-acl-dotnet.md), [Java](data-lake-storage-directory-file-acl-java.md), [Python](data-lake-storage-directory-file-acl-python.md)<br>Public preview - [JavaScript](data-lake-storage-directory-file-acl-javascript.md), [PowerShell](data-lake-storage-directory-file-acl-powershell.md), [Azure CLI](data-lake-storage-directory-file-acl-cli.md)|
|Resource logs|Classic logs<br>[Azure Monitor integrated](../../data-lake-store/data-lake-store-diagnostic-logs.md)|[Classic logs](../common/storage-analytics-logging.md) - Generally available<br>[Azure Monitor integrated](monitor-blob-storage.md) - Preview|
|Ecosystem|[HDInsight (3.6)](../../data-lake-store/data-lake-store-hdinsight-hadoop-use-portal.md), [Azure Databricks (3.1 and above)](https://docs.databricks.com/dat)|
storage Data Lake Storage Supported Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-supported-azure-services.md
This table lists the Azure services that you can use with Azure Data Lake Storag
|Azure service |Support level |Azure AD |Shared Key| Related articles |
|-|-|-|-|-|
-|Azure Data Factory|Generally available|Yes|Yes|<ul><li>[Load data into Azure Data Lake Storage Gen2 with Azure Data Factory](../../data-factory/load-azure-data-lake-storage-gen2.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)</li></ul>|
+|Azure Data Factory|Generally available|Yes|Yes|<ul><li>[Load data into Azure Data Lake Storage Gen2 with Azure Data Factory](../../data-factory/load-azure-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json)</li></ul>|
|Azure Databricks|Generally available|Yes|Yes|<ul><li>[Use with Azure Databricks](/azure/databricks/dat)</li></ul>|
|Azure Event Hub|Generally available|No|Yes|<ul><li>[Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage](../../event-hubs/event-hubs-capture-overview.md)</li></ul>|
|Azure Event Grid|Generally available|Yes|Yes|<ul><li>[Tutorial: Implement the data lake capture pattern to update a Databricks Delta table](data-lake-storage-events.md)</li></ul>|
storage Data Lake Storage Tutorial Extract Transform Load Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-tutorial-extract-transform-load-hive.md
All resources used in this tutorial are preexisting. No cleanup is necessary.
To learn more ways to work with data in HDInsight, see the following article:
> [!div class="nextstepaction"]
-> [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+> [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json)
storage Data Lake Storage Use Databricks Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-databricks-spark.md
If you don't have an Azure subscription, create a [free account](https://azure.m
- Make sure that your user account has the [Storage Blob Data Contributor role](assign-azure-role-data-access.md) assigned to it. -- Install AzCopy v10. See [Transfer data with AzCopy v10](../common/storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- Install AzCopy v10. See [Transfer data with AzCopy v10](../common/storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json)
- Create a service principal. See [How to: Use the portal to create an Azure AD application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
storage Data Lake Storage Use Distcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-distcp.md
DistCp provides a variety of command-line parameters and we strongly encourage y
- An existing Azure Storage account without Data Lake Storage Gen2 capabilities (hierarchical namespace) enabled. - An Azure Storage account with Data Lake Storage Gen2 capabilities (hierarchical namespace) enabled. For instructions on how to create one, see [Create an Azure Storage account](../common/storage-account-create.md) - A container that has been created in the storage account with hierarchical namespace enabled.-- An Azure HDInsight cluster with access to a storage account with the hierarchical namespace feature enabled. For more information, see [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json). Make sure you enable Remote Desktop for the cluster.
+- An Azure HDInsight cluster with access to a storage account with the hierarchical namespace feature enabled. For more information, see [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json). Make sure you enable Remote Desktop for the cluster.
## Use DistCp from an HDInsight Linux cluster
storage Data Lake Storage Use Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-sql.md
If you don't have an Azure subscription, create a [free account](https://azure.m
- Make sure that your user account has the [Storage Blob Data Contributor role](assign-azure-role-data-access.md) assigned to it. -- Install AzCopy v10. See [Transfer data with AzCopy v10](../common/storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- Install AzCopy v10. See [Transfer data with AzCopy v10](../common/storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json)
There are a couple of specific things that you'll have to do as you perform the steps in that article.
In this section, you create an Azure Workspace.
1. Select the **Deploy to Azure** button. The template will open in the Azure portal.
- [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FSynapse%2Fmaster%2FManage%2FDeployWorkspace%2Fazuredeploy.json)
+ [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A//raw.githubusercontent.com/Azure-Samples/Synapse/master/Manage/DeployWorkspace/azuredeploy.json)
2. Enter or update the following values:
storage Monitor Blob Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage-reference.md
The following table lists the properties for Azure Storage resource logs when th
{ "properties": { "accountName": "testaccount1",
- "requestUrl": "https://testaccount1.blob.core.windows.net:443/upload?restype=container&comp=list&prefix=&delimiter=%2F&marker=&maxresults=30&include=metadata&_=1551405598426",
+ "requestUrl": "https://testaccount1.blob.core.windows.net:443/upload?restype=container&comp=list&prefix=&delimiter=/&marker=&maxresults=30&include=metadata&_=1551405598426",
"userAgentHeader": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/17.17134", "referrerHeader": "blob:https://portal.azure.com/6f50025f-3b88-488d-b29e-3c592a31ddc9", "clientRequestId": "",
storage Quickstart Blobs C Plus Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-blobs-c-plus-plus.md
Resources:
- [API reference documentation](https://azure.github.io/azure-sdk-for-cpp/storage.html) - [Library source code](https://github.com/Azure/azure-sdk-for-cpp/tree/master/sdk/storage)-- [Samples](../common/storage-samples-c-plus-plus.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Samples](../common/storage-samples-c-plus-plus.md?toc=/azure/storage/blobs/toc.json)
## Prerequisites
storage Quickstart Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-storage-explorer.md
After you successfully sign in with an Azure account, the account and the Azure
:::image type="content" source="media/quickstart-storage-explorer/storage-explorer-account-panel-sml.png" alt-text="Select Azure subscriptions" lightbox="media/quickstart-storage-explorer/storage-explorer-account-panel-lrg.png":::
-After Storage Explorer finishes connecting, it displays the **Explorer** tab. This view gives you insight to all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
+After Storage Explorer finishes connecting, it displays the **Explorer** tab. This view gives you insight into all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=/azure/storage/blobs/toc.json) or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=/azure/storage/blobs/toc.json) environments.
:::image type="content" source="media/quickstart-storage-explorer/storage-explorer-main-page-sml.png" alt-text="Screenshot showing Storage Explorer main page" lightbox="media/quickstart-storage-explorer/storage-explorer-main-page-lrg.png":::
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md
Microsoft Defender for Cloud periodically analyzes the security state of your Az
|-|-|--|
| Use Azure Active Directory (Azure AD) to authorize access to blob data | Azure AD provides superior security and ease of use over Shared Key for authorizing requests to Blob storage. For more information, see [Authorize access to data in Azure Storage](../common/authorize-data-access.md). | - |
| Keep in mind the principle of least privilege when assigning permissions to an Azure AD security principal via Azure RBAC | When assigning a role to a user, group, or application, grant that security principal only those permissions that are necessary for them to perform their tasks. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
-| Use a user delegation SAS to grant limited access to blob data to clients | A user delegation SAS is secured with Azure Active Directory (Azure AD) credentials and also by the permissions specified for the SAS. A user delegation SAS is analogous to a service SAS in terms of its scope and function, but offers security benefits over the service SAS. For more information, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json). | - |
+| Use a user delegation SAS to grant limited access to blob data to clients | A user delegation SAS is secured with Azure Active Directory (Azure AD) credentials and also by the permissions specified for the SAS. A user delegation SAS is analogous to a service SAS in terms of its scope and function, but offers security benefits over the service SAS. For more information, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json). | - |
| Secure your account access keys with Azure Key Vault | Microsoft recommends using Azure AD to authorize requests to Azure Storage. However, if you must use Shared Key authorization, then secure your account keys with Azure Key Vault. You can retrieve the keys from the key vault at runtime, instead of saving them with your application. For more information about Azure Key Vault, see [Azure Key Vault overview](../../key-vault/general/overview.md). | - |
| Regenerate your account keys periodically | Rotating the account keys periodically reduces the risk of exposing your data to malicious actors. | - |
| Disallow Shared Key authorization | When you disallow Shared Key authorization for a storage account, Azure Storage rejects all subsequent requests to that account that are authorized with the account access keys. Only secured requests that are authorized with Azure AD will succeed. For more information, see [Prevent Shared Key authorization for an Azure Storage account](../common/shared-key-authorization-prevent.md). | - |
Microsoft Defender for Cloud periodically analyzes the security state of your Az
| Recommendation | Comments | Defender for Cloud |
|-|-|--|
-| Configure the minimum required version of Transport Layer Security (TLS) for a storage account. | Require that clients use a more secure version of TLS to make requests against an Azure Storage account by configuring the minimum version of TLS for that account. For more information, see [Configure minimum required version of Transport Layer Security (TLS) for a storage account](../common/transport-layer-security-configure-minimum-version.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)| - |
-| Enable the **Secure transfer required** option on all of your storage accounts | When you enable the **Secure transfer required** option, all requests made against the storage account must take place over secure connections. Any requests made over HTTP will fail. For more information, see [Require secure transfer in Azure Storage](../common/storage-require-secure-transfer.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json). | [Yes](../../defender-for-cloud/implement-security-recommendations.md) |
-| Enable firewall rules | Configure firewall rules to limit access to your storage account to requests that originate from specified IP addresses or ranges, or from a list of subnets in an Azure Virtual Network (VNet). For more information about configuring firewall rules, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json). | - |
-| Allow trusted Microsoft services to access the storage account | Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. You can permit requests from other Azure services by adding an exception to allow trusted Microsoft services to access the storage account. For more information about adding an exception for trusted Microsoft services, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).| - |
+| Configure the minimum required version of Transport Layer Security (TLS) for a storage account. | Require that clients use a more secure version of TLS to make requests against an Azure Storage account by configuring the minimum version of TLS for that account. For more information, see [Configure minimum required version of Transport Layer Security (TLS) for a storage account](../common/transport-layer-security-configure-minimum-version.md?toc=/azure/storage/blobs/toc.json)| - |
+| Enable the **Secure transfer required** option on all of your storage accounts | When you enable the **Secure transfer required** option, all requests made against the storage account must take place over secure connections. Any requests made over HTTP will fail. For more information, see [Require secure transfer in Azure Storage](../common/storage-require-secure-transfer.md?toc=/azure/storage/blobs/toc.json). | [Yes](../../defender-for-cloud/implement-security-recommendations.md) |
+| Enable firewall rules | Configure firewall rules to limit access to your storage account to requests that originate from specified IP addresses or ranges, or from a list of subnets in an Azure Virtual Network (VNet). For more information about configuring firewall rules, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md?toc=/azure/storage/blobs/toc.json). | - |
+| Allow trusted Microsoft services to access the storage account | Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. You can permit requests from other Azure services by adding an exception to allow trusted Microsoft services to access the storage account. For more information about adding an exception for trusted Microsoft services, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md?toc=/azure/storage/blobs/toc.json).| - |
| Use private endpoints | A private endpoint assigns a private IP address from your Azure Virtual Network (VNet) to the storage account. It secures all traffic between your VNet and the storage account over a private link. For more information about private endpoints, see [Connect privately to a storage account using Azure Private Endpoint](../../private-link/tutorial-private-endpoint-storage-portal.md). | - |
| Use VNet service tags | A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change. For more information about service tags supported by Azure Storage, see [Azure service tags overview](../../virtual-network/service-tags-overview.md). For a tutorial that shows how to use service tags to create outbound network rules, see [Restrict access to PaaS resources](../../virtual-network/tutorial-restrict-network-access-to-resources.md). | - |
| Limit network access to specific networks | Limiting network access to networks hosting clients requiring access reduces the exposure of your resources to network attacks. | [Yes](../../defender-for-cloud/implement-security-recommendations.md) |
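Several of the table's recommendations are account-level properties. As an illustrative sketch (subscription, group, and account names are placeholders), the minimum TLS version and the **Secure transfer required** option can be set through the Python management SDK:

```python
# Illustrative sketch: enforce a minimum TLS version and HTTPS-only traffic.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.storage_accounts.update(
    "<resource-group>",
    "<storage-account>",
    StorageAccountUpdateParameters(
        minimum_tls_version="TLS1_2",     # reject TLS 1.0/1.1 clients
        enable_https_traffic_only=True,   # the "Secure transfer required" option
    ),
)
```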
storage Storage Blob Account Delegation Sas Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-account-delegation-sas-create-javascript.md
This article shows you how to create and use account SAS tokens to use the Azure
The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files.
-[Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) | [API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+[Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
## Account SAS tokens
-An **account SAS token** is one [type of SAS token](../common/storage-sas-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#types-of-shared-access-signatures) for access delegation provided by Azure Storage. An account SAS token provides access to Azure Storage. The token is only as restrictive as you define it when creating it. Because anyone with the token can use it to access your Storage account, you should define the token with the most restrictive permissions that still allow the token to complete the required tasks.
+An **account SAS token** is one [type of SAS token](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json#types-of-shared-access-signatures) for access delegation provided by Azure Storage. An account SAS token provides access to Azure Storage. The token is only as restrictive as you define it when creating it. Because anyone with the token can use it to access your Storage account, you should define the token with the most restrictive permissions that still allow the token to complete the required tasks.
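As a rough illustration of that principle, the following minimal sketch (not taken from the article's samples; it assumes the `@azure/storage-blob` package and environment variables holding the account name and key) creates a short-lived, read-only token scoped to blob objects only:

```javascript
// A hedged sketch: create a narrowly scoped account SAS token.
// AZURE_STORAGE_ACCOUNT_NAME and AZURE_STORAGE_ACCOUNT_KEY are assumed env vars.
const {
  StorageSharedKeyCredential,
  generateAccountSASQueryParameters,
  AccountSASPermissions,
  AccountSASServices,
  AccountSASResourceTypes
} = require("@azure/storage-blob");

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
const credential = new StorageSharedKeyCredential(
  accountName,
  process.env.AZURE_STORAGE_ACCOUNT_KEY
);

const sasToken = generateAccountSASQueryParameters(
  {
    expiresOn: new Date(Date.now() + 10 * 60 * 1000),             // short-lived: 10 minutes
    permissions: AccountSASPermissions.parse("r"),                // read only
    services: AccountSASServices.parse("b").toString(),           // Blob service only
    resourceTypes: AccountSASResourceTypes.parse("o").toString()  // objects only
  },
  credential
).toString();
```

Combining the token with the account name then yields the full resource URI, as described next.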
[Best practices for token](../common/storage-sas-overview.md#best-practices-when-using-sas) creation include limiting permissions:
To use the account SAS token, you need to combine it with the account name to cr
## See also
-- [Types of SAS tokens](../common/storage-sas-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json)
-- [How a shared access signature works](../common/storage-sas-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#how-a-shared-access-signature-works)
+- [Types of SAS tokens](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
+- [How a shared access signature works](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json#how-a-shared-access-signature-works)
- [API reference](/javascript/api/@azure/storage-blob/)
- [Library source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob)
- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
Change feed files are stored in the `$blobchangefeed/log/` virtual directory as
### Event record schemas
-For a description of each property, see [Azure Event Grid event schema for Blob Storage](../../event-grid/event-schema-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#event-properties). The BlobPropertiesUpdated and BlobSnapshotCreated events are currently exclusive to change feed and not yet supported for Blob Storage Events.
+For a description of each property, see [Azure Event Grid event schema for Blob Storage](../../event-grid/event-schema-blob-storage.md?toc=/azure/storage/blobs/toc.json#event-properties). The BlobPropertiesUpdated and BlobSnapshotCreated events are currently exclusive to change feed and not yet supported for Blob Storage Events.
> [!NOTE]
> The change feed files for a segment don't immediately appear after a segment is created. The length of delay is within the normal interval of publishing latency of the change feed, which is within a few minutes of the change.
storage Storage Blob Create User Delegation Sas Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md
The permissions granted to a client who possesses the SAS are the intersection o
The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files.
-[Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) | [API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+[Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
## Best practices for user delegation SAS tokens
The preceding server code creates a flow of values in order to create the contai
* Create the [**BlobServiceClient**](/javascript/api/@azure/storage-blob/blobserviceclient) with the [_DefaultAzureCredential_](/javascript/api/@azure/identity/defaultazurecredential)
* Use the [blobServiceClient.getUserDelegationKey](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-getuserdelegationkey) operation to create a [**UserDelegationKey**](/rest/api/storageservices/create-user-delegation-sas)
-* Use the key to create the [**SAS token**](../common/storage-sas-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#sas-token) string with [generateBlobSASQueryParameters](/javascript/api/@azure/storage-blob#@azure-storage-blob-generateblobsasqueryparameters)
+* Use the key to create the [**SAS token**](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json#sas-token) string with [generateBlobSASQueryParameters](/javascript/api/@azure/storage-blob#@azure-storage-blob-generateblobsasqueryparameters)
Once you've created the container SAS token, you can provide it to the client that will consume the token. The client can then use it to list the blobs in a container. A [client code example](#container-use-sas-token) shows how to test the SAS as a consumer.
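Stitched together, that flow might look like the following sketch (an approximation, assuming the `@azure/identity` and `@azure/storage-blob` packages and an environment variable for the account name; `createContainerSas` is a hypothetical helper name):

```javascript
// A minimal sketch of the flow above: credential -> user delegation key -> container SAS.
const { DefaultAzureCredential } = require("@azure/identity");
const {
  BlobServiceClient,
  ContainerSASPermissions,
  generateBlobSASQueryParameters
} = require("@azure/storage-blob");

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME; // assumed env var
const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);

async function createContainerSas(containerName) {
  const startsOn = new Date();
  const expiresOn = new Date(Date.now() + 60 * 60 * 1000); // 1 hour

  // The user delegation key is signed with the Azure AD credential.
  const userDelegationKey = await blobServiceClient.getUserDelegationKey(startsOn, expiresOn);

  // Without a blobName in the options, this produces a container SAS.
  return generateBlobSASQueryParameters(
    {
      containerName,
      permissions: ContainerSASPermissions.parse("l"), // list blobs only
      startsOn,
      expiresOn
    },
    userDelegationKey,
    accountName
  ).toString();
}
```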
The preceding code creates a flow of values in order to create the container SAS
* Create the [**BlobServiceClient**](/javascript/api/@azure/storage-blob/blobserviceclient) with [_DefaultAzureCredential_](/javascript/api/@azure/identity/defaultazurecredential)
* Use the [blobServiceClient.getUserDelegationKey](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-getuserdelegationkey) operation to create a [**UserDelegationKey**](/rest/api/storageservices/create-user-delegation-sas)
-* Use the key to create the [**SAS token**](../common/storage-sas-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#sas-token) string. If the blob name wasn't specified in the options, the SAS token is a container token.
+* Use the key to create the [**SAS token**](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json#sas-token) string. If the blob name wasn't specified in the options, the SAS token is a container token.
Once you've created the blob SAS token, you can provide it to the client that will consume the token. The client can then use it to upload a blob. A [client code example](#blob-use-sas-token) shows how to test the SAS as a consumer.
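On the consuming side, a minimal sketch (the helper name is illustrative, and the token is assumed to carry create and write permissions) of uploading with nothing but the SAS-appended URL:

```javascript
// A sketch of the client side: only the SAS token authorizes the upload.
const { BlockBlobClient } = require("@azure/storage-blob");

async function uploadWithSas(accountName, containerName, blobName, sasToken) {
  const blockBlobClient = new BlockBlobClient(
    `https://${accountName}.blob.core.windows.net/${containerName}/${blobName}?${sasToken}`
  );
  const content = "Hello from a SAS-authorized client";
  await blockBlobClient.upload(content, Buffer.byteLength(content));
}
```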
Once the blob SAS token is created, use the token. As an example of using the SA
## See also
-- [Types of SAS tokens](../common/storage-sas-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json)
+- [Types of SAS tokens](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
- [API reference](/javascript/api/@azure/storage-blob/)
- [Library source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob)
- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library v12 for .NET. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
-[Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Samples](../common/storage-samples-dotnet.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) | [API reference](/dotnet/api/azure.storage.blobs) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) | [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
+[Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [API reference](/dotnet/api/azure.storage.blobs) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) | [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
## Prerequisites
To authorize with Azure AD, you'll need to use a security principal. Which type
|--|--|--|
| Local machine (developing and testing) | User identity or service principal | [Use the Azure Identity library to get an access token for authorization](../common/identity-library-acquire-token.md) |
| Azure | Managed identity | [Authorize access to blob data with managed identities for Azure resources](authorize-managed-identity.md) |
-| Servers or clients outside of Azure | Service principal | [Authorize access to blob or queue data from a native or web application](../common/storage-auth-aad-app.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json) |
+| Servers or clients outside of Azure | Service principal | [Authorize access to blob or queue data from a native or web application](../common/storage-auth-aad-app.md?toc=/azure/storage/blobs/toc.json) |
If you're testing on a local machine, or your application will run in Azure virtual machines (VMs), function apps, virtual machine scale sets, or in other Azure services, obtain an OAuth token by creating a [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) instance. Use that object to create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient).
The following guides show you how to use each of these classes to build your app
## See also
- [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs)
-- [Samples](../common/storage-samples-dotnet.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+- [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples)
- [API reference](/dotnet/api/azure.storage.blobs)
- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs)
- [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
storage Storage Blob Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-overview.md
Azure Storage events allow applications to react to events, such as the creation
Blob storage events are pushed using [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to subscribers such as Azure Functions, Azure Logic Apps, or even to your own HTTP listener. Event Grid provides reliable event delivery to your applications through rich retry policies and dead-lettering.
-See the [Blob storage events schema](../../event-grid/event-schema-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) article to view the full list of the events that Blob storage supports.
+See the [Blob storage events schema](../../event-grid/event-schema-blob-storage.md?toc=/azure/storage/blobs/toc.json) article to view the full list of the events that Blob storage supports.
Common Blob storage event scenarios include image or video processing, search indexing, or any file-oriented workflow. Asynchronous file uploads are a great fit for events. When changes are infrequent, but your scenario requires immediate responsiveness, event-based architecture can be especially efficient.
If you want to try blob storage events, see any of these quickstart articles:
|If you want to use this tool: |See this article: |
|--|-|
-|Azure portal |[Quickstart: Route Blob storage events to web endpoint with the Azure portal](../../event-grid/blob-event-quickstart-portal.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
-|PowerShell |[Quickstart: Route storage events to web endpoint with PowerShell](./storage-blob-event-quickstart-powershell.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
-|Azure CLI |[Quickstart: Route storage events to web endpoint with Azure CLI](./storage-blob-event-quickstart.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
+|Azure portal |[Quickstart: Route Blob storage events to web endpoint with the Azure portal](../../event-grid/blob-event-quickstart-portal.md?toc=/azure/storage/blobs/toc.json)|
+|PowerShell |[Quickstart: Route storage events to web endpoint with PowerShell](./storage-blob-event-quickstart-powershell.md?toc=/azure/storage/blobs/toc.json)|
+|Azure CLI |[Quickstart: Route storage events to web endpoint with Azure CLI](./storage-blob-event-quickstart.md?toc=/azure/storage/blobs/toc.json)|
To view in-depth examples of reacting to Blob storage events by using Azure functions, see these articles:
Event Grid uses [event subscriptions](../../event-grid/concepts.md#event-subscri
First, subscribe an endpoint to an event. Then, when an event is triggered, the Event Grid service will send data about that event to the endpoint.
-See the [Blob storage events schema](../../event-grid/event-schema-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) article to view:
+See the [Blob storage events schema](../../event-grid/event-schema-blob-storage.md?toc=/azure/storage/blobs/toc.json) article to view:
> [!div class="checklist"] > - A complete list of Blob storage events and how each event is triggered.
Applications that handle Blob storage events should follow a few recommended pra
> [!div class="checklist"] > - As multiple subscriptions can be configured to route events to the same event handler, it is important not to assume events are from a particular source, but to check the topic of the message to ensure that it comes from the storage account you are expecting. > - Similarly, check that the eventType is one you are prepared to process, and do not assume that all events you receive will be the types you expect.
-> - As messages can arrive after some delay, use the etag fields to understand if your information about objects is still up-to-date. To learn how to use the etag field, see [Managing concurrency in Blob storage](./concurrency-manage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#managing-concurrency-in-blob-storage).
+> - As messages can arrive after some delay, use the etag fields to understand if your information about objects is still up-to-date. To learn how to use the etag field, see [Managing concurrency in Blob storage](./concurrency-manage.md?toc=/azure/storage/blobs/toc.json#managing-concurrency-in-blob-storage).
> - As messages can arrive out of order, use the sequencer fields to understand the order of events on any particular object. The sequencer field is a string value that represents the logical sequence of events for any particular blob name. You can use standard string comparison to understand the relative sequence of two events on the same blob name.
> - Storage events guarantee at-least-once delivery to subscribers, which ensures that all messages are delivered. However, due to retries between backend nodes and services or availability of subscriptions, duplicate messages may occur. To learn more about message delivery and retry, see [Event Grid message delivery and retry](../../event-grid/delivery-and-retry.md).
> - Use the blobType field to understand what type of operations are allowed on the blob, and which client library types you should use to access the blob. Valid values are either `BlockBlob` or `PageBlob`.
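As a rough sketch of how those checks might combine in a handler (not from the article; the topic suffix, the in-memory sequencer store, and the function name are all illustrative assumptions):

```javascript
// A hedged sketch of an event handler applying the practices above:
// verify the topic and eventType, then use the sequencer to skip stale events.
const EXPECTED_TOPIC_SUFFIX = "/storageaccounts/mystorageaccount"; // assumed account
const lastSequencerByBlob = new Map(); // placeholder store; use durable state in production

function handleBlobEvent(event) {
  // Don't assume the source: confirm the event comes from the expected account.
  if (!event.topic.toLowerCase().endsWith(EXPECTED_TOPIC_SUFFIX)) return;

  // Only process event types you're prepared to handle.
  if (event.eventType !== "Microsoft.Storage.BlobCreated") return;

  // Events can arrive out of order or duplicated; standard string comparison
  // of the sequencer field orders events for the same blob name.
  const previous = lastSequencerByBlob.get(event.subject);
  if (previous !== undefined && event.data.sequencer <= previous) return; // stale or duplicate
  lastSequencerByBlob.set(event.subject, event.data.sequencer);

  console.log(`Processing ${event.subject} (${event.data.blobType})`);
}
```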
Applications that handle Blob storage events should follow a few recommended pra
Learn more about Event Grid and give Blob storage events a try:
- [About Event Grid](../../event-grid/overview.md)
-- [Blob storage events schema](../../event-grid/event-schema-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Blob storage events schema](../../event-grid/event-schema-blob-storage.md?toc=/azure/storage/blobs/toc.json)
- [Route Blob storage Events to a custom web endpoint](storage-blob-event-quickstart.md)
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
To authorize passwordless access with Azure AD, you'll need to use an Azure cred
|Developer environment|[Visual Studio Code](/azure/developer/javascript/sdk/authentication/local-development-environment-developer-account?tabs=azure-portal%2Csign-in-vscode)|
|Developer environment|[Service principal](../common/identity-library-acquire-token.md)|
|Azure-hosted apps|[Azure-hosted apps setup](./authorize-managed-identity.md)|
-|On-premises|[On-premises app setup](../common/storage-auth-aad-app.md?tabs=dotnet&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
+|On-premises|[On-premises app setup](../common/storage-auth-aad-app.md?tabs=dotnet&toc=/azure/storage/blobs/toc.json)|
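Whichever row applies, the client code stays the same; a minimal sketch (assuming the `@azure/identity` and `@azure/storage-blob` packages and an `AZURE_STORAGE_ACCOUNT_NAME` environment variable):

```javascript
// DefaultAzureCredential picks up the credential configured for the
// current environment (VS Code, service principal, managed identity, ...).
const { DefaultAzureCredential } = require("@azure/identity");
const { BlobServiceClient } = require("@azure/storage-blob");

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME; // assumed env var
const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);
```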
### Set up storage account roles
storage Storage Blob Scalable App Verify Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-verify-metrics.md
In part four of the series, you learn how to:
> - Configure charts in the Azure portal
> - Verify throughput and latency metrics
-[Azure storage metrics](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) uses Azure monitor to provide a unified view into the performance and availability of your storage account.
+[Azure storage metrics](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) uses Azure Monitor to provide a unified view into the performance and availability of your storage account.
## Configure metrics
Charts can have more than one metric assigned to them, but assigning more than o
## Dimensions
-[Dimensions](./monitor-blob-storage-reference.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#metrics-dimensions) are used to look deeper into the charts and get more detailed information. Different metrics have different dimensions. One dimension that is available is the **API name** dimension. This dimension breaks out the chart into each separate API call. The first image below shows an example chart of total transactions for a storage account. The second image shows the same chart but with the API name dimension selected. As you can see, each transaction is listed giving more details into how many calls were made by API name.
+[Dimensions](./monitor-blob-storage-reference.md?toc=/azure/storage/blobs/toc.json#metrics-dimensions) are used to look deeper into the charts and get more detailed information. Different metrics have different dimensions. One dimension that is available is the **API name** dimension. This dimension breaks out the chart into each separate API call. The first image below shows an example chart of total transactions for a storage account. The second image shows the same chart but with the API name dimension selected. As you can see, each transaction is listed giving more details into how many calls were made by API name.
![Storage account metrics - transactions without a dimension](./media/storage-blob-scalable-app-verify-metrics/transactionsnodimensions.png)
storage Storage Blobs Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-introduction.md
Title: Introduction to Blob (object) storage
+ Title: Introduction to Blob (object) Storage
-description: Use Azure Blob storage to store massive amounts of unstructured object data, such as text or binary data. Azure Blob storage is highly scalable and available.
+description: Use Azure Blob Storage to store massive amounts of unstructured object data, such as text or binary data. Azure Blob Storage is highly scalable and available.
Previously updated : 08/18/2022 Last updated : 11/07/2022 +
-# Introduction to Azure Blob storage
+# Introduction to Azure Blob Storage
[!INCLUDE [storage-blob-concepts-include](../../../includes/storage-blob-concepts-include.md)]
-## Blob storage resources
+## Blob Storage resources
-Blob storage offers three types of resources:
+Blob Storage offers three types of resources:
- The **storage account**
- A **container** in the storage account
- A **blob** in a container
The following diagram shows the relationship between these resources.
A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure Storage has an address that includes your unique account name. The combination of the account name and the Blob Storage endpoint forms the base address for the objects in your storage account.
-For example, if your storage account is named *mystorageaccount*, then the default endpoint for Blob storage is:
+For example, if your storage account is named *mystorageaccount*, then the default endpoint for Blob Storage is:
```
http://mystorageaccount.blob.core.windows.net
```
The following table describes the different types of storage accounts that are s
| Block blob | Premium | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates or that use smaller objects or require consistently low storage latency. [Learn more about workloads for premium block blob accounts...](../blobs/storage-blob-block-blob-premium.md) |
| Page blob | Premium | Premium storage account type for page blobs only. [Learn more about workloads for premium page blob accounts...](../blobs/storage-blob-pageblob-overview.md) |
-To learn more about types of storage accounts, see [Azure storage account overview](../common/storage-account-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json). For information about legacy storage account types, see [Legacy storage account types](../common/storage-account-overview.md#legacy-storage-account-types).
+To learn more about types of storage accounts, see [Azure storage account overview](../common/storage-account-overview.md?toc=/azure/storage/blobs/toc.json). For information about legacy storage account types, see [Legacy storage account types](../common/storage-account-overview.md#legacy-storage-account-types).
To learn how to create a storage account, see [Create a storage account](../common/storage-account-create.md).
Follow these rules when naming a blob:
For more information about naming blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata).
-## Move data to Blob storage
+## Move data to Blob Storage
-A number of solutions exist for migrating existing data to Blob storage:
+A number of solutions exist for migrating existing data to Blob Storage:
-- **AzCopy** is an easy-to-use command-line tool for Windows and Linux that copies data to and from Blob storage, across containers, or across storage accounts. For more information about AzCopy, see [Transfer data with the AzCopy v10](../common/storage-use-azcopy-v10.md).
+- **AzCopy** is an easy-to-use command-line tool for Windows and Linux that copies data to and from Blob Storage, across containers, or across storage accounts. For more information about AzCopy, see [Transfer data with the AzCopy v10](../common/storage-use-azcopy-v10.md).
- The **Azure Storage Data Movement library** is a .NET library for moving data between Azure Storage services. The AzCopy utility is built with the Data Movement library. For more information, see the [reference documentation](/dotnet/api/microsoft.azure.storage.datamovement) for the Data Movement library.
-- **Azure Data Factory** supports copying data to and from Blob storage by using the account key, a shared access signature, a service principal, or managed identities for Azure resources. For more information, see [Copy data to or from Azure Blob storage by using Azure Data Factory](../../data-factory/connector-azure-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
-- **Blobfuse** is a virtual file system driver for Azure Blob storage. You can use blobfuse to access your existing block blob data in your Storage account through the Linux file system. For more information, see [How to mount Blob storage as a file system with blobfuse](storage-how-to-mount-container-linux.md).
-- **Azure Data Box** service is available to transfer on-premises data to Blob storage when large datasets or network constraints make uploading data over the wire unrealistic. Depending on your data size, you can request [Azure Data Box Disk](../../databox/data-box-disk-overview.md), [Azure Data Box](../../databox/data-box-overview.md), or [Azure Data Box Heavy](../../databox/data-box-heavy-overview.md) devices from Microsoft. You can then copy your data to those devices and ship them back to Microsoft to be uploaded into Blob storage.
-- The **Azure Import/Export service** provides a way to import or export large amounts of data to and from your storage account using hard drives that you provide. For more information, see [Use the Microsoft Azure Import/Export service to transfer data to Blob storage](../../import-export/storage-import-export-service.md).
+- **Azure Data Factory** supports copying data to and from Blob Storage by using the account key, a shared access signature, a service principal, or managed identities for Azure resources. For more information, see [Copy data to or from Azure Blob Storage by using Azure Data Factory](../../data-factory/connector-azure-blob-storage.md?toc=/azure/storage/blobs/toc.json).
+- **Blobfuse** is a virtual file system driver for Azure Blob Storage. You can use BlobFuse to access your existing block blob data in your Storage account through the Linux file system. For more information, see [What is BlobFuse? - BlobFuse2 (preview)](blobfuse2-what-is.md).
+- **Azure Data Box** service is available to transfer on-premises data to Blob Storage when large datasets or network constraints make uploading data over the wire unrealistic. Depending on your data size, you can request [Azure Data Box Disk](../../databox/data-box-disk-overview.md), [Azure Data Box](../../databox/data-box-overview.md), or [Azure Data Box Heavy](../../databox/data-box-heavy-overview.md) devices from Microsoft. You can then copy your data to those devices and ship them back to Microsoft to be uploaded into Blob Storage.
+- The **Azure Import/Export service** provides a way to import or export large amounts of data to and from your storage account using hard drives that you provide. For more information, see [Use the Microsoft Azure Import/Export service to transfer data to Blob Storage](../../import-export/storage-import-export-service.md).
## Next steps
-- [Create a storage account](../common/storage-account-create.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
-- [Scalability and performance targets for Blob storage](scalability-targets.md)
+- [Create a storage account](../common/storage-account-create.md?toc=/azure/storage/blobs/toc.json)
+- [Scalability and performance targets for Blob Storage](scalability-targets.md)
storage Storage Encrypt Decrypt Blobs Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md
The following example shows how to assign the **Key Vault Crypto Officer** role
1. In the Azure portal, locate your key vault using the main search bar or left navigation.
-2. On the storage account overview page, select **Access control (IAM)** from the left-hand menu.
+2. On the key vault overview page, select **Access control (IAM)** from the left-hand menu.
3. On the **Access control (IAM)** page, select the **Role assignments** tab.
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The following table describes whether a feature is supported in a standard gener
| [Blobfuse](storage-how-to-mount-container-linux.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
-| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-managed keys in a single-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Customer-managed keys in a multi-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
The following table describes whether a feature is supported in a standard gener
| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Logging in Azure Monitor](./monitor-blob-storage.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
-| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Soft delete for blobs](./soft-delete-blob-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; |
-| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Azure Active Directory (AD) security.
The following table describes whether a feature is supported in a premium block
| [Blobfuse](storage-how-to-mount-container-linux.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
-| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=/azure/storage/blobs/toc.json) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-managed keys in a single-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Customer-managed keys in a multi-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
The following table describes whether a feature is supported in a premium block
| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Logging in Azure Monitor](./monitor-blob-storage.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
-| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
+| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
| [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Soft delete for blobs](./soft-delete-blob-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; |
-| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24;| &#x2705; |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24;| &#x2705; |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Azure Active Directory (AD) security.
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-performance-checklist.md
Microsoft has developed a number of proven practices for developing high-performance applications with Blob storage. This checklist identifies key practices that developers can follow to optimize performance. Keep these practices in mind while you are designing your application and throughout the process.
-Azure Storage has scalability and performance targets for capacity, transaction rate, and bandwidth. For more information about Azure Storage scalability targets, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) and [Scalability and performance targets for Blob storage](scalability-targets.md).
+Azure Storage has scalability and performance targets for capacity, transaction rate, and bandwidth. For more information about Azure Storage scalability targets, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/blobs/toc.json) and [Scalability and performance targets for Blob storage](scalability-targets.md).
## Checklist
For more information on Azure Storage error codes, see [Status and error codes](
## Copying and moving blobs
-Azure Storage provides a number of solutions for copying and moving blobs within a storage account, between storage accounts, and between on-premises systems and the cloud. This section describes some of these options in terms of their effects on performance. For information about efficiently transferring data to or from Blob storage, see [Choose an Azure solution for data transfer](../common/storage-choose-data-transfer-solution.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+Azure Storage provides a number of solutions for copying and moving blobs within a storage account, between storage accounts, and between on-premises systems and the cloud. This section describes some of these options in terms of their effects on performance. For information about efficiently transferring data to or from Blob storage, see [Choose an Azure solution for data transfer](../common/storage-choose-data-transfer-solution.md?toc=/azure/storage/blobs/toc.json).
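For example, the copy APIs discussed below operate service-side, so the data never transits the client; a minimal sketch (assuming an already-authorized `BlobServiceClient` from `@azure/storage-blob`; the helper name is illustrative):

```javascript
// A hedged sketch of a server-side blob copy: the data moves between
// storage servers rather than through the machine running this code.
async function copyBlob(blobServiceClient, sourceUrl, containerName, destName) {
  const destBlob = blobServiceClient
    .getContainerClient(containerName)
    .getBlobClient(destName);
  const poller = await destBlob.beginCopyFromURL(sourceUrl);
  await poller.pollUntilDone(); // completes when the service-side copy finishes
}
```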
### Blob copy APIs
Page blobs are appropriate if the application needs to perform random writes on
## Next steps
- [Scalability and performance targets for Blob storage](scalability-targets.md)
-- [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/blobs/toc.json)
- [Status and error codes](/rest/api/storageservices/Status-and-Error-Codes2)
storage Storage Quickstart Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-cli.md
The Azure CLI is Azure's command-line experience for managing Azure resources. Y
You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using the storage account access key. Using Azure AD credentials is recommended. This article shows how to authorize Blob storage operations using Azure AD.
-Azure CLI commands for data operations against Blob storage support the `--auth-mode` parameter, which enables you to specify how to authorize a given operation. Set the `--auth-mode` parameter to `login` to authorize with Azure AD credentials. For more information, see [Authorize access to blob or queue data with Azure CLI](./authorize-data-operations-cli.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+Azure CLI commands for data operations against Blob storage support the `--auth-mode` parameter, which enables you to specify how to authorize a given operation. Set the `--auth-mode` parameter to `login` to authorize with Azure AD credentials. For more information, see [Authorize access to blob or queue data with Azure CLI](./authorize-data-operations-cli.md?toc=/azure/storage/blobs/toc.json).
Only Blob storage data operations support the `--auth-mode` parameter. Management operations, such as creating a resource group or storage account, automatically use Azure AD credentials for authorization.
az storage container create \
> [!IMPORTANT]
> Azure role assignments may take a few minutes to propagate.
-You can also use the storage account key to authorize the operation to create the container. For more information about authorizing data operations with Azure CLI, see [Authorize access to blob or queue data with Azure CLI](./authorize-data-operations-cli.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+You can also use the storage account key to authorize the operation to create the container. For more information about authorizing data operations with Azure CLI, see [Authorize access to blob or queue data with Azure CLI](./authorize-data-operations-cli.md?toc=/azure/storage/blobs/toc.json).
## Upload a blob
In this quickstart, you learned how to transfer files between a local file syste
> [Manage block blobs with Azure CLI](blob-cli.md)
> [!div class="nextstepaction"]
-> [Azure CLI samples for Blob storage](./storage-samples-blobs-cli.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+> [Azure CLI samples for Blob storage](./storage-samples-blobs-cli.md?toc=/azure/storage/blobs/toc.json)
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
Additional resources:
- [API reference documentation](/dotnet/api/azure.storage.blobs)
- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs)
- [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs)
-- [Samples](../common/storage-samples-dotnet.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+- [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples)
## Prerequisites
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
ms.devlang: java
Get started with the Azure Blob Storage client library for Java to manage blobs and containers. Follow steps to install the package and try out example code for basic tasks.
-[API reference documentation](/jav?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+[API reference documentation](/jav?toc=/azure/storage/blobs/toc.json#blob-samples)
## Prerequisites
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
In this quickstart, you learn to manage blobs by using Node.js. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data.
[API reference](/javascript/api/@azure/storage-blob) |
-[Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+[Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples)
## Prerequisites
storage Storage Quickstart Blobs Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-php.md
In this quickstart, you learned how to transfer files between a local disk and A
> [!div class="nextstepaction"] > [PHP Developer Center](https://azure.microsoft.com/develop/php/)
-For more information about the Storage Explorer and Blobs, see [Manage Azure Blob storage resources with Storage Explorer](../../vs-azure-tools-storage-explorer-blobs.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+For more information about the Storage Explorer and Blobs, see [Manage Azure Blob storage resources with Storage Explorer](../../vs-azure-tools-storage-explorer-blobs.md?toc=/azure/storage/blobs/toc.json).
storage Storage Quickstart Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-powershell.md
In this quickstart, you transferred files between a local file system and Azure
> [Manage block blobs with PowerShell](blob-powershell.md)
> [!div class="nextstepaction"]
-> [Azure PowerShell samples for Azure Blob storage](storage-samples-blobs-powershell.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+> [Azure PowerShell samples for Azure Blob storage](storage-samples-blobs-powershell.md?toc=/azure/storage/blobs/toc.json)
### Microsoft Azure PowerShell Storage cmdlets reference
In this quickstart, you transferred files between a local file system and Azure
### Microsoft Azure Storage Explorer
-- [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
+- [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?toc=/azure/storage/blobs/toc.json) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Get started with the Azure Blob Storage client library for Python to manage blobs and containers. Follow steps to install the package and try out example code for basic tasks in an interactive console app.
-[API reference documentation](/python/api/azure-storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob) | [Package (PyPi)](https://pypi.org/project/azure-storage-blob/) | [Samples](../common/storage-samples-python.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+[API reference documentation](/python/api/azure-storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob) | [Package (PyPi)](https://pypi.org/project/azure-storage-blob/) | [Samples](../common/storage-samples-python.md?toc=/azure/storage/blobs/toc.json#blob-samples)
## Prerequisites
storage Storage Quickstart Blobs Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-ruby.md
In this quickstart, you learned how to transfer files between Azure Blob Storage
> [!div class="nextstepaction"] > [Storage account overview](../common/storage-account-overview.md)
-For more information about the Storage Explorer and Blobs, see [Manage Azure Blob Storage resources with Storage Explorer](../../vs-azure-tools-storage-explorer-blobs.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+For more information about the Storage Explorer and Blobs, see [Manage Azure Blob Storage resources with Storage Explorer](../../vs-azure-tools-storage-explorer-blobs.md?toc=/azure/storage/blobs/toc.json).
storage Storage Samples Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-powershell.md
The following table includes links to PowerShell script samples that create and
| Script | Description |
|||
|**Storage accounts**||
-| [Create a storage account and retrieve/rotate the access keys](../scripts/storage-common-rotate-account-keys-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Storage account and retrieves and rotates one of its access keys. |
-| [Migrate Blobs across storage accounts using AzCopy on Windows](/previous-versions/azure/storage/storage-common-transfer-between-storage-accounts?toc=%2fpowershell%2fmodule%2ftoc.json)| Migrate blobs across Azure Storage accounts using AzCopy on Windows. |
+| [Create a storage account and retrieve/rotate the access keys](../scripts/storage-common-rotate-account-keys-powershell.md?toc=/powershell/module/toc.json)| Creates an Azure Storage account and retrieves and rotates one of its access keys. |
+| [Migrate Blobs across storage accounts using AzCopy on Windows](/previous-versions/azure/storage/storage-common-transfer-between-storage-accounts?toc=/powershell/module/toc.json)| Migrate blobs across Azure Storage accounts using AzCopy on Windows. |
|**Blob storage**||
-| [Calculate the total size of a Blob storage container](../scripts/storage-blobs-container-calculate-size-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Calculates the total size of all the blobs in a container. |
-| [Calculate the size of a Blob storage container for billing purposes](../scripts/storage-blobs-container-calculate-billing-size-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Calculates the size of a container in Blob storage for the purpose of estimating billing costs. |
-| [Delete containers with a specific prefix](../scripts/storage-blobs-container-delete-by-prefix-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Deletes containers starting with a specified string. |
+| [Calculate the total size of a Blob storage container](../scripts/storage-blobs-container-calculate-size-powershell.md?toc=/powershell/module/toc.json) | Calculates the total size of all the blobs in a container. |
+| [Calculate the size of a Blob storage container for billing purposes](../scripts/storage-blobs-container-calculate-billing-size-powershell.md?toc=/powershell/module/toc.json) | Calculates the size of a container in Blob storage for the purpose of estimating billing costs. |
+| [Delete containers with a specific prefix](../scripts/storage-blobs-container-delete-by-prefix-powershell.md?toc=/powershell/module/toc.json) | Deletes containers starting with a specified string. |
storage Manage Storage Analytics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-logs.md
The following example shows how you can download the log data for the queue serv
azcopy copy 'https://mystorageaccount.blob.core.windows.net/$logs/queue' 'C:\Logs\Storage' --include-path '2014/05/20/09;2014/05/20/10;2014/05/20/11' --recursive
```
-To learn more about how to download specific files, see [Download blobs from Azure Blob storage by using AzCopy v10](./storage-use-azcopy-blobs-download.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+To learn more about how to download specific files, see [Download blobs from Azure Blob storage by using AzCopy v10](./storage-use-azcopy-blobs-download.md?toc=/azure/storage/blobs/toc.json).
When you have downloaded your log data, you can view the log entries in the files. These log files use a delimited text format that many log reading tools are able to parse (for more information, see the guide [Monitoring, Diagnosing, and Troubleshooting Microsoft Azure Storage](storage-monitoring-diagnosing-troubleshooting.md)). Different tools have different facilities for formatting, filtering, sorting, and searching the contents of your log files. For more information about the Storage Logging log file format and content, see [Storage Analytics Log Format](/rest/api/storageservices/storage-analytics-log-format) and [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
storage Manage Storage Analytics Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-metrics.md
[Azure Storage Analytics](storage-analytics.md) provides metrics for all storage services for blobs, queues, and tables. You can use the [Azure portal](https://portal.azure.com) to configure which metrics are recorded for your account, and configure charts that provide visual representations of your metrics data. This article shows you how to enable and manage metrics. To learn how to enable logs, see [Enable and manage Azure Storage Analytics logs (classic)](manage-storage-analytics-logs.md).
-We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json). It is a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. It does not require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
+We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=/azure/azure-monitor/toc.json). It is a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. It does not require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
> [!NOTE] > There are costs associated with examining monitoring data in the Azure portal. For more information, see [Storage Analytics](storage-analytics.md).
storage Storage Account Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-move.md
The following table lists these features along with guidance for adding them to
### Move data to the new storage account
-AzCopy is the preferred tool to move your data over. It's optimized for performance. One way that it's faster, is that data is copied directly between storage servers, so AzCopy doesn't use the network bandwidth of your computer. Use AzCopy at the command line or as part of a custom script. See [Get started with AzCopy](/azure/storage/common/storage-use-azcopy-v10?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+AzCopy is the preferred tool to move your data over. It's optimized for performance. One way that it's faster is that data is copied directly between storage servers, so AzCopy doesn't use the network bandwidth of your computer. Use AzCopy at the command line or as part of a custom script. See [Get started with AzCopy](/azure/storage/common/storage-use-azcopy-v10?toc=/azure/storage/blobs/toc.json).
You can also use Azure Data Factory to move your data over. It provides an intuitive user interface. To use Azure Data Factory, see any of these links:
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
To monitor data access patterns for Blob storage, you need to enable the hourly
To get a good approximation of your data consumption and access pattern, we recommend you choose a retention period for the metrics that is representative of your regular usage and extrapolate. One option is to retain the metrics data for seven days and collect the data every week, for analysis at the end of the month. Another option is to retain the metrics data for the last 30 days and collect and analyze the data at the end of the 30-day period.
-For details on enabling, collecting, and viewing metrics data, see [Storage analytics metrics](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+For details on enabling, collecting, and viewing metrics data, see [Storage analytics metrics](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
> [!NOTE]
> Storing, accessing, and downloading analytics data is also charged just like regular user data.
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
Previously updated : 09/29/2022 Last updated : 11/07/2022
The following table compares Files, Blobs, Disks, Queues, Tables, and Azure NetA
| **Azure Files** |Offers fully managed cloud file shares that you can access from anywhere via the industry standard Server Message Block (SMB) protocol.<br><br>You can mount Azure file shares from cloud or on-premises deployments of Windows, Linux, and macOS. | You want to "lift and shift" an application to the cloud that already uses the native file system APIs to share data between it and other applications running in Azure.<br/><br/>You want to replace or supplement on-premises file servers or NAS devices.<br><br> You want to store development and debugging tools that need to be accessed from many virtual machines. |
| **Azure Blobs** | Allows unstructured data to be stored and accessed at a massive scale in block blobs.<br/><br/>Also supports [Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) for enterprise big data analytics solutions. | You want your application to support streaming and random access scenarios.<br/><br/>You want to be able to access application data from anywhere.<br/><br/>You want to build an enterprise data lake on Azure and perform big data analytics. |
| **Azure Disks** | Allows data to be persistently stored and accessed from an attached virtual hard disk. | You want to "lift and shift" applications that use native file system APIs to read and write data to persistent disks.<br/><br/>You want to store data that is not required to be accessed from outside the virtual machine to which the disk is attached. |
-| **Azure Queues** | Allows for asynchronous message queueing between application components. | You want to decouple application components and use asynchronous messaging to communicate between them.<br><br>For guidance around when to use Queue storage versus Service Bus queues, see [Storage queues and Service Bus queues - compared and contrasted](../../service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md). |
-| **Azure Tables** | Allows you to store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. | You want to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. <br/><br/>For guidance around when to use Table storage versus Azure Cosmos DB for Table, see [Developing with Azure Cosmos DB for Table and Azure Table storage](../../cosmos-db/table-support.md). |
+| **Azure Queues** | Allows for asynchronous message queueing between application components. | You want to decouple application components and use asynchronous messaging to communicate between them.<br><br>For guidance around when to use Queue Storage versus Service Bus queues, see [Storage queues and Service Bus queues - compared and contrasted](../../service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md). |
+| **Azure Tables** | Allows you to store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. | You want to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. <br/><br/>For guidance around when to use Table Storage versus Azure Cosmos DB for Table, see [Developing with Azure Cosmos DB for Table and Azure Table Storage](../../cosmos-db/table-support.md). |
| **Azure NetApp Files** | Offers a fully managed, highly available, enterprise-grade NAS service that can handle the most demanding, high-performance, low-latency workloads requiring advanced data management capabilities. | You have a difficult-to-migrate workload such as POSIX-compliant Linux and Windows applications, SAP HANA, databases, high-performance compute (HPC) infrastructure and apps, and enterprise web applications. <br></br> You require support for multiple file-storage protocols in a single service, including NFSv3, NFSv4.1, and SMB3.1.x, which enables a wide range of application lift-and-shift scenarios with no need for code changes. |
-## Blob storage
+## Blob Storage
-Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data, such as text or binary data.
+Azure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data, such as text or binary data.
-Blob storage is ideal for:
+Blob Storage is ideal for:
- Serving images or documents directly to a browser. - Storing files for distributed access.
Blob storage is ideal for:
- Storing data for backup and restore, disaster recovery, and archiving. - Storing data for analysis by an on-premises or Azure-hosted service.
-Objects in Blob storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client applications can access blobs via URLs, the [Azure Storage REST API](/rest/api/storageservices/blob-service-rest-api), [Azure PowerShell](/powershell/module/azure.storage), [Azure CLI](/cli/azure/storage), or an Azure Storage client library. The storage client libraries are available for multiple languages, including [.NET](/dotnet/api/overview/azure/storage), [Java](/java/api/overview/azure/storage), [Node.js](https://azure.github.io/azure-storage-node), [Python](https://azure-storage.readthedocs.io/), [PHP](https://azure.github.io/azure-storage-php/), and [Ruby](https://azure.github.io/azure-storage-ruby).
+Objects in Blob Storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client applications can access blobs via URLs, the [Azure Storage REST API](/rest/api/storageservices/blob-service-rest-api), [Azure PowerShell](/powershell/module/azure.storage), [Azure CLI](/cli/azure/storage), or an Azure Storage client library. The storage client libraries are available for multiple languages, including [.NET](/dotnet/api/overview/azure/storage), [Java](/java/api/overview/azure/storage), [Node.js](https://azure.github.io/azure-storage-node), [Python](https://azure-storage.readthedocs.io/), [PHP](https://azure.github.io/azure-storage-php/), and [Ruby](https://azure.github.io/azure-storage-ruby).
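For instance, here is a minimal Azure CLI sketch (hypothetical account, container, and file names) that uploads a blob and prints the HTTPS URL clients can use to reach it.

```azurecli
# Upload a local file as a block blob, authorizing with your Azure AD identity
az storage blob upload --account-name mystorageaccount --container-name mycontainer \
    --name hello.txt --file ./hello.txt --auth-mode login

# Print the blob's HTTPS URL
az storage blob url --account-name mystorageaccount --container-name mycontainer \
    --name hello.txt --output tsv
```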
-For more information about Blob storage, see [Introduction to Blob storage](../blobs/storage-blobs-introduction.md).
+Clients can also securely connect to Blob Storage by using SSH File Transfer Protocol (SFTP) and mount Blob Storage containers by using the Network File System (NFS) 3.0 protocol.
+
+For more information about Blob Storage, see [Introduction to Blob Storage](../blobs/storage-blobs-introduction.md).
## Azure Files
For more information about Azure Files, see [Introduction to Azure Files](../fil
Some SMB features are not applicable to the cloud. For more information, see [Features not supported by the Azure File service](/rest/api/storageservices/features-not-supported-by-the-azure-file-service).
-## Queue storage
+## Queue Storage
The Azure Queue service is used to store and retrieve messages. Queue messages can be up to 64 KB in size, and a queue can contain millions of messages. Queues are generally used to store lists of messages to be processed asynchronously.
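As an illustrative sketch of this pattern (queue, message, and account names are placeholders), you can create a queue, enqueue a message, and then dequeue it with the Azure CLI:

```azurecli
# Create a queue, then add and retrieve a message
az storage queue create --name thumbnail-jobs --account-name mystorageaccount
az storage message put --queue-name thumbnail-jobs --content "resize-photo-0001" --account-name mystorageaccount
az storage message get --queue-name thumbnail-jobs --account-name mystorageaccount
```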
For example, say you want your customers to be able to upload pictures, and you
For more information about Azure Queues, see [Introduction to Queues](../queues/storage-queues-introduction.md).
-## Table storage
+## Table Storage
-Azure Table storage is now part of Azure Cosmos DB. To see Azure Table storage documentation, see the [Azure Table storage overview](../tables/table-storage-overview.md). In addition to the existing Azure Table storage service, there is a new Azure Cosmos DB for Table offering that provides throughput-optimized tables, global distribution, and automatic secondary indexes. To learn more and try out the new premium experience, see [Azure Cosmos DB for Table](../../cosmos-db/table-introduction.md).
+Azure Table Storage is now part of Azure Cosmos DB. For Azure Table Storage documentation, see the [Azure Table Storage overview](../tables/table-storage-overview.md). In addition to the existing Azure Table Storage service, there is a new Azure Cosmos DB for Table offering that provides throughput-optimized tables, global distribution, and automatic secondary indexes. To learn more and try out the new premium experience, see [Azure Cosmos DB for Table](../../cosmos-db/table-introduction.md).
-For more information about Table storage, see [Overview of Azure Table storage](../tables/table-storage-overview.md).
+For more information about Table Storage, see [Overview of Azure Table Storage](../tables/table-storage-overview.md).
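For a quick sketch of the key/attribute model (table, account, and property names below are hypothetical), you can create a table and insert a schemaless entity identified by its partition key and row key:

```azurecli
# Create a table and insert one entity; properties beyond PartitionKey and RowKey are free-form
az storage table create --name devices --account-name mystorageaccount
az storage entity insert --table-name devices --account-name mystorageaccount \
    --entity PartitionKey=sensors RowKey=device-0001 Model=ThermoX FirmwareVersion=1.2
```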
-## Disk storage
+## Disk Storage
An Azure managed disk is a virtual hard disk (VHD). You can think of it like a physical disk in an on-premises server, but virtualized. Azure managed disks are stored as page blobs, which are random IO storage objects in Azure. We call a managed disk 'managed' because it is an abstraction over page blobs, blob containers, and Azure storage accounts. With managed disks, all you have to do is provision the disk, and Azure takes care of the rest.
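As a minimal sketch (resource group, disk, and VM names are placeholders), provisioning and attaching a managed disk takes two CLI calls:

```azurecli
# Provision a 128 GiB premium managed disk, then attach it to an existing VM
az disk create --resource-group myResourceGroup --name myDataDisk --size-gb 128 --sku Premium_LRS
az vm disk attach --resource-group myResourceGroup --vm-name myVM --name myDataDisk
```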
Azure NetApp Files data traffic is inherently secure by design, as it does not p
## Redundancy
-To ensure that your data is durable, Azure Storage stores multiple copies of your data. When you set up your storage account, you select a redundancy option. For more information, see [Azure Storage redundancy](./storage-redundancy.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+To ensure that your data is durable, Azure Storage stores multiple copies of your data. When you set up your storage account, you select a redundancy option. For more information, see [Azure Storage redundancy](./storage-redundancy.md?toc=/azure/storage/blobs/toc.json).
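For example, here is a sketch of selecting geo-redundant storage when the account is created; the names and region are placeholders, and other SKUs such as Standard_LRS or Standard_ZRS follow the same pattern.

```azurecli
# Create a storage account with geo-redundant storage (GRS)
az storage account create --name mystorageaccount --resource-group myResourceGroup \
    --location eastus --sku Standard_GRS
```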
Azure NetApp Files provides locally redundant storage with [99.99% availability](https://azure.microsoft.com/support/legal/sla/netapp/v1_1/).
storage Storage Monitoring Diagnosing Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-monitoring-diagnosing-troubleshooting.md
If you are familiar with Windows performance monitoring, you can think of Storag
You can choose which hourly metrics you want to display in the [Azure portal](https://portal.azure.com) and configure rules that notify administrators by email whenever an hourly metric exceeds a particular threshold. For more information, see [Receive Alert Notifications](../../azure-monitor/alerts/alerts-overview.md).
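As one possible sketch, an alert rule on the account's Transactions metric can be created from the Azure CLI. The resource ID and threshold are placeholders, and you would typically also attach an action group to deliver the email notification.

```azurecli
# Alert when total transactions over an hour exceed 100,000
az monitor metrics alert create --name high-transactions --resource-group myResourceGroup \
    --scopes <storage-account-resource-id> \
    --condition "total Transactions > 100000" \
    --window-size 1h --evaluation-frequency 1h
```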
-We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json) (preview). It is a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. It does not require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
+We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=/azure/azure-monitor/toc.json) (preview). It is a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of the performance, capacity, and availability of your Azure Storage services. It does not require you to enable or configure anything, and you can immediately view these metrics from the predefined interactive charts and other visualizations included.
The storage service collects metrics on a best-effort basis, but it may not record every storage operation.
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
The following recommendations for using shared access signatures can help mitiga
- **Know when not to use a SAS.** Sometimes the risks associated with a particular operation against your storage account outweigh the benefits of using a SAS. For such operations, create a middle-tier service that writes to your storage account after performing business rule validation, authentication, and auditing. Also, sometimes it's simpler to manage access in other ways. For example, if you want to make all blobs in a container publicly readable, you can make the container Public, rather than providing a SAS to every client for access. -- **Use Azure Monitor and Azure Storage logs to monitor your application.** Authorization failures can occur because of an outage in your SAS provider service. They can also occur from an inadvertent removal of a stored access policy. You can use Azure Monitor and storage analytics logging to observe any spike in these types of authorization failures. For more information, see [Azure Storage metrics in Azure Monitor](../blobs/monitor-blob-storage.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json) and [Azure Storage Analytics logging](storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+- **Use Azure Monitor and Azure Storage logs to monitor your application.** Authorization failures can occur because of an outage in your SAS provider service. They can also occur from an inadvertent removal of a stored access policy. You can use Azure Monitor and storage analytics logging to observe any spike in these types of authorization failures. For more information, see [Azure Storage metrics in Azure Monitor](../blobs/monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) and [Azure Storage Analytics logging](storage-analytics-logging.md?toc=/azure/storage/blobs/toc.json).
- **Configure a SAS expiration policy for the storage account.** Best practices recommend that you limit the interval for a SAS in case it is compromised. By setting a SAS expiration policy for your storage accounts, you can provide a recommended upper expiration limit when a user creates a service SAS or an account SAS. For more information, see [Create an expiration policy for shared access signatures](sas-expiration-policy.md).
storage Storage Service Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-service-encryption.md
The following table compares key management options for Azure Storage encryption
| Key control | Microsoft | Customer | Customer | | Key scope | Account (default), container, or blob | Account (default), container, or blob | N/A |
-<sup>1</sup> For information about creating an account that supports using customer-managed keys with Queue storage, see [Create an account that supports customer-managed keys for queues](account-encryption-key-create.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json).<br />
-<sup>2</sup> For information about creating an account that supports using customer-managed keys with Table storage, see [Create an account that supports customer-managed keys for tables](account-encryption-key-create.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json).
+<sup>1</sup> For information about creating an account that supports using customer-managed keys with Queue storage, see [Create an account that supports customer-managed keys for queues](account-encryption-key-create.md?toc=/azure/storage/queues/toc.json).<br />
+<sup>2</sup> For information about creating an account that supports using customer-managed keys with Table storage, see [Create an account that supports customer-managed keys for tables](account-encryption-key-create.md?toc=/azure/storage/tables/toc.json).
> [!NOTE]
> Microsoft-managed keys are rotated appropriately per compliance requirements. If you have specific key rotation requirements, Microsoft recommends that you move to customer-managed keys so that you can manage and audit the rotation yourself.
storage Storage Solution Periodic Data Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-periodic-data-transfer.md
The following table summarizes the differences in key capabilities.
## Next steps -- [Transfer data with AzCopy](./storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json).
+- [Transfer data with AzCopy](./storage-use-azcopy-v10.md?toc=/azure/storage/tables/toc.json).
- [More information on data transfer with Storage REST APIs](/dotnet/api/overview/azure/storage). - Understand how to: - [Transfer data with Data Box Gateway](../../databox-gateway/data-box-gateway-deploy-add-shares.md).
storage Storage Use Azcopy Blobs Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-download.md
azcopy copy 'https://mystorageaccount.dfs.core.windows.net/mycontainer/myTextFil
```
> [!NOTE]
-> If you are using a SAS token to authorize access to blob data, then append snapshot **DateTime** after the SAS token. For example: `'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z'`.
+> If you are using a SAS token to authorize access to blob data, then append snapshot **DateTime** after the SAS token. For example: `'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z'`.
## Download with optional flags
storage Storage Use Azcopy Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-files.md
You can use the [azcopy make](storage-ref-azcopy-make.md) command to create a fi
**Example** ```azcopy
-azcopy make 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D'
+azcopy make 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D'
``` For detailed reference docs, see [azcopy make](storage-ref-azcopy-make.md).
This section contains the following examples:
**Example** ```azcopy
-azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --preserve-smb-permissions=true --preserve-smb-info=true
+azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D' --preserve-smb-permissions=true --preserve-smb-info=true
``` You can also upload a file by using a wildcard symbol (*) anywhere in the file path or file name. For example: `'C:\myDirectory\*.txt'`, or `C:\my*\*.txt`.
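For example, a sketch of a wildcard upload (the account, share, and SAS token are placeholders):

```azcopy
azcopy copy 'C:\myDirectory\*.txt' 'https://mystorageaccount.file.core.windows.net/myfileshare?<SAS-token>'
```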
This example copies a directory (and all of the files in that directory) to a fi
**Example** ```azcopy
-azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
+azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
``` To copy to a directory within the file share, just specify the name of that directory in your command string.
To copy to a directory within the file share, just specify the name of that dire
**Example** ```azcopy
-azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
+azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
``` If you specify the name of a directory that doesn't exist in the file share, AzCopy creates a new directory by that name.
You can upload the contents of a directory without copying the containing direct
**Example** ```azcopy
-azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --preserve-smb-permissions=true --preserve-smb-info=true
+azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D' --preserve-smb-permissions=true --preserve-smb-info=true
``` > [!NOTE]
This section contains the following examples:
**Example** ```azcopy
-azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' 'C:\myDirectory\myTextFile.txt' --preserve-smb-permissions=true --preserve-smb-info=true
+azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D' 'C:\myDirectory\myTextFile.txt' --preserve-smb-permissions=true --preserve-smb-info=true
``` ### Download a directory
azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFi
**Example** ```azcopy
-azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' 'C:\myDirectory' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
+azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D' 'C:\myDirectory' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
``` This example results in a directory named `C:\myDirectory\myFileShareDirectory` that contains all of the downloaded files.
You can download the contents of a directory without copying the containing dire
**Example** ```azcopy
-azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory/*?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' 'C:\myDirectory' --preserve-smb-permissions=true --preserve-smb-info=true
+azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory/*?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D' 'C:\myDirectory' --preserve-smb-permissions=true --preserve-smb-info=true
``` > [!NOTE]
You can download a specific version of a file or directory by referencing the **
**Example (Download a file)** ```azcopy
-azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory\myTextFile.txt' --preserve-smb-permissions=true --preserve-smb-info=true
+azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory\myTextFile.txt' --preserve-smb-permissions=true --preserve-smb-info=true
``` **Example (Download a directory)** ```azcopy
-azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
+azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
``` ## Copy files between storage accounts
storage Storage Use Azcopy Migrate On Premises Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-migrate-on-premises-data.md
These examples assume that your folder is named `myFolder`, your storage account
# [Linux](#tab/linux) ```bash
-azcopy sync "/mnt/myfiles" "https://mystorageaccount.blob.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-05-30T06:57:40Z&st=2019-05-29T22:57:40Z&spr=https&sig=BXHippZxxx54hQn%2F4tBY%2BE2JHGCTRv52445rtoyqgFBUo%3D" --recursive=true
+azcopy sync "/mnt/myfiles" "https://mystorageaccount.blob.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-05-30T06:57:40Z&st=2019-05-29T22:57:40Z&spr=https&sig=BXHippZxxx54hQn/4tBY%2BE2JHGCTRv52445rtoyqgFBUo%3D" --recursive=true
``` # [Windows](#tab/windows)
To validate that the scheduled task/cron job runs correctly, create new files in
To learn more about ways to move on-premises data to Azure Storage and vice versa, follow this link: -- [Move data to and from Azure Storage](./storage-choose-data-transfer-solution.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+- [Move data to and from Azure Storage](./storage-choose-data-transfer-solution.md?toc=/azure/storage/files/toc.json).
For more information about AzCopy, see any of these articles:
storage Storage Use Azcopy Optimize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-optimize.md
Use the following command to run a performance benchmark test.
**Example** ```azcopy
-azcopy benchmark 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D'
+azcopy benchmark 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B/3Eykf/JLs%3D'
``` > [!TIP]
storage Storage Use Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-emulator.md
New-AzStorageContainerSASToken -Name CONTAINER_NAME -Permission rwdl -ExpiryTime
The resulting shared access signature URI for the new container should be similar to: ```
-http://127.0.0.1:10000/devstoreaccount1/sascontainer?sv=2012-02-12&se=2015-07-08T00%3A12%3A08Z&sr=c&sp=wl&sig=t%2BbzU9%2B7ry4okULN9S0wst%2F8MCUhTjrHyV9rDNLSe8g%3Dsss
+http://127.0.0.1:10000/devstoreaccount1/sascontainer?sv=2012-02-12&se=2015-07-08T00%3A12%3A08Z&sr=c&sp=wl&sig=t%2BbzU9%2B7ry4okULN9S0wst/8MCUhTjrHyV9rDNLSe8g%3Dsss
``` The shared access signature created with this example is valid for one day. The signature grants full access (read, write, delete, list) to blobs within the container.
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
We strongly recommend that you read [Planning for an Azure Files deployment](../
1. An **Azure file share** in the same region that you want to deploy Azure File Sync. For more information, see: - [Region availability](file-sync-planning.md#azure-file-sync-region-availability) for Azure File Sync.
- - [Create a file share](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) for a step-by-step description of how to create a file share.
+ - [Create a file share](../files/storage-how-to-create-file-share.md?toc=/azure/storage/filesync/toc.json) for a step-by-step description of how to create a file share.
2. The following **storage account** settings must be enabled to allow Azure File Sync access to the storage account: - **SMB security settings** must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings). - **Allow storage account key access** must be **Enabled**. To check this setting, navigate to your storage account and select Configuration under the Settings section.
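As a sketch of these prerequisites from the Azure CLI (resource group, account, and share names are placeholders), you can create the Azure file share and confirm that shared key access hasn't been disabled; a result of `true` or `null` means key access is allowed.

```azurecli
# Create an Azure file share in the target storage account
az storage share-rm create --resource-group myResourceGroup \
    --storage-account mystorageaccount --name myfileshare --quota 1024

# Check the "Allow storage account key access" setting (true or null = enabled)
az storage account show --name mystorageaccount --resource-group myResourceGroup \
    --query allowSharedKeyAccess
```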
A sync group defines the sync topology for a set of files. Endpoints within a sy
A cloud endpoint is a pointer to an Azure file share. All server endpoints will sync with a cloud endpoint, making the cloud endpoint the hub. The storage account for the Azure file share must be located in the same region as the Storage Sync Service. The entirety of the Azure file share will be synced, with one exception: A special folder, comparable to the hidden "System Volume Information" folder on an NTFS volume, will be provisioned. This directory is called ".SystemShareInformation". It contains important sync metadata that will not sync to other endpoints. Do not use or delete it!
> [!IMPORTANT]
-> You can make changes to any cloud endpoint or server endpoint in the sync group and have your files synced to the other endpoints in the sync group. If you make a change to the cloud endpoint (Azure file share) directly, changes first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint only once every 24 hours. For more information, see [Azure Files frequently asked questions](../files/storage-files-faq.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#afs-change-detection).
+> You can make changes to any cloud endpoint or server endpoint in the sync group and have your files synced to the other endpoints in the sync group. If you make a change to the cloud endpoint (Azure file share) directly, changes first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint only once every 24 hours. For more information, see [Azure Files frequently asked questions](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json#afs-change-detection).
The administrator creating the cloud endpoint must be a member of the management role **Owner** for the storage account that contains the Azure file share the cloud endpoint is pointing to. This can be configured under **Access Control (IAM)** in the Azure portal for the storage account.
storage File Sync Disaster Recovery Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-disaster-recovery-best-practices.md
Protecting your actual data is a key component of a disaster recovery solution.
### Back up your data in the cloud
-You should use [Azure Backup](../../backup/azure-file-share-backup-overview.md?toc=%2fazure%2fstorage%2ffile-sync%2ftoc.json) as your cloud backup solution. Azure Backup handles backup scheduling, retention, and restores, amongst other things. If you prefer, you could manually take [share snapshots](../files/storage-snapshots-files.md?toc=/azure/storage/file-sync/toc.json) and configure your own scheduling and retention solution but, this isn't ideal. Alternatively, you can use third-party solutions to directly back up your Azure file shares.
+You should use [Azure Backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/file-sync/toc.json) as your cloud backup solution. Azure Backup handles backup scheduling, retention, and restores, amongst other things. If you prefer, you could manually take [share snapshots](../files/storage-snapshots-files.md?toc=/azure/storage/file-sync/toc.json) and configure your own scheduling and retention solution, but this isn't ideal. Alternatively, you can use third-party solutions to directly back up your Azure file shares.
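For example, a manual share snapshot is a single CLI call (share and account names are placeholders), though Azure Backup remains the recommended way to schedule and retain snapshots.

```azurecli
# Take a point-in-time, read-only snapshot of the share
az storage share snapshot --name myfileshare --account-name mystorageaccount
```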
If a disaster happens, you can restore from a share snapshot, which is a point-in-time, read-only copy of your file share. Since these snapshots are read-only, they won't be affected by ransomware. For large datasets, where a full share restore can take a long time, you can enable direct user access to the snapshot so that users can copy the data they need to their local drive while the restore completes.
Once a failover occurs, server endpoints will switch over to sync with the cloud
## Next steps
-[Learn about Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=%2fazure%2fstorage%2ffile-sync%2ftoc.json)
+[Learn about Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/file-sync/toc.json)
storage File Sync How To Manage Tiered Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-how-to-manage-tiered-files.md
# How to manage tiered files
-This article provides guidance for users who have questions related to managing tiered files. For conceptual questions regarding cloud tiering, please see [Azure Files FAQ](../files/storage-files-faq.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+This article provides guidance for users who have questions related to managing tiered files. For conceptual questions regarding cloud tiering, please see [Azure Files FAQ](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json).
## How to check if your files are being tiered
Invoke-StorageSyncFileRecall -Path <path-to-to-your-server-endpoint> -ThreadCoun
## Next steps -- [Frequently asked questions (FAQ) about Azure Files](../files/storage-files-faq.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json)
+- [Frequently asked questions (FAQ) about Azure Files](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json)
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-endpoints.md
Azure Files and Azure File Sync provide two main types of endpoints for accessin
For both Azure Files and Azure File Sync, the Azure management objects, the storage account and the Storage Sync Service respectively, control both the public and private endpoints. The storage account is a management construct that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. The Storage Sync Service is a management construct that represents registered servers, which are Windows file servers with an established trust relationship with Azure File Sync, and sync groups, which define the topology of the sync relationship.
-This article focuses on how to configure the networking endpoints for both Azure Files and Azure File Sync. To learn more about how to configure networking endpoints for accessing Azure file shares directly, rather than caching on-premises with Azure File Sync, see [Configuring Azure Files network endpoints](../files/storage-files-networking-endpoints.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+This article focuses on how to configure the networking endpoints for both Azure Files and Azure File Sync. To learn more about how to configure networking endpoints for accessing Azure file shares directly, rather than caching on-premises with Azure File Sync, see [Configuring Azure Files network endpoints](../files/storage-files-networking-endpoints.md?toc=/azure/storage/filesync/toc.json).
We recommend reading [Azure File Sync networking considerations](file-sync-networking-overview.md) prior to reading this how-to guide. ## Prerequisites This article assumes that: - You have an Azure subscription. If you don't already have a subscription, then create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- You have already created an Azure file share in a storage account which you would like to connect to from on-premises. To learn how to create an Azure file share, see [Create an Azure file share](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+- You have already created an Azure file share in a storage account which you would like to connect to from on-premises. To learn how to create an Azure file share, see [Create an Azure file share](../files/storage-how-to-create-file-share.md?toc=/azure/storage/filesync/toc.json).
- You allow domain traffic to the following endpoints, see [Azure service endpoints](../file-sync/file-sync-firewall-and-proxy.md#firewall): Additionally:
When you are creating a private endpoint for an Azure resource, the following re
# [Portal](#tab/azure-portal) [!INCLUDE [storage-files-networking-endpoints-private-portal](../../../includes/storage-files-networking-endpoints-private-portal.md)]
-If you have a virtual machine inside of your virtual network, or you've configured DNS forwarding as described in [Configuring DNS forwarding for Azure Files](../files/storage-files-networking-dns.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json), you can test that your private endpoint has been set up correctly by running the following commands from PowerShell, the command line, or the terminal (works for Windows, Linux, or macOS). You must replace `<storage-account-name>` with the appropriate storage account name:
+If you have a virtual machine inside of your virtual network, or you've configured DNS forwarding as described in [Configuring DNS forwarding for Azure Files](../files/storage-files-networking-dns.md?toc=/azure/storage/filesync/toc.json), you can test that your private endpoint has been set up correctly by running the following commands from PowerShell, the command line, or the terminal (works for Windows, Linux, or macOS). You must replace `<storage-account-name>` with the appropriate storage account name:
```console
nslookup <storage-account-name>.file.core.windows.net
Aliases: storageaccount.file.core.windows.net
# [PowerShell](#tab/azure-powershell) [!INCLUDE [storage-files-networking-endpoints-private-powershell](../../../includes/storage-files-networking-endpoints-private-powershell.md)]
-If you have a virtual machine inside of your virtual network, or you've configured DNS forwarding as described in [Configuring DNS forwarding for Azure Files](../files/storage-files-networking-dns.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json), you can test that your private endpoint has been set up correctly with the following commands:
+If you have a virtual machine inside of your virtual network, or you've configured DNS forwarding as described in [Configuring DNS forwarding for Azure Files](../files/storage-files-networking-dns.md?toc=/azure/storage/filesync/toc.json), you can test that your private endpoint has been set up correctly with the following commands:
```powershell
$storageAccountHostName = [System.Uri]::new($storageAccount.PrimaryEndpoints.file) | `
IP4Address : 192.168.0.5
# [Azure CLI](#tab/azure-cli) [!INCLUDE [storage-files-networking-endpoints-private-cli](../../../includes/storage-files-networking-endpoints-private-cli.md)]
-If you have a virtual machine inside of your virtual network, or you've configured DNS forwarding as described in [Configuring DNS forwarding for Azure Files](../files/storage-files-networking-dns.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json), you can test that your private endpoint has been set up correctly with the following commands:
+If you have a virtual machine inside of your virtual network, or you've configured DNS forwarding as described in [Configuring DNS forwarding for Azure Files](../files/storage-files-networking-dns.md?toc=/azure/storage/filesync/toc.json), you can test that your private endpoint has been set up correctly with the following commands:
```azurecli
httpEndpoint=$(az storage account show \
storage File Sync Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-overview.md
You can connect to an Azure file share in two ways:
- Accessing the share directly via the SMB or FileREST protocols. This access pattern is primarily employed to eliminate as many on-premises servers as possible. - Creating a cache of the Azure file share on an on-premises server (or Azure VM) with Azure File Sync, and accessing the file share's data from the on-premises server with your protocol of choice (SMB, NFS, FTPS, etc.) for your use case. This access pattern is handy because it combines the best of both on-premises performance and cloud scale and serverless attachable services, such as Azure Backup.
-This article focuses on how to configure networking when your use case calls for using Azure File Sync to cache files on-premises rather than directly mounting the Azure file share over SMB. For more information about networking considerations for an Azure Files deployment, see [Azure Files networking considerations](../files/storage-files-networking-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+This article focuses on how to configure networking when your use case calls for using Azure File Sync to cache files on-premises rather than directly mounting the Azure file share over SMB. For more information about networking considerations for an Azure Files deployment, see [Azure Files networking considerations](../files/storage-files-networking-overview.md?toc=/azure/storage/filesync/toc.json).
Networking configuration for Azure File Sync spans two different Azure objects: a Storage Sync Service and an Azure storage account. A storage account is a management construct that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. A Storage Sync Service is a management construct that represents registered servers, which are Windows file servers with an established trust relationship with Azure File Sync, and sync groups, which define the topology of the sync relationship.
To set up and use Azure Files and Azure File Sync with an on-premises Windows fi
- The FileREST protocol, which is an HTTPS-based protocol used for accessing your Azure file share. Because the FileREST protocol uses standard HTTPS for data transfer, only port 443 must be accessible outbound. Azure File Sync does not use the SMB protocol to transfer data between your on-premises Windows Servers and your Azure file share. - The Azure File Sync sync protocol, which is an HTTPS-based protocol used for exchanging synchronization knowledge, i.e. the version information about the files and folders between endpoints in your environment. This protocol is also used to exchange metadata about the files and folders in your environment, such as timestamps and access control lists (ACLs).
-Because Azure Files offers direct SMB protocol access on Azure file shares, customers often wonder if they need to configure special networking to mount the Azure file shares using SMB for the Azure File Sync agent to access. This is not required and is discouraged except in administrator scenarios, due to the lack of quick change detection on changes made directly to the Azure file share (changes may not be discovered for more than 24 hours depending on the size and number of items in the Azure file share). If you want to use the Azure file share directly, i.e. not use Azure File Sync to cache on-premises, see [Azure Files networking overview](../files/storage-files-networking-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+Because Azure Files offers direct SMB protocol access on Azure file shares, customers often wonder if they need to configure special networking to mount the Azure file shares using SMB for the Azure File Sync agent to access. This is not required and is discouraged except in administrator scenarios, due to the lack of quick change detection on changes made directly to the Azure file share (changes may not be discovered for more than 24 hours depending on the size and number of items in the Azure file share). If you want to use the Azure file share directly, i.e. not use Azure File Sync to cache on-premises, see [Azure Files networking overview](../files/storage-files-networking-overview.md?toc=/azure/storage/filesync/toc.json).
Although Azure File Sync does not require any special networking configuration, some customers may wish to configure advanced networking settings to enable the following scenarios:
Azure Files and Azure File Sync support the following mechanisms to tunnel traff
- [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md): A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an alternate location (such as on-premises) over the internet. An Azure VPN Gateway is an Azure resource that can be deployed in a resource group along side of a storage account or other Azure resources. Because Azure File Sync is meant to be used with an on-premises Windows file server, you would normally use a [Site-to-Site (S2S) VPN](../../vpn-gateway/design.md#s2smulti), although it is technically possible to use a [Point-to-Site (P2S) VPN](../../vpn-gateway/point-to-site-about.md).
- Site-to-Site (S2S) VPN connections connect your Azure virtual network and your organization's on-premises network. A S2S VPN connection enables you to configure a VPN connection once, for a VPN server or device hosted on your organization's network, rather than doing for every client device that needs to access your Azure file share. To simplify the deployment of a S2S VPN connection, see [Configure a Site-to-Site (S2S) VPN for use with Azure Files](../files/storage-files-configure-s2s-vpn.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+ Site-to-Site (S2S) VPN connections connect your Azure virtual network and your organization's on-premises network. An S2S VPN connection enables you to configure a VPN connection once, for a VPN server or device hosted on your organization's network, rather than doing so for every client device that needs to access your Azure file share. To simplify the deployment of an S2S VPN connection, see [Configure a Site-to-Site (S2S) VPN for use with Azure Files](../files/storage-files-configure-s2s-vpn.md?toc=/azure/storage/filesync/toc.json).
- [ExpressRoute](../../expressroute/expressroute-introduction.md), which enables you to create a defined route (private connection) between Azure and your on-premises network that doesn't traverse the internet. Because ExpressRoute provides a dedicated path between your on-premises datacenter and Azure, ExpressRoute may be useful when network performance is a key consideration. ExpressRoute is also a good option when your organization's policy or regulatory requirements require a deterministic path to your resources in the cloud.
This reflects the fact that the Azure Files and Azure File Sync can expose both
- Modifying the hosts file on your clients to make the fully qualified domain names for your storage accounts and Storage Sync Services resolve to the desired private IP addresses. This is strongly discouraged for production environments, since you will need to make these changes to every client that needs to access your private endpoints. Changes to your private endpoints/resources (deletions, modifications, etc.) will not be automatically handled. - Creating DNS zones on your on-premises servers for `privatelink.file.core.windows.net` and `privatelink.afs.azure.net` with A records for your Azure resources. This has the advantage that clients in your on-premises environment will be able to automatically resolve Azure resources without needing to configure each client, however this solution is similarly brittle to modifying the hosts file because changes are not reflected. Although this solution is brittle, it may be the best choice for some environments.-- Forward the `core.windows.net` and `afs.azure.net` zones from your on-premises DNS servers to your Azure private DNS zone. The Azure private DNS host can be reached through a special IP address (`168.63.129.16`) that is only accessible inside virtual networks that are linked to the Azure private DNS zone. To work around this limitation, you can run additional DNS servers within your virtual network that will forward `core.windows.net` and `afs.azure.net` to the equivalent Azure private DNS zones. To simplify this set up, we have provided PowerShell cmdlets that will auto-deploy DNS servers in your Azure virtual network and configure them as desired. To learn how to set up DNS forwarding, see [Configuring DNS with Azure Files](../files/storage-files-networking-dns.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+- Forward the `core.windows.net` and `afs.azure.net` zones from your on-premises DNS servers to your Azure private DNS zone. The Azure private DNS host can be reached through a special IP address (`168.63.129.16`) that is only accessible inside virtual networks that are linked to the Azure private DNS zone. To work around this limitation, you can run additional DNS servers within your virtual network that will forward `core.windows.net` and `afs.azure.net` to the equivalent Azure private DNS zones. To simplify this setup, we have provided PowerShell cmdlets that will auto-deploy DNS servers in your Azure virtual network and configure them as desired. To learn how to set up DNS forwarding, see [Configuring DNS with Azure Files](../files/storage-files-networking-dns.md?toc=/azure/storage/filesync/toc.json).
## Encryption in transit

Connections made from the Azure File Sync agent to your Azure file share or Storage Sync Service are always encrypted. Although Azure storage accounts have a setting to disable requiring encryption in transit for communications to Azure Files (and the other Azure storage services that are managed out of the storage account), disabling this setting will not affect Azure File Sync's encryption when communicating with Azure Files. By default, all Azure storage accounts have encryption in transit enabled.
-For more information about encryption in transit, see [requiring secure transfer in Azure storage](../common/storage-require-secure-transfer.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+For more information about encryption in transit, see [requiring secure transfer in Azure storage](../common/storage-require-secure-transfer.md?toc=/azure/storage/files/toc.json).
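As a sketch, you can confirm or re-enable the secure transfer requirement on a storage account from the Azure CLI (the names are placeholders):

```azurecli
# Require HTTPS (and encrypted SMB) for all requests to the account
az storage account update --name mystorageaccount --resource-group myResourceGroup --https-only true
```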
## See also - [Planning for an Azure File Sync deployment](file-sync-planning.md)
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
Before you can create a sync group in a Storage Sync Service, you must first reg
A sync group contains one cloud endpoint, or Azure file share, and at least one server endpoint. The server endpoint object contains the settings that configure the **cloud tiering** capability, which provides the caching capability of Azure File Sync. In order to sync with an Azure file share, the storage account containing the Azure file share must be in the same Azure region as the Storage Sync Service.
> [!Important]
-> You can make changes to the namespace of any cloud endpoint or server endpoint in the sync group and have your files synced to the other endpoints in the sync group. If you make a change to the cloud endpoint (Azure file share) directly, changes first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint only once every 24 hours. For more information, see [Azure Files frequently asked questions](../files/storage-files-faq.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#afs-change-detection).
+> You can make changes to the namespace of any cloud endpoint or server endpoint in the sync group and have your files synced to the other endpoints in the sync group. If you make a change to the cloud endpoint (Azure file share) directly, changes first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint only once every 24 hours. For more information, see [Azure Files frequently asked questions](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json#afs-change-detection).
### Consider the count of Storage Sync Services needed

A previous section discusses the core resource to configure for Azure File Sync: a *Storage Sync Service*. A Windows Server can only be registered to one Storage Sync Service, so it is often best to deploy a single Storage Sync Service and register all servers on it. Create multiple Storage Sync Services only if you have:
* distinct sets of servers that must never exchange data with one another. In this case, you want to design the system to prevent certain sets of servers from syncing with an Azure file share that is already in use as a cloud endpoint in a sync group in a different Storage Sync Service. Another way to look at this is that Windows Servers registered to different Storage Sync Services cannot sync with the same Azure file share.
-* a need to have more registered servers or sync groups than a single Storage Sync Service can support. Review the [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#azure-file-sync-scale-targets) for more details.
+* a need to have more registered servers or sync groups than a single Storage Sync Service can support. Review the [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-scale-targets) for more details.
## Plan for balanced sync topologies

Before you deploy any resources, it is important to plan what you will sync from a local server, and with which Azure file share. Making a plan will help you determine how many storage accounts, Azure file shares, and sync resources you will need. These considerations are still relevant, even if your data doesn't currently reside on a Windows Server or on the server you want to use long term. The [migration section](#migration) can help determine appropriate migration paths for your situation.
In the following table, we have provided both the size of the namespace as well
| 50 | 23.3 | 16 | 64 (initial sync)/ 32 (typical churn) |
| 100* | 46.6 | 32 | 128 (initial sync)/ 32 (typical churn) |
-\*Syncing more than 100 million files & directories is not recommended at this time. This is a soft limit based on our tested thresholds. For more information, see [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#azure-file-sync-scale-targets).
+\*Syncing more than 100 million files & directories is not recommended at this time. This is a soft limit based on our tested thresholds. For more information, see [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-scale-targets).
> [!TIP]
> Initial synchronization of a namespace is an intensive operation, and we recommend allocating more memory until initial synchronization is complete. This isn't required, but it may speed up initial sync.
Changes made to the Azure file share by using the Azure portal or SMB are not im
To detect changes to the Azure file share, Azure File Sync has a scheduled job called a change detection job. A change detection job enumerates every file in the file share, and then compares it to the sync version for that file. When the change detection job determines that files have changed, Azure File Sync initiates a sync session. The change detection job is initiated every 24 hours. Because the change detection job works by enumerating every file in the Azure file share, change detection takes longer in larger namespaces than in smaller namespaces. For large namespaces, it might take longer than 24 hours to determine which files have changed.
-For more information, see [Azure File Sync performance metrics](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#azure-file-sync-performance-metrics) and [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#azure-file-sync-scale-targets)
+For more information, see [Azure File Sync performance metrics](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-performance-metrics) and [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-scale-targets)
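If you don't want to wait for the scheduled job, you can also trigger change detection on a scoped path yourself. The following is a minimal sketch using the `Invoke-AzStorageSyncChangeDetection` cmdlet from the Az.StorageSync module; the resource names and directory path are placeholders, and the exact parameter set may vary by module version:

```azurepowershell
# Trigger change detection on one directory of the cloud endpoint rather than
# waiting for the scheduled 24-hour enumeration of the entire share.
Invoke-AzStorageSyncChangeDetection `
    -ResourceGroupName "<resourceGroupName>" `
    -StorageSyncServiceName "<storageSyncServiceName>" `
    -SyncGroupName "<syncGroupName>" `
    -Name "<cloudEndpointName>" `
    -DirectoryPath "Data" `
    -Recursive
```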
## Identity

Azure File Sync works with your standard AD-based identity without any special setup beyond setting up sync. When you are using Azure File Sync, the general expectation is that most accesses go through the Azure File Sync caching servers, rather than through the Azure file share. Since the server endpoints are located on Windows Server, and Windows Server has supported AD and Windows-style ACLs for a long time, nothing is needed beyond ensuring the Windows file servers registered with the Storage Sync Service are domain joined. Azure File Sync will store ACLs on the files in the Azure file share, and will replicate them to all server endpoints.
-Even though changes made directly to the Azure file share will take longer to sync to the server endpoints in the sync group, you may also want to ensure that you can enforce your AD permissions on your file share directly in the cloud as well. To do this, you must domain join your storage account to your on-premises AD, just like how your Windows file servers are domain joined. To learn more about domain joining your storage account to a customer-owned Active Directory, see [Azure Files Active Directory overview](../files/storage-files-active-directory-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+Even though changes made directly to the Azure file share will take longer to sync to the server endpoints in the sync group, you may also want to ensure that you can enforce your AD permissions on your file share directly in the cloud as well. To do this, you must domain join your storage account to your on-premises AD, just like how your Windows file servers are domain joined. To learn more about domain joining your storage account to a customer-owned Active Directory, see [Azure Files Active Directory overview](../files/storage-files-active-directory-overview.md?toc=/azure/storage/filesync/toc.json).
> [!Important]
> Domain joining your storage account to Active Directory is not required to successfully deploy Azure File Sync. This is a strictly optional step that allows the Azure file share to enforce on-premises ACLs when users mount the Azure file share directly.
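If you do take this optional step, the AzFilesHybrid PowerShell module can perform the domain join for you. The following is a sketch, assuming the module is installed and you're signed in with credentials that can create a computer account in AD; all names and the OU are placeholders:

```azurepowershell
# Register the storage account as a computer account in on-premises AD so the
# Azure file share can enforce on-premises ACLs when mounted directly.
Import-Module AzFilesHybrid
Join-AzStorageAccount `
    -ResourceGroupName "<resourceGroupName>" `
    -StorageAccountName "<storageAccountName>" `
    -DomainAccountType "ComputerAccount" `
    -OrganizationalUnitDistinguishedName "OU=FileServers,DC=contoso,DC=com"
```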
The primary reason to disable encryption in transit for the storage account is t
We strongly recommend ensuring encryption of data in transit is enabled.
-For more information about encryption in transit, see [requiring secure transfer in Azure storage](../common/storage-require-secure-transfer.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+For more information about encryption in transit, see [requiring secure transfer in Azure storage](../common/storage-require-secure-transfer.md?toc=/azure/storage/files/toc.json).
### Azure file share encryption at rest [!INCLUDE [storage-files-encryption-at-rest](../../../includes/storage-files-encryption-at-rest.md)]
To request access for these regions, follow the process in [this document](/trou
## Migration

If you have an existing Windows Server 2012 R2 or newer file server, Azure File Sync can be installed directly in place, without the need to move data over to a new server. If you are planning to migrate to a new Windows file server as a part of adopting Azure File Sync, or if your data is currently located on Network Attached Storage (NAS), there are several possible migration approaches to use Azure File Sync with this data. Which migration approach you should choose depends on where your data currently resides.
-Check out the [Azure File Sync and Azure file share migration overview](../files/storage-files-migration-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) article where you can find detailed guidance for your scenario.
+Check out the [Azure File Sync and Azure file share migration overview](../files/storage-files-migration-overview.md?toc=/azure/storage/filesync/toc.json) article where you can find detailed guidance for your scenario.
## Antivirus

Because antivirus works by scanning files for known malicious code, an antivirus product might cause the recall of tiered files, resulting in high egress charges. Tiered files have the secure Windows attribute FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS set, and we recommend consulting your software vendor to learn how to configure their solution to skip reading files with this attribute set (many do it automatically).
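To see which files a scanner might otherwise recall, you can enumerate tiered files by checking for that attribute. A minimal PowerShell sketch; the server endpoint path is a placeholder, and 0x00400000 is the documented value of FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS:

```powershell
# List files under a server endpoint that carry FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS,
# that is, tiered files whose contents live in the Azure file share.
$recallOnDataAccess = 0x00400000
Get-ChildItem -Path "D:\ServerEndpoint" -Recurse -File |
    Where-Object { ([int]$_.Attributes) -band $recallOnDataAccess } |
    Select-Object FullName
```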
Microsoft's in-house antivirus solutions, Windows Defender and System Center End
> Antivirus vendors can check compatibility between their product and Azure File Sync using the [Azure File Sync Antivirus Compatibility Test Suite](https://www.microsoft.com/download/details.aspx?id=58322), which is available for download on the Microsoft Download Center.

## Backup
-If cloud tiering is enabled, solutions that directly back up the server endpoint or a VM on which the server endpoint is located should not be used. Cloud tiering causes only a subset of your data to be stored on the server endpoint, with the full dataset residing in your Azure file share. Depending on the backup solution used, tiered files will either be skipped and not backed up (because they have the FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS attribute set), or they will be recalled to disk, resulting in high egress charges. We recommend using a cloud backup solution to back up the Azure file share directly. For more information, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json) or contact your backup provider to see if they support backing up Azure file shares.
+If cloud tiering is enabled, solutions that directly back up the server endpoint or a VM on which the server endpoint is located should not be used. Cloud tiering causes only a subset of your data to be stored on the server endpoint, with the full dataset residing in your Azure file share. Depending on the backup solution used, tiered files will either be skipped and not backed up (because they have the FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS attribute set), or they will be recalled to disk, resulting in high egress charges. We recommend using a cloud backup solution to back up the Azure file share directly. For more information, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json) or contact your backup provider to see if they support backing up Azure file shares.
If you prefer to use an on-premises backup solution, backups should be performed on a server in the sync group that has cloud tiering disabled. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will be synced to all endpoints in the sync group and existing files will be replaced with the version restored from backup. Volume-level restores will not replace newer file versions in the Azure file share or other server endpoints.
These increases in both the number of recalls and the amount of data being recal
## Next steps

* [Consider firewall and proxy settings](file-sync-firewall-and-proxy.md)
-* [Deploy Azure Files](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json)
+* [Deploy Azure Files](../files/storage-how-to-create-file-share.md?toc=/azure/storage/filesync/toc.json)
* [Deploy Azure File Sync](file-sync-deployment-guide.md)
* [Monitor Azure File Sync](file-sync-monitoring.md)
storage File Sync Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-resource-move.md
A regional failover can be started by Microsoft in a catastrophic event that wil
## See also

-- [Overview of Azure file share and sync migration guides](../files/storage-files-migration-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json)
+- [Overview of Azure file share and sync migration guides](../files/storage-files-migration-overview.md?toc=/azure/storage/filesync/toc.json)
- [Troubleshoot Azure File Sync](file-sync-troubleshoot.md)
- [Planning for an Azure File Sync deployment](file-sync-planning.md)
storage File Sync Troubleshoot Cloud Tiering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-cloud-tiering.md
$orphanFilesRemoved.OrphanedTieredFiles > DeletedOrphanFiles.txt
This option doesn't require removing the server endpoint but requires sufficient disk space to copy the full files locally.
-1. [Mount](../files/storage-how-to-use-files-windows.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) the Azure file share on the Windows Server that has orphaned tiered files.
+1. [Mount](../files/storage-how-to-use-files-windows.md?toc=/azure/storage/filesync/toc.json) the Azure file share on the Windows Server that has orphaned tiered files.
2. Run the following PowerShell commands to list orphaned tiered files:

   ```powershell
   Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
   ```
storage File Sync Troubleshoot Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md
To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (loc
| 0x80070020 | -2147024864 | ERROR_SHARING_VIOLATION | The file cannot be synced because it's in use. The file will be synced when it's no longer in use. | No action required. |
| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file was changed during sync, so it needs to be synced again. | No action required. |
| 0x80070017 | -2147024873 | ERROR_CRC | The file cannot be synced due to CRC error. This error can occur if a tiered file was not recalled prior to deleting a server endpoint or if the file is corrupt. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) to remove tiered files that are orphaned. If the error continues to occur after removing orphaned tiered files, run [chkdsk](/windows-server/administration/windows-commands/chkdsk) on the volume. |
-| 0x80c80200 | -2134375936 | ECS_E_SYNC_CONFLICT_NAME_EXISTS | The file cannot be synced because the maximum number of conflict files has been reached. Azure File Sync supports 100 conflict files per file. To learn more about file conflicts, see Azure File Sync [FAQ](../files/storage-files-faq.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#afs-conflict-resolution). | To resolve this issue, reduce the number of conflict files. The file will sync once the number of conflict files is less than 100. |
+| 0x80c80200 | -2134375936 | ECS_E_SYNC_CONFLICT_NAME_EXISTS | The file cannot be synced because the maximum number of conflict files has been reached. Azure File Sync supports 100 conflict files per file. To learn more about file conflicts, see Azure File Sync [FAQ](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json#afs-conflict-resolution). | To resolve this issue, reduce the number of conflict files. The file will sync once the number of conflict files is less than 100. |
| 0x80c8027d | -2134375811 | ECS_E_DIRECTORY_RENAME_FAILED | Rename of a directory cannot be synced because files or folders within the directory have open handles. | No action required. The rename of the directory will be synced once all open file handles within the directory are closed. |
| 0x800700de | -2147024674 | ERROR_BAD_FILE_TYPE | The tiered file on the server is not accessible because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
This error occurs if the Azure File Sync agent version installed on the server i
| **Error string** | ECS_E_NOT_ENOUGH_REMOTE_STORAGE |
| **Remediation required** | Yes |
-Sync sessions fail with either of these errors when the Azure file share storage limit has been reached, which can happen if a quota is applied for an Azure file share or if the usage exceeds the limits for an Azure file share. For more information, see the [current limits for an Azure file share](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+Sync sessions fail with either of these errors when the Azure file share storage limit has been reached, which can happen if a quota is applied for an Azure file share or if the usage exceeds the limits for an Azure file share. For more information, see the [current limits for an Azure file share](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json).
1. Navigate to the sync group within the Storage Sync Service.
2. Select the cloud endpoint within the sync group.
Sync sessions fail with either of these errors when the Azure file share storage
5. Select **Files** to view the list of file shares.
6. Click the three dots at the end of the row for the Azure file share referenced by the cloud endpoint.
-7. Verify that the **Usage** is below the **Quota**. Note unless an alternate quota has been specified, the quota will match the [maximum size of the Azure file share](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+7. Verify that the **Usage** is below the **Quota**. Note that unless an alternate quota has been specified, the quota will match the [maximum size of the Azure file share](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json).
![A screenshot of the Azure file share properties.](media/storage-sync-files-troubleshoot/file-share-limit-reached-1.png)
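As an alternative to checking in the portal, you can compare usage and quota from PowerShell. A sketch using `Get-AzRmStorageShare` with its `-GetShareUsage` switch; the resource names are placeholders:

```azurepowershell
# Retrieve the share's current usage in bytes alongside its quota (in GiB).
Get-AzRmStorageShare `
    -ResourceGroupName "<resourceGroupName>" `
    -StorageAccountName "<storageAccountName>" `
    -Name "<fileShareName>" `
    -GetShareUsage |
    Select-Object Name, QuotaGiB, ShareUsageBytes
```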
storage File Sync Troubleshoot Sync Group Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-group-management.md
This error occurs because Azure File Sync does not support server endpoints on v
<a id="-2134376345"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134376345 or 0x80C80067)**

This error occurs if the limit of server endpoints per server is reached. Azure File Sync currently supports up to 30 server endpoints per server. For more information, see
-[Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#azure-file-sync-scale-targets).
+[Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-scale-targets).
<a id="-2134376427"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134376427 or 0x80c80015)**

This error occurs if another server endpoint is already syncing the server endpoint path specified. Azure File Sync does not support multiple server endpoints syncing the same directory or volume.
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
The status of items that appear in this table may change over time as support co
| [Private endpoints](storage-files-networking-overview.md#private-endpoints) | ✔️ |
| Subdirectory mounts| ✔️ |
| [Grant network access to specific Azure virtual networks](storage-files-networking-endpoints.md#restrict-access-to-the-public-endpoint-to-specific-virtual-networks)| ✔️ |
-| [Grant network access to specific IP addresses](../common/storage-network-security.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json#grant-access-from-an-internet-ip-range)| ⛔ |
+| [Grant network access to specific IP addresses](../common/storage-network-security.md?toc=/azure/storage/files/toc.json#grant-access-from-an-internet-ip-range)| ⛔ |
| [Premium tier](storage-files-planning.md#storage-tiers) | ✔️ |
| [Standard tiers (Hot, Cool, and Transaction optimized)](storage-files-planning.md#storage-tiers)| ⛔ |
| [POSIX-permissions](https://en.wikipedia.org/wiki/File-system_permissions#Notation_of_traditional_Unix_permissions)| ✔️ |
The status of items that appear in this table may change over time as support co
| [Azure file share backups](../../backup/azure-file-share-backup-overview.md)| ⛔ |
| [Azure file share snapshots](storage-snapshots-files.md)| ⛔ |
| [GRS or GZRS redundancy types](storage-files-planning.md#redundancy)| ⛔ |
-| [AzCopy](../common/storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json)| ⛔ |
+| [AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json)| ⛔ |
| Azure Storage Explorer| ⛔ |
| Support for more than 16 groups| ⛔ |
The following workloads have known issues:
## Next steps

- [Create an NFS file share](storage-files-how-to-create-nfs-shares.md)
-- [Compare access to Azure Files, Blob Storage, and Azure NetApp Files with NFS](../common/nfs-comparison.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json)
+- [Compare access to Azure Files, Blob Storage, and Azure NetApp Files with NFS](../common/nfs-comparison.md?toc=/azure/storage/files/toc.json)
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 09/21/2022 Last updated : 11/07/2022
Azure Files is updated regularly to offer new features and enhancements. This ar
## What's new in 2022
-### 2022 quarter 3 (July, August, September)
-#### Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities on Azure Files (public preview)
-This [preview release](storage-files-identity-auth-azure-active-directory-enable.md) builds on top of [FSLogix profile container support](../../virtual-desktop/create-profile-container-azure-ad.md) released in December 2021 and expands it to support more use cases with an easy, two-step portal experience (SMB only). Azure AD Kerberos allows Kerberos authentication for hybrid identities in Azure AD, reducing the need for customers to configure another domain service and allowing customers to authenticate with Azure Files without the need for line-of-sight to domain controllers. While the initial support is limited to hybrid user identities, which are identities created in AD DS and synced to Azure AD, it's a significant milestone as we simplify identity-based authentication for Azure Files customers. [Read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-leverage-azure-active-directory-kerberos-with/ba-p/3612111).
+### 2022 quarter 4 (October, November, December)
+#### Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities on Azure Files is generally available
+This [feature](storage-files-identity-auth-azure-active-directory-enable.md) builds on top of [FSLogix profile container support](../../virtual-desktop/create-profile-container-azure-ad.md) released in December 2021 and expands it to support more use cases (SMB only). Hybrid identities, which are user identities created in Active Directory Domain Services (AD DS) and synced to Azure AD, can mount and access Azure file shares without the need for line-of-sight to an Active Directory domain controller. While the initial support is limited to hybrid identities, it's a significant milestone as we simplify identity-based authentication for Azure Files customers.
### 2022 quarter 2 (April, May, June)

#### SUSE Linux support for SAP HANA System Replication (HSR) and Pacemaker
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 10/31/2022 Last updated : 11/07/2022
-# Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files (preview)
+# Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files
[!INCLUDE [storage-files-aad-auth-include](../../../includes/storage-files-aad-auth-include.md)]

This article focuses on enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring Windows access control lists (ACLs) and permissions might require line-of-sight to the domain controller.
-> [!IMPORTANT]
-> Azure Files authentication with Azure Active Directory Kerberos is currently in public preview.
For more information on supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md). For more information about Azure AD Kerberos, see [Deep dive: How Azure AD Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889).

## Applies to
Azure Files authentication with Azure AD Kerberos is available in Azure public c
## Enable Azure AD Kerberos authentication for hybrid user accounts
-To enable Azure AD Kerberos authentication on Azure Files for hybrid user accounts, use the Azure portal.
+You can enable Azure AD Kerberos authentication on Azure Files for hybrid user accounts using the Azure portal, PowerShell, or Azure CLI.
+
+# [Portal](#tab/azure-portal)
+
+To enable Azure AD Kerberos authentication using the [Azure portal](https://portal.azure.com), follow these steps.
1. Sign in to the Azure portal and select the storage account you want to enable Azure AD Kerberos authentication for.
1. Under **Data storage**, select **File shares**.
To enable Azure AD Kerberos authentication on Azure Files for hybrid user accoun
:::image type="content" source="media/storage-files-identity-auth-azure-active-directory-enable/configure-active-directory.png" alt-text="Screenshot of the Azure portal showing file share settings for a storage account. Active Directory configuration settings are selected." lightbox="media/storage-files-identity-auth-azure-active-directory-enable/configure-active-directory.png" border="true":::
-1. Under **Azure AD Kerberos (preview)**, select **Set up**.
+1. Under **Azure AD Kerberos**, select **Set up**.
1. Select the **Azure AD Kerberos** checkbox.
- :::image type="content" source="media/storage-files-identity-auth-azure-active-directory-enable/setup-azure-ad-kerberos.png" alt-text="Screenshot of the Azure portal showing Active Directory configuration settings for a storage account. Azure AD Kerberos is selected." lightbox="media/storage-files-identity-auth-azure-active-directory-enable/setup-azure-ad-kerberos.png" border="true":::
+ :::image type="content" source="media/storage-files-identity-auth-azure-active-directory-enable/enable-azure-ad-kerberos.png" alt-text="Screenshot of the Azure portal showing Active Directory configuration settings for a storage account. Azure AD Kerberos is selected." lightbox="media/storage-files-identity-auth-azure-active-directory-enable/enable-azure-ad-kerberos.png" border="true":::
-1. Optional: If you want to configure directory and file-level permissions through Windows File Explorer, then you also need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or by running the following PowerShell cmdlets from an on-premises AD-joined client:
+1. **Optional:** If you want to configure directory and file-level permissions through Windows File Explorer, then you also need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or by running the following Active Directory PowerShell cmdlets from an on-premises AD-joined client:
```PowerShell
$domainInformation = Get-ADDomain
$domainGuid = $domainInformation.ObjectGUID.ToString()
$domainName = $domainInformation.DnsRoot
```
To enable Azure AD Kerberos authentication on Azure Files for hybrid user accoun
1. Select **Save**.
+# [Azure PowerShell](#tab/azure-powershell)
+
+To enable Azure AD Kerberos using Azure PowerShell, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName <resourceGroupName> -StorageAccountName <storageAccountName> -EnableAzureActiveDirectoryKerberosForFile $true
+```
+
+**Optional:** If you want to configure directory and file-level permissions through Windows File Explorer, then you also need to specify the domain name and domain GUID for your on-premises AD. If you'd prefer to configure directory and file-level permissions using icacls, you can skip this step. However, if you want to use icacls, the client will need line-of-sight to the on-premises AD.
+
+You can get this information from your domain admin or by running the following Active Directory PowerShell cmdlets from an on-premises AD-joined client:
+
+```PowerShell
+$domainInformation = Get-ADDomain
+$domainGuid = $domainInformation.ObjectGUID.ToString()
+$domainName = $domainInformation.DnsRoot
+```
+
+To specify the domain name and domain GUID for your on-premises AD, run the following Azure PowerShell command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName <resourceGroupName> -StorageAccountName <storageAccountName> -EnableAzureActiveDirectoryKerberosForFile $true -ActiveDirectoryDomainName $domainName -ActiveDirectoryDomainGuid $domainGuid
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To enable Azure AD Kerberos using Azure CLI, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurecli
+az storage account update --name <storageaccountname> --resource-group <resourcegroupname> --enable-files-aadkerb true
+```
+
+**Optional:** If you want to configure directory and file-level permissions through Windows File Explorer, then you also need to specify the domain name and domain GUID for your on-premises AD. If you'd prefer to configure directory and file-level permissions using icacls, you can skip this step. However, if you want to use icacls, the client will need line-of-sight to the on-premises AD.
+
+You can get this information from your domain admin or by running the following Active Directory PowerShell cmdlets from an on-premises AD-joined client:
+
+```PowerShell
+$domainInformation = Get-ADDomain
+$domainGuid = $domainInformation.ObjectGUID.ToString()
+$domainName = $domainInformation.DnsRoot
+```
+
+To specify the domain name and domain GUID for your on-premises AD, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurecli
+az storage account update --name <storageAccountName> --resource-group <resourceGroupName> --enable-files-aadkerb true --domain-name <domainName> --domain-guid <domainGuid>
+```
+++

> [!WARNING]
> If you've previously enabled Azure AD Kerberos authentication through manual limited preview steps to store FSLogix profiles on Azure Files for Azure AD-joined VMs, the password for the storage account's service principal is set to expire every six months. Once the password expires, users won't be able to get Kerberos tickets to the file share. To mitigate this, see "Error - Service principal password has expired in Azure AD" under [Potential errors when enabling Azure AD Kerberos authentication for hybrid users](storage-troubleshoot-windows-file-connection-problems.md#potential-errors-when-enabling-azure-ad-kerberos-authentication-for-hybrid-users).
After enabling Azure AD Kerberos authentication, you'll need to explicitly grant
Azure AD Kerberos doesn't support using MFA to access Azure file shares configured with Azure AD Kerberos. You must exclude the Azure AD app representing your storage account from your MFA conditional access policies if they apply to all apps. The storage account app should have the same name as the storage account in the conditional access exclusion list.

> [!IMPORTANT]
- > If you don't exclude MFA policies from the storage account app, you won't be able to access the file share. Trying to map the file share using *net use* will result in an error message that says "System error 1327: Account restrictions are preventing this user from signing in. For example: blank passwords aren't allowed, sign-in times are limited, or a policy restriction has been enforced."
+ > If you don't exclude MFA policies from the storage account app, you won't be able to access the file share. Trying to map the file share using `net use` will result in an error message that says "System error 1327: Account restrictions are preventing this user from signing in. For example: blank passwords aren't allowed, sign-in times are limited, or a policy restriction has been enforced."
## Assign share-level permissions
Once your share-level permissions are in place, there are two options for config
- **Windows Explorer experience:** If you choose this option, then the client must be domain-joined to the on-premises AD.
- **icacls utility:** If you choose this option, then the client needs line-of-sight to the on-premises AD.
-To configure directory and file level permissions through Windows File explorer, you also need to specify domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure using icacls, this step is not required.
+To configure directory and file-level permissions through Windows File Explorer, you also need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure using icacls, this step is not required.
To configure directory and file-level permissions, follow the instructions in [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
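If you choose the icacls route, a single grant per identity is often enough. The following is a hedged sketch; the drive letter and identity are placeholders, and it assumes the share is already mounted from a client with line-of-sight to the on-premises AD:

```powershell
# Grant Modify on the mounted share root; (OI)(CI) makes the ACE inherit to
# new files (object inherit) and subfolders (container inherit).
$mountedShare = "Z:\"
icacls $mountedShare /grant "CONTOSO\user1:(OI)(CI)M"
```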
Changes are not instant, and require a policy refresh or a reboot to take effect
## Disable Azure AD authentication on your storage account
-If you want to use another authentication method, you can disable Azure AD authentication on your storage account by using the Azure portal.
+If you want to use another authentication method, you can disable Azure AD authentication on your storage account by using the Azure portal, Azure PowerShell, or Azure CLI.
> [!NOTE]
> Disabling this feature means that there will be no Active Directory configuration for file shares in your storage account until you enable one of the other Active Directory sources to reinstate your Active Directory configuration.
-1. Sign in to the Azure portal and select the storage account you want to enable Azure AD Kerberos authentication for.
+# [Portal](#tab/azure-portal)
+
+To disable Azure AD Kerberos authentication on your storage account by using the Azure portal, follow these steps.
+
+1. Sign in to the Azure portal and select the storage account you want to disable Azure AD Kerberos authentication for.
1. Under **Data storage**, select **File shares**.
-1. Next to **Active Directory**, select the configuration status (for example, **Not configured**).
-1. Under **Azure AD Kerberos (preview)**, select **Set up**.
+1. Next to **Active Directory**, select the configuration status.
+1. Under **Azure AD Kerberos**, select **Configure**.
1. Uncheck the **Azure AD Kerberos** checkbox.
1. Select **Save**.
+# [Azure PowerShell](#tab/azure-powershell)
+
+To disable Azure AD Kerberos authentication on your storage account by using Azure PowerShell, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName <resourceGroupName> -StorageAccountName <storageAccountName> -EnableAzureActiveDirectoryKerberosForFile $false
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To disable Azure AD Kerberos authentication on your storage account by using Azure CLI, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurecli
+az storage account update --name <storageaccountname> --resource-group <resourcegroupname> --enable-files-aadkerb false
+```
+++

## Next steps

For more information, see these resources:

- [Potential errors when enabling Azure AD Kerberos authentication for hybrid users](storage-troubleshoot-windows-file-connection-problems.md#potential-errors-when-enabling-azure-ad-kerberos-authentication-for-hybrid-users)
- [Overview of Azure Files identity-based authentication support for SMB access](storage-files-active-directory-overview.md)
-- [Enable AD DS authentication to Azure file shares](storage-files-identity-ad-ds-enable.md)
- [Create a profile container with Azure Files and Azure Active Directory](../../virtual-desktop/create-profile-container-azure-ad.md)
- [FAQ](storage-files-faq.md)
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
You can disable encryption in transit for an Azure storage account. When encrypt
We strongly recommend ensuring encryption of data in transit is enabled.
-For more information about encryption in transit, see [requiring secure transfer in Azure storage](../common/storage-require-secure-transfer.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+For more information about encryption in transit, see [requiring secure transfer in Azure storage](../common/storage-require-secure-transfer.md?toc=/azure/storage/files/toc.json).
### Encryption at rest [!INCLUDE [storage-files-encryption-at-rest](../../../includes/storage-files-encryption-at-rest.md)]
We recommend turning on soft delete for most SMB file shares. If you have a work
For more information about soft delete, see [Prevent accidental data deletion](./storage-files-prevent-file-share-deletion.md). ### Backup
-You can back up your Azure file share via [share snapshots](./storage-snapshots-files.md), which are read-only, point-in-time copies of your share. Snapshots are incremental, meaning they only contain as much data as has changed since the previous snapshot. You can have up to 200 snapshots per file share and retain them for up to 10 years. You can either manually take these snapshots in the Azure portal, via PowerShell, or command-line interface (CLI), or you can use [Azure Backup](../../backup/azure-file-share-backup-overview.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json). Snapshots are stored within your file share, meaning that if you delete your file share, your snapshots will also be deleted. To protect your snapshot backups from accidental deletion, ensure soft delete is enabled for your share.
+You can back up your Azure file share via [share snapshots](./storage-snapshots-files.md), which are read-only, point-in-time copies of your share. Snapshots are incremental, meaning they only contain as much data as has changed since the previous snapshot. You can have up to 200 snapshots per file share and retain them for up to 10 years. You can either manually take these snapshots in the Azure portal, via PowerShell, or command-line interface (CLI), or you can use [Azure Backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json). Snapshots are stored within your file share, meaning that if you delete your file share, your snapshots will also be deleted. To protect your snapshot backups from accidental deletion, ensure soft delete is enabled for your share.
-[Azure Backup for Azure file shares](../../backup/azure-file-share-backup-overview.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json) handles the scheduling and retention of snapshots. Its grandfather-father-son (GFS) capabilities mean that you can take daily, weekly, monthly, and yearly snapshots, each with their own distinct retention period. Azure Backup also orchestrates the enablement of soft delete and takes a delete lock on a storage account as soon as any file share within it is configured for backup. Lastly, Azure Backup provides certain key monitoring and alerting capabilities that allow customers to have a consolidated view of their backup estate.
+[Azure Backup for Azure file shares](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json) handles the scheduling and retention of snapshots. Its grandfather-father-son (GFS) capabilities mean that you can take daily, weekly, monthly, and yearly snapshots, each with their own distinct retention period. Azure Backup also orchestrates the enablement of soft delete and takes a delete lock on a storage account as soon as any file share within it is configured for backup. Lastly, Azure Backup provides certain key monitoring and alerting capabilities that allow customers to have a consolidated view of their backup estate.
You can perform both item-level and share-level restores in the Azure portal using Azure Backup. All you need to do is choose the restore point (a particular snapshot), the particular file or directory if relevant, and then the location (original or alternate) you wish you restore to. The backup service handles copying the snapshot data over and shows your restore progress in the portal.
-For more information about backup, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+For more information about backup, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json).
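To take one of these manual snapshots with PowerShell, one approach is to go through the Az.Storage share object. A sketch assuming key-based access; the account, key, and share names are placeholders, and the `CloudFileShare.Snapshot()` call reflects the SDK surface at the time of writing:

```azurepowershell
# Create a read-only, point-in-time snapshot of an Azure file share.
$ctx = New-AzStorageContext -StorageAccountName "<account>" -StorageAccountKey "<key>"
$share = Get-AzStorageShare -Context $ctx -Name "<share>"
$snapshot = $share.CloudFileShare.Snapshot()
$snapshot.SnapshotTime   # timestamp that identifies the new snapshot
```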
### Protect Azure Files with Microsoft Defender for Storage

Microsoft Defender for Storage provides an additional layer of security intelligence that generates alerts when it detects anomalous activity on your storage account, for example, unusual access attempts. It also runs malware hash reputation analysis and will alert on known malware. You can configure Microsoft Defender for Storage at the subscription or storage account level via Microsoft Defender for Cloud.
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
To create an Azure file share, you need to answer three questions about how you
- **What are your redundancy requirements for your Azure file share?** Standard file shares offer locally redundant (LRS), zone-redundant (ZRS), geo-redundant (GRS), or geo-zone-redundant (GZRS) storage; however, the large file share feature is only supported on locally redundant and zone-redundant file shares. Premium file shares do not support any form of geo-redundancy.
- Premium file shares are available with locally redundancy and zone redundancy in a subset of regions. To find out if premium file shares are currently available in your region, see the [products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage) page for Azure. For information about regions that support ZRS, see [Azure Storage redundancy](../common/storage-redundancy.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+ Premium file shares are available with local redundancy and zone redundancy in a subset of regions. To find out if premium file shares are currently available in your region, see the [products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage) page for Azure. For information about regions that support ZRS, see [Azure Storage redundancy](../common/storage-redundancy.md?toc=/azure/storage/files/toc.json).
- **What size file share do you need?** In local and zone redundant storage accounts, Azure file shares can span up to 100 TiB. However, in geo- and geo-zone redundant storage accounts, Azure file shares can span only up to 5 TiB.
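Once you've settled on a size, you can set it explicitly as a quota when creating the share. A minimal sketch using `New-AzRmStorageShare`; the names are placeholders, and 102400 GiB corresponds to the 100 TiB maximum mentioned above:

```azurepowershell
# Create an Azure file share with a 100 TiB (102,400 GiB) quota.
New-AzRmStorageShare `
    -ResourceGroupName "<resourceGroupName>" `
    -StorageAccountName "<storageAccountName>" `
    -Name "<fileShareName>" `
    -QuotaGiB 102400
```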
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
The recommended way to mount an Azure file share on Linux is using SMB 3.1.1. By default, Azure Files requires encryption in transit, which is supported by SMB 3.0+. Azure Files also supports SMB 2.1, which doesn't support encryption in transit, but you can't mount Azure file shares with SMB 2.1 from another Azure region or on-premises for security reasons. Unless your application specifically requires SMB 2.1, use SMB 3.1.1.
-| Distribution | SMB 3.1.1 | SMB 3.0 |
+| Distribution | SMB 3.1.1 (Recommended) | SMB 3.0 |
|-|--|--|
| Linux kernel version | <ul><li>Basic 3.1.1 support: 4.17</li><li>Default mount: 5.0</li><li>AES-128-GCM encryption: 5.3</li><li>AES-256-GCM encryption: 5.10</li></ul> | <ul><li>Basic 3.0 support: 3.12</li><li>AES-128-CCM encryption: 4.11</li></ul> |
| [Ubuntu](https://wiki.ubuntu.com/Releases) | AES-128-GCM encryption: 18.04.5 LTS+ | AES-128-CCM encryption: 16.04.4 LTS+ |
storage Storage Troubleshoot Linux File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-linux-file-connection-problems.md
To close open handles for a file share, directory or file, use the [Close-AzStor
- If you don't have a specific minimum I/O size requirement, we recommend that you use 1 MiB as the I/O size for optimal performance.
- Use the right copy method:
- - Use [AzCopy](../common/storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json) for any transfer between two file shares.
+ - Use [AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json) for any transfer between two file shares.
- Using cp or dd with parallel could improve copy speed; the number of threads depends on your use case and workload. The following examples use six:
  - cp example (cp will use the default block size of the file system as the chunk size): `find * -type f | parallel --will-cite -j 6 cp {} /mntpremium/ &`.
  - dd example (this command explicitly sets the chunk size to 1 MiB): `find * -type f | parallel --will-cite -j 6 dd if={} of=/mnt/share/{} bs=1M`
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
Error 1816 happens when you reach the upper limit of concurrent open handles tha
### Solution
-Reduce the number of concurrent open handles by closing some handles, and then retry. For more information, see [Microsoft Azure Storage performance and scalability checklist](../blobs/storage-performance-checklist.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+Reduce the number of concurrent open handles by closing some handles, and then retry. For more information, see [Microsoft Azure Storage performance and scalability checklist](../blobs/storage-performance-checklist.md?toc=/azure/storage/files/toc.json).
To view open handles for a file share, directory or file, use the [Get-AzStorageFileHandle](/powershell/module/az.storage/get-azstoragefilehandle) PowerShell cmdlet.
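Put together, listing and then closing handles looks roughly like the following sketch; the account, key, share, and directory names are placeholders, and both cmdlets ship in the Az.Storage module:

```azurepowershell
$ctx = New-AzStorageContext -StorageAccountName "<account>" -StorageAccountKey "<key>"
# Inspect open handles across the entire share.
Get-AzStorageFileHandle -Context $ctx -ShareName "<share>" -Recursive
# Close all open handles on a directory and everything beneath it.
Close-AzStorageFileHandle -Context $ctx -ShareName "<share>" -Path "<directory>" -CloseAll -Recursive
```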
You might see slow performance when you try to transfer files to the Azure File
- If you don't have a specific minimum I/O size requirement, we recommend that you use 1 MiB as the I/O size for optimal performance.
- If you know the final size of a file that you are extending with writes, and your software doesn't have compatibility problems when the unwritten tail on the file contains zeros, then set the file size in advance instead of making every write an extending write.
- Use the right copy method:
- - Use [AzCopy](../common/storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json) for any transfer between two file shares.
+ - Use [AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json) for any transfer between two file shares.
- Use [Robocopy](./storage-how-to-create-file-share.md) between file shares on an on-premises computer.

### Considerations for Windows 8.1 or Windows Server 2012 R2
If this is the case, ask your Azure AD admin to grant admin consent to the new A
When enabling Azure AD Kerberos authentication, you might encounter this error if the following conditions are met:
-1. You're using the beta/preview feature of [application management policies](/graph/api/resources/applicationauthenticationmethodpolicy?view=graph-rest-beta).
-2. You (or your administrator) have set a [tenant-wide policy](/graph/api/resources/tenantappmanagementpolicy?view=graph-rest-beta) that:
+1. You're using the beta/preview feature of [application management policies](/graph/api/resources/applicationauthenticationmethodpolicy).
+2. You (or your administrator) have set a [tenant-wide policy](/graph/api/resources/tenantappmanagementpolicy) that:
- Has no start date, or has a start date before 2019-01-01
- Sets a restriction on service principal passwords, which either disallows custom passwords or sets a maximum password lifetime of less than 365.5 days
storage Queues Storage Monitoring Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-storage-monitoring-scenarios.md
StorageQueueLogs
| project TimeGenerated, AuthenticationType, RequesterObjectId, OperationName, Uri
```
-Shared Key and SAS authentication provide no means of auditing individual identities. Therefore, if you want to improve your ability to audit based on identity, we recommended that you transition to Azure AD, and prevent shared key and SAS authentication. To learn how to prevent Shared Key and SAS authentication, see [Prevent Shared Key authorization for an Azure Storage account](../common/shared-key-authorization-prevent.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json&tabs=portal). To get started with Azure AD, see [Authorize access to blobs using Azure Active Directory](authorize-access-azure-active-directory.md)
+Shared Key and SAS authentication provide no means of auditing individual identities. Therefore, if you want to improve your ability to audit based on identity, we recommend that you transition to Azure AD and prevent Shared Key and SAS authentication. To learn how to prevent Shared Key and SAS authentication, see [Prevent Shared Key authorization for an Azure Storage account](../common/shared-key-authorization-prevent.md?toc=/azure/storage/queues/toc.json&tabs=portal). To get started with Azure AD, see [Authorize access to queues using Azure Active Directory](authorize-access-azure-active-directory.md).
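Once your clients have moved to Azure AD, disallowing Shared Key at the account level is a one-line change. A sketch with placeholder names; `-AllowSharedKeyAccess` is the relevant `Set-AzStorageAccount` parameter:

```azurepowershell
# Disallow Shared Key (and therefore account-key SAS) authorization on the account;
# subsequent requests must be authorized with Azure AD.
Set-AzStorageAccount -ResourceGroupName "<resourceGroupName>" -Name "<storageAccountName>" -AllowSharedKeyAccess $false
```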
## Optimize cost for infrequent queries
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/security-recommendations.md
Microsoft Defender for Cloud periodically analyzes the security state of your Az
| Recommendation | Comments | Defender for Cloud |
|-|-|--|
-| Configure the minimum required version of Transport Layer Security (TLS) for a storage account. | Require that clients use a more secure version of TLS to make requests against an Azure Storage account by configuring the minimum version of TLS for that account. For more information, see [Configure minimum required version of Transport Layer Security (TLS) for a storage account](../common/transport-layer-security-configure-minimum-version.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)| - |
-| Enable the **Secure transfer required** option on all of your storage accounts | When you enable the **Secure transfer required** option, all requests made against the storage account must take place over secure connections. Any requests made over HTTP will fail. For more information, see [Require secure transfer in Azure Storage](../common/storage-require-secure-transfer.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json). | [Yes](../../defender-for-cloud/implement-security-recommendations.md) |
-| Enable firewall rules | Configure firewall rules to limit access to your storage account to requests that originate from specified IP addresses or ranges, or from a list of subnets in an Azure virtual network (VNet). For more information about configuring firewall rules, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json). | - |
-| Allow trusted Microsoft services to access the storage account | Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure VNet or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. You can permit requests from other Azure services by adding an exception to allow trusted Microsoft services to access the storage account. For more information about adding an exception for trusted Microsoft services, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json).| - |
+| Configure the minimum required version of Transport Layer Security (TLS) for a storage account. | Require that clients use a more secure version of TLS to make requests against an Azure Storage account by configuring the minimum version of TLS for that account. For more information, see [Configure minimum required version of Transport Layer Security (TLS) for a storage account](../common/transport-layer-security-configure-minimum-version.md?toc=/azure/storage/queues/toc.json)| - |
+| Enable the **Secure transfer required** option on all of your storage accounts | When you enable the **Secure transfer required** option, all requests made against the storage account must take place over secure connections. Any requests made over HTTP will fail. For more information, see [Require secure transfer in Azure Storage](../common/storage-require-secure-transfer.md?toc=/azure/storage/queues/toc.json). | [Yes](../../defender-for-cloud/implement-security-recommendations.md) |
+| Enable firewall rules | Configure firewall rules to limit access to your storage account to requests that originate from specified IP addresses or ranges, or from a list of subnets in an Azure virtual network (VNet). For more information about configuring firewall rules, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md?toc=/azure/storage/queues/toc.json). | - |
+| Allow trusted Microsoft services to access the storage account | Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure VNet or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. You can permit requests from other Azure services by adding an exception to allow trusted Microsoft services to access the storage account. For more information about adding an exception for trusted Microsoft services, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md?toc=/azure/storage/queues/toc.json).| - |
| Use private endpoints | A private endpoint assigns a private IP address from your Azure VNet to the storage account. It secures all traffic between your VNet and the storage account over Private Link. For more information about private endpoints, see [Connect privately to a storage account using an Azure private endpoint](../../private-link/tutorial-private-endpoint-storage-portal.md). | - |
| Use VNet service tags | A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change. For more information about service tags supported by Azure Storage, see [Azure service tags overview](../../virtual-network/service-tags-overview.md). For a tutorial that shows how to use service tags to create outbound network rules, see [Restrict access to PaaS resources](../../virtual-network/tutorial-restrict-network-access-to-resources.md). | - |
| Limit network access to specific networks | Limiting network access to networks hosting clients requiring access reduces the exposure of your resources to network attacks. | [Yes](../../defender-for-cloud/implement-security-recommendations.md) |
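The first two recommendations in the table can be applied together from PowerShell. A sketch with placeholder names:

```azurepowershell
# Require TLS 1.2 or later and reject any requests made over plain HTTP.
Set-AzStorageAccount `
    -ResourceGroupName "<resourceGroupName>" `
    -Name "<storageAccountName>" `
    -MinimumTlsVersion TLS1_2 `
    -EnableHttpsTrafficOnly $true
```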
storage Storage Dotnet How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-dotnet-how-to-use-queues.md
This tutorial shows how to write .NET code for some common scenarios using Azure
### Prerequisites

- [Microsoft Visual Studio](https://www.visualstudio.com/downloads/)
-- An [Azure Storage account](../common/storage-account-create.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)
+- An [Azure Storage account](../common/storage-account-create.md?toc=/azure/storage/queues/toc.json)
[!INCLUDE [storage-queue-concepts-include](../../../includes/storage-queue-concepts-include.md)]
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-performance-checklist.md
Microsoft has developed a number of proven practices for developing high-performance applications with Queue Storage. This checklist identifies key practices that developers can follow to optimize performance. Keep these practices in mind while you are designing your application and throughout the process.
-Azure Storage has scalability and performance targets for capacity, transaction rate, and bandwidth. For more information about Azure Storage scalability targets, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json) and [Scalability and performance targets for Queue Storage](scalability-targets.md).
+Azure Storage has scalability and performance targets for capacity, transaction rate, and bandwidth. For more information about Azure Storage scalability targets, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/queues/toc.json) and [Scalability and performance targets for Queue Storage](scalability-targets.md).
## Checklist
Use queues to make your application architecture scalable. The following lists summarize these key practices.
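To illustrate one item from the checklist, here's a minimal sketch using the `azure-storage-queue` Python SDK; the connection string, queue name, and `handle` function are placeholders. Retrieving messages in batches reduces the number of billable transactions:

```python
from azure.storage.queue import QueueClient

conn_str = "<storage-connection-string>"  # placeholder
queue = QueueClient.from_connection_string(conn_str, queue_name="tasks")

# Each receive_messages page is one service call, so fetching up to 32
# messages at a time cuts transactions roughly 32x versus one-by-one reads.
for message in queue.receive_messages(messages_per_page=32, visibility_timeout=300):
    handle(message.content)        # placeholder processing function
    queue.delete_message(message)  # delete only after successful processing
```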
## Next steps - [Scalability and performance targets for Queue Storage](scalability-targets.md)-- [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)
+- [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/queues/toc.json)
- [Status and error codes](/rest/api/storageservices/Status-and-Error-Codes2)
storage Storage Powershell How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-powershell-how-to-use-queues.md
In this how-to article, you learned about basic Queue Storage management with PowerShell.
### Microsoft Azure Storage Explorer -- [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
+- [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?toc=/azure/storage/queues/toc.json) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
storage Storage Queues Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-queues-introduction.md
Queue Storage contains the following components:
`https://myaccount.queue.core.windows.net/images-to-download` -- **Storage account:** All access to Azure Storage is done through a storage account. For information about storage account capacity, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json).
+- **Storage account:** All access to Azure Storage is done through a storage account. For information about storage account capacity, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/queues/toc.json).
- **Queue:** A queue contains a set of messages. The queue name **must** be all lowercase. For information on naming queues, see [Naming queues and metadata](/rest/api/storageservices/naming-queues-and-metadata).
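To make the addressing concrete, here's a minimal sketch using the `azure-storage-queue` Python SDK; the connection string is a placeholder, and the queue name matches the URL example above:

```python
from azure.storage.queue import QueueClient

# Placeholder connection string for the account "myaccount".
conn_str = "<storage-connection-string>"

# Targets https://myaccount.queue.core.windows.net/images-to-download
queue = QueueClient.from_connection_string(conn_str, queue_name="images-to-download")
queue.create_queue()                 # raises ResourceExistsError if the queue already exists
queue.send_message("image-001.png")  # enqueue a message (up to 64 KB)
```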
Queue Storage contains the following components:
## Next steps -- [Create a storage account](../common/storage-account-create.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)
+- [Create a storage account](../common/storage-account-create.md?toc=/azure/storage/queues/toc.json)
- [Get started with Queue Storage using .NET](storage-dotnet-how-to-use-queues.md) - [Get started with Queue Storage using Java](storage-java-how-to-use-queue-storage.md) - [Get started with Queue Storage using Python](storage-python-how-to-use-queue-storage.md)
storage Storage Quickstart Queues Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-java.md
Additional resources:
- [API reference documentation](/java/api/overview/azure/storage-queue-readme) - [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-queue) - [Package (Maven)](https://mvnrepository.com/artifact/com.azure/azure-storage-queue)-- [Samples](../common/storage-samples-java.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json#queue-samples)
+- [Samples](../common/storage-samples-java.md?toc=/azure/storage/queues/toc.json#queue-samples)
## Prerequisites
storage Storage Quickstart Queues Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-nodejs.md
Additional resources:
- [API reference documentation](/javascript/api/@azure/storage-queue/) - [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-queue) - [Package (npm)](https://www.npmjs.com/package/@azure/storage-queue)-- [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json#queue-samples)
+- [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/queues/toc.json#queue-samples)
## Prerequisites
storage Storage Quickstart Queues Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-python.md
Additional resources:
- [API reference documentation](/python/api/azure-storage-queue/index) - [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-queue) - [Package (Python Package Index)](https://pypi.org/project/azure-storage-queue/)-- [Samples](../common/storage-samples-python.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json#queue-samples)
+- [Samples](../common/storage-samples-python.md?toc=/azure/storage/queues/toc.json#queue-samples)
## Prerequisites
storage Storage Tutorial Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-tutorial-queues.md
In this tutorial, you learn how to:
## Create an Azure Storage account
-First, create an Azure Storage account. For a step-by-step guide to creating a storage account, see [Create a storage account](../common/storage-account-create.md?toc=%2Fazure%2Fstorage%2Fqueues%2Ftoc.json). This is a separate step you perform after creating a free Azure account in the prerequisites.
+First, create an Azure Storage account. For a step-by-step guide to creating a storage account, see [Create a storage account](../common/storage-account-create.md?toc=/azure/storage/queues/toc.json). This is a separate step you perform after creating a free Azure account in the prerequisites.
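If you'd rather script this step, a minimal sketch with the `azure-mgmt-storage` Python SDK might look like the following; the subscription ID, resource group, and account name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# begin_create returns a poller; result() blocks until provisioning completes.
poller = client.storage_accounts.begin_create(
    "my-resource-group",
    "mystoragetutorial",  # 3-24 lowercase letters and digits, globally unique
    {
        "location": "eastus",
        "kind": "StorageV2",
        "sku": {"name": "Standard_LRS"},
    },
)
account = poller.result()
print(account.primary_endpoints.queue)  # the account's Queue Storage endpoint
```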
## Create the app
storage Azure File Migration Program Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/azure-file-migration-program-solutions.md
The following comparison matrix shows basic functionality and a comparison of migration tools.
- [Azure File Migration Program](https://www.microsoft.com/en-us/us-partner-blog/2022/02/23/new-azure-file-migration-program-streamlines-unstructured-data-migration/) - [Storage migration overview](../../../common/storage-migration-overview.md)-- [Choose an Azure solution for data transfer](../../../common/storage-choose-data-transfer-solution.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Choose an Azure solution for data transfer](../../../common/storage-choose-data-transfer-solution.md?toc=/azure/storage/blobs/toc.json)
- [Migrate to Azure file shares](../../../files/storage-files-migration-overview.md) - [Migrate to Data Lake Storage with WANdisco LiveData Platform for Azure](../../../blobs/migrate-gen2-wandisco-live-data-platform.md) - [Copy or move data to Azure Storage with AzCopy](../../../common/storage-use-azcopy-v10.md)
storage Migration Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md
The following comparison matrix shows basic functionality of different tools that you can use for migration.
## See also - [Storage migration overview](../../../common/storage-migration-overview.md)-- [Choose an Azure solution for data transfer](../../../common/storage-choose-data-transfer-solution.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Choose an Azure solution for data transfer](../../../common/storage-choose-data-transfer-solution.md?toc=/azure/storage/blobs/toc.json)
- [Migrate to Azure file shares](../../../files/storage-files-migration-overview.md) - [Migrate to Data Lake Storage with WANdisco LiveData Platform for Azure](../../../blobs/migrate-gen2-wandisco-live-data-platform.md) - [Copy or move data to Azure Storage with AzCopy](../../../common/storage-use-azcopy-v10.md)
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/storage-performance-checklist.md
Microsoft has developed a number of proven practices for developing high-performance applications with Table storage. This checklist identifies key practices that developers can follow to optimize performance. Keep these practices in mind while you are designing your application and throughout the process.
-Azure Storage has scalability and performance targets for capacity, transaction rate, and bandwidth. For more information about Azure Storage scalability targets, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json) and [Scalability and performance targets for Table storage](scalability-targets.md).
+Azure Storage has scalability and performance targets for capacity, transaction rate, and bandwidth. For more information about Azure Storage scalability targets, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/tables/toc.json) and [Scalability and performance targets for Table storage](scalability-targets.md).
## Checklist
If you are performing batch inserts and then retrieving ranges of entities together, design your partition keys so that those entities share a partition, as in the sketch below.
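A minimal sketch of such a batch insert with the `azure-data-tables` Python SDK (the connection string, table name, and entity values are placeholders):

```python
from azure.data.tables import TableClient

table = TableClient.from_connection_string(
    "<storage-connection-string>", table_name="tasks"
)

# All operations in one transaction must target the same PartitionKey,
# and a single transaction accepts at most 100 operations.
operations = [
    ("upsert", {"PartitionKey": "batch-2022-11", "RowKey": f"item-{i:03d}", "value": i})
    for i in range(100)
]
table.submit_transaction(operations)
```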
## Next steps - [Scalability and performance targets for Table storage](scalability-targets.md)-- [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json)
+- [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/tables/toc.json)
- [Status and error codes](/rest/api/storageservices/Status-and-Error-Codes2)
synapse-analytics Query Delta Lake Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-delta-lake-format.md
If you don't have this subfolder, you are not using Delta Lake format. You can convert it to Delta Lake format by running the following Apache Spark code:
```python
%%pyspark
-from delta.tables import *
+from delta.tables import DeltaTable
deltaTable = DeltaTable.convertToDelta(spark, "parquet.`abfss://delta-lake@sqlondemandstorage.dfs.core.windows.net/covid`")
```
If you don't have this subfolder, you are not using Delta Lake format. You can convert it to Delta Lake format by running the following Apache Spark code:
```python
%%pyspark
-from delta.tables import *
+from delta.tables import DeltaTable
deltaTable = DeltaTable.convertToDelta(spark, "parquet.`abfss://delta-lake@sqlondemandstorage.dfs.core.windows.net/yellow`", "year INT, month INT")
```
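As a quick sanity check, you can read the converted folder back through the Delta reader; this sketch reuses the path from the cell above:

```python
%%pyspark
# Read the converted table back to confirm the conversion succeeded.
df = spark.read.format("delta").load("abfss://delta-lake@sqlondemandstorage.dfs.core.windows.net/yellow")
df.printSchema()
print(df.count())
```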
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
Title: Create a profile container with Azure Files and Azure Active Directory (preview)
-description: Set up an FSLogix profile container on an Azure file share in an existing Azure Virtual Desktop host pool with your Azure Active Directory domain (preview).
+ Title: Create a profile container with Azure Files and Azure Active Directory
+description: Set up an FSLogix profile container on an Azure file share in an existing Azure Virtual Desktop host pool with your Azure Active Directory domain.
Previously updated : 08/29/2022 Last updated : 11/07/2022
-# Create a profile container with Azure Files and Azure Active Directory (preview)
-
-> [!IMPORTANT]
-> Storing FSLogix profiles on Azure Files for Azure Active Directory-joined VMs is currently in public preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Create a profile container with Azure Files and Azure Active Directory
In this article, you'll learn how to create an Azure Files share to store FSLogix profiles that can be accessed by hybrid user identities authenticated with Azure Active Directory (Azure AD). Azure AD users can now access an Azure file share using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. Your end users can access Azure file shares over the internet without requiring line of sight to domain controllers from Hybrid Azure AD-joined and Azure AD-joined VMs.
-This feature is currently supported in the Azure Public, Azure Government, and Azure China clouds.
+This feature is currently supported in the Azure Public cloud.
## Configure your Azure storage account and file share
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
There are different automation and deployment options available depending on whi
|Windows Server 2016|Yes|Yes|No|No| |Windows Server 2012 R2|Yes|Yes|No|No|
+> [!TIP]
+> To simplify user access rights during initial development and testing, Azure Virtual Desktop supports [Azure Dev/Test pricing](https://azure.microsoft.com/pricing/dev-test/). If you deploy Azure Virtual Desktop in an Azure Dev/Test subscription, end users may connect to that deployment without separate license entitlement in order to perform acceptance tests or provide feedback.
+ ## Network There are several network requirements you'll need to meet to successfully deploy Azure Virtual Desktop. This lets users connect to their virtual desktops and remote apps while also giving them the best possible user experience.
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 09/10/2022 Last updated : 11/07/2022
The Azure Virtual Desktop Agent updates regularly. This article is where you'll
Make sure to check back here often to keep up with new updates.
+## Version 1.0.5555.1008
+
+This update was released in November 2022 and includes the following changes:
+
+- Increased sensitivity of AppAttachRegister monitor for improved results.
+- Fixed an error that slowed down Geneva Agent installation.
+- Version updates for Include Stack.
+- General improvements and bug fixes.
+ ## Version 1.0.5388.1701 This update was released in August 2022 and includes the following changes:
virtual-machines Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md
This section describes necessary steps required for cluster to operate seamlessl
Follow the steps in, [Setting up Pacemaker on SUSE Enterprise Linux](./high-availability-guide-suse-pacemaker.md) in Azure to create a basic Pacemaker cluster for this HANA server.
-### Implement the Python system replication hook SAPHanaSR
+## Implement HANA hooks SAPHanaSR and susChkSrv
-This is an important step to optimize the integration with the cluster and improve the detection, when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook. Follow the steps mentioned in, [Implement the Python System Replication hook SAPHanaSR](./sap-hana-high-availability.md#implement-the-python-system-replication-hook-saphanasr)
+This is an important step to optimize the integration with the cluster and improve the detection of when a cluster failover is needed. It is highly recommended to configure both the SAPHanaSR and susChkSrv Python hooks. Follow the steps in [Implement the Python System Replication hooks SAPHanaSR and susChkSrv](./sap-hana-high-availability.md#implement-hana-hooks-saphanasr-and-suschksrv)
## Configure SAP HANA cluster resources
virtual-machines Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability.md
vm-linux Previously updated : 10/17/2022 Last updated : 11/04/2022
The steps in this section use the following prefixes:
hdbnsutil -sr_register --remoteHost=<b>hn1-db-0</b> --remoteInstance=<b>03</b> --replicationMode=sync --name=<b>SITE2</b> </code></pre>
-## Implement the Python system replication hook SAPHanaSR
+## Implement HANA hooks SAPHanaSR and susChkSrv
-This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.
+This is an important step to optimize the integration with the cluster and improve the detection of when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook. For HANA 2.0 SP5 and above, implementing the susChkSrv hook along with SAPHanaSR is recommended.
-1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes.
+susChkSrv extends the functionality of the main SAPHanaSR HA provider. It acts when the HANA process hdbindexserver crashes. If a single process crashes, HANA typically tries to restart it. Restarting the indexserver process can take a long time, during which the HANA database is not responsive.
- > [!TIP]
- > Verify that package SAPHanaSR is at least version 0.153 to be able to use the SAPHanaSR Python hook functionality.
- > The Python hook can only be implemented for HANA 2.0.
+With susChkSrv implemented, an immediate and configurable action is executed instead of waiting for the hdbindexserver process to restart on the same node. The configured action can trigger a failover within the defined timeout period.
- 1. Prepare the hook as `root`.
+1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes.
- ```bash
- mkdir -p /hana/shared/myHooks
- cp /usr/share/SAPHanaSR/SAPHanaSR.py /hana/shared/myHooks
- chown -R hn1adm:sapsys /hana/shared/myHooks
- ```
+ > [!TIP]
+ > The SAPHanaSR Python hook can only be implemented for HANA 2.0. The SAPHanaSR package must be at least version 0.153.
+ > The susChkSrv Python hook requires SAP HANA 2.0 SP5, and SAPHanaSR version 0.161.1_BF or higher must be installed.
- 2. Stop HANA on both nodes. Execute as <sid\>adm:
+ 1. Stop HANA on both nodes. Execute as <sid\>adm:
```bash
sapcontrol -nr 03 -function StopSystem
```
- 3. Adjust `global.ini` on each cluster node.
+ 2. Adjust `global.ini` on each cluster node. If the requirements for the susChkSrv hook are not met, remove the entire `[ha_dr_provider_suschksrv]` block from the parameters below.
+ You can adjust the behavior of susChkSrv with the `action_on_lost` parameter.
+ Valid values are [ ignore | stop | kill | fence ].
```bash
# add to global.ini
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
- path = /hana/shared/myHooks
+ path = /usr/share/SAPHanaSR
execution_order = 1
+ [ha_dr_provider_suschksrv]
+ provider = susChkSrv
+ path = /usr/share/SAPHanaSR
+ execution_order = 3
+ action_on_lost = fence
+ [trace]
+ ha_dr_saphanasr = info
- ```
+ ```
-2. **[A]** The cluster requires sudoers configuration on each cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root`.
- ```bash
+Pointing the configuration to the standard location /usr/share/SAPHanaSR has the benefit that the Python hook code is automatically updated through OS or package updates, and HANA uses the updated hook at its next restart. With an optional custom path, such as /hana/shared/myHooks, you can decouple OS updates from the hook version you use.
+
+2. **[A]** The cluster requires sudoers configuration on each cluster node for <sid\>adm. In this example, that is achieved by creating a new file. Execute the command as `root` and replace the bold hn1/HN1 values with the correct SID.
+ <pre><code>
cat << EOF > /etc/sudoers.d/20-saphana
- # Needed for SAPHanaSR Python hook
- hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_site_srHook_*
+ # Needed for SAPHanaSR and susChkSrv Python hooks
+ <b>hn1</b>adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_<b>hn1</b>_site_srHook_*
+ <b>hn1</b>adm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=<b>HN1</b> --case=fenceMe
EOF
- ```
-For more details on the implementation of the SAP HANA system replication hook see [Set up HANA HA/DR providers](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-sr-guide-PerfOpt-12/index.html#_set_up_sap_hana_hadr_providers).
+ </code></pre>
+For more details on the implementation of the SAP HANA system replication hook, see [Set up HANA HA/DR providers](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-sr-guide-PerfOpt-15/index.html#_set_up_sap_hana_hadr_providers).
3. **[A]** Start SAP HANA on both nodes. Execute as <sid\>adm.
For more details on the implementation of the SAP HANA system replication hook s
# 2021-04-08 22:18:15.877583 ha_dr_SAPHanaSR SFAIL
# 2021-04-08 22:18:46.531564 ha_dr_SAPHanaSR SFAIL
# 2021-04-08 22:21:26.816573 ha_dr_SAPHanaSR SOK
+ ```
+
+ Verify the susChkSrv hook installation. Execute as <sid\>adm on all HANA VMs:
+ ```bash
+ cdtrace
+ egrep '(LOST:|STOP:|START:|DOWN:|init|load|fail)' nameserver_suschksrv.trc
+ # Example output
+ # 2022-11-03 18:06:21.116728 susChkSrv.init() version 0.7.7, parameter info: action_on_lost=fence stop_timeout=20 kill_signal=9
+ # 2022-11-03 18:06:27.613588 START: indexserver event looks like graceful tenant start
+ # 2022-11-03 18:07:56.143766 START: indexserver event looks like graceful tenant start (indexserver started)
``` ## Create SAP HANA cluster resources
sudo crm configure primitive rsc_ip_<b>HN1</b>_HDB<b>03</b> ocf:heartbeat:IPaddr2 \
  params ip="<b>10.0.0.13</b>"
sudo crm configure primitive rsc_nc_<b>HN1</b>_HDB<b>03</b> azure-lb port=625<b>03</b> \
+  op monitor timeout=20s interval=10 \
  meta resource-stickiness=0
sudo crm configure group g_ip_<b>HN1</b>_HDB<b>03</b> rsc_ip_<b>HN1</b>_HDB<b>03</b> rsc_nc_<b>HN1</b>_HDB<b>03</b>
crm configure primitive rsc_secip_HN1_HDB03 ocf:heartbeat:IPaddr2 \
  params ip="10.0.0.14"
crm configure primitive rsc_secnc_HN1_HDB03 azure-lb port=62603 \
+  op monitor timeout=20s interval=10 \
  meta resource-stickiness=0
crm configure group g_secip_HN1_HDB03 rsc_secip_HN1_HDB03 rsc_secnc_HN1_HDB03
web-application-firewall Waf Front Door Policy Configure Bot Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md
Previously updated : 10/05/2022 Last updated : 11/07/2022 zone_pivot_groups: web-application-firewall-configuration
zone_pivot_groups: web-application-firewall-configuration
The Azure Web Application Firewall (WAF) for Front Door provides bot rules to identify good bots and protect against bad bots. For more information on the bot protection rule set, see [Bot protection rule set](afds-overview.md#bot-protection-rule-set).
-This article shows how to enable bot protection rules on Azure Front Door Standard and Premium tiers.
+This article shows how to enable bot protection rules on the Azure Front Door Premium tier.
## Prerequisites
web-application-firewall Application Gateway Customize Waf Rules Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-customize-waf-rules-portal.md
description: This article provides information on how to customize Web Applicati
Previously updated : 04/21/2021 Last updated : 11/07/2022 + # Customize Web Application Firewall rules using the Azure portal
The Azure Application Gateway Web Application Firewall (WAF) provides protection
## Disable rule groups and rules > [!IMPORTANT]
-> Use caution when disabling any rule groups or rules. This may expose you to increased security risks.
+> Use caution when disabling any rule groups or rules. This may expose you to increased security risks. For disabled rules, the [anomaly score](ag-overview.md#anomaly-scoring-mode) isn't incremented and no logging occurs.
**To disable rule groups or specific rules**
CRS 3.x specific:
## Next steps
-After you configure your disabled rules, you can learn how to view your WAF logs. For more information, see [Application Gateway diagnostics](../../application-gateway/application-gateway-diagnostics.md#diagnostic-logging).
+After you configure your disabled rules, you can learn how to view your WAF logs. For more information, see [Application Gateway diagnostics](../../application-gateway/application-gateway-diagnostics.md#diagnostic-logging).