Updates from: 08/20/2022 01:11:42
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Json Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/json-transformations.md
The following example generates a JSON string based on the claim value of "email
<InputClaims> <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.to.0.email" /> <InputClaim ClaimTypeReferenceId="otp" TransformationClaimType="personalizations.0.dynamic_template_data.otp" />
- <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.dynamic_template_data.verify-email" />
+ <InputClaim ClaimTypeReferenceId="copiedEmail" TransformationClaimType="personalizations.0.dynamic_template_data.verify-email" />
</InputClaims> <InputParameters> <InputParameter Id="template_id" DataType="string" Value="d-4c56ffb40fa648b1aa6822283df94f60"/>
The following claims transformation outputs a JSON string claim that will be the
- Input claims: - **email**, transformation claim type **personalizations.0.to.0.email**: "someone@example.com"
+ - **copiedEmail**, transformation claim type **personalizations.0.dynamic_template_data.verify-email**: "someone@example.com"
- **otp**, transformation claim type **personalizations.0.dynamic_template_data.otp** "346349" - Input parameter: - **template_id**: "d-4c56ffb40fa648b1aa6822283df94f60"
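For illustration only (not part of the article excerpt above), the JSON string produced from these input claims and the input parameter would look roughly like the following; the numeric segments in the transformation claim types become array indexes and the dotted segments become nested objects:

```json
{
  "personalizations": [
    {
      "to": [
        { "email": "someone@example.com" }
      ],
      "dynamic_template_data": {
        "otp": "346349",
        "verify-email": "someone@example.com"
      }
    }
  ],
  "template_id": "d-4c56ffb40fa648b1aa6822283df94f60"
}
```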
active-directory-b2c Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect.md
Previously updated : 04/12/2022 Last updated : 08/12/2022
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
&response_type=code+id_token &redirect_uri=https%3A%2F%2Fjwt.ms%2F &response_mode=fragment
-&scope=&scope=openid%20offline_access%20{application-id-uri}/{scope-name}
+&scope=openid%20offline_access%20{application-id-uri}/{scope-name}
&state=arbitrary_data_you_can_receive_in_the_response &nonce=12345 ```
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md
You can create these attributes by using the portal UI before or after you use t
|Name |Used in | |||
-|`extension_loyaltyId` | Custom policy|
+|`extension_loyaltyId` | Custom policy|
|`extension_<b2c-extensions-app-guid>_loyaltyId` | [Microsoft Graph API](microsoft-graph-operations.md#application-extension-directory-extension-properties)|
+> [!NOTE]
+> When using a custom attribute in custom policies, you must prefix the claim type ID with `extension_` to allow the correct data mapping to take place within the Azure AD B2C directory.
+ The following example demonstrates the use of custom attributes in an Azure AD B2C custom policy claim definition. ```xml
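<!-- Illustrative sketch only; the article's full example is elided in this digest.
     It shows a ClaimType definition for the custom attribute listed above, using the
     required extension_ prefix. The display name, help text, and input type are assumptions. -->
<ClaimsSchema>
  <ClaimType Id="extension_loyaltyId">
    <DisplayName>Loyalty ID</DisplayName>
    <DataType>string</DataType>
    <UserHelpText>Your loyalty identification number.</UserHelpText>
    <UserInputType>TextBox</UserInputType>
  </ClaimType>
</ClaimsSchema>
```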
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
For example, here is a claims-mapping policy to emit a single claim from a direc
Where *xxxxxxx* is the appID (or Client ID) of the application that the extension was registered with.
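The policy definition referenced above is elided in this digest. As a hedged sketch only (the `Source` and `JwtClaimType` entries and all values here are illustrative assumptions, and `extension_xxxxxxx_test` stands in for a real directory extension attribute name), such a policy body might look like:

```json
{
  "ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "true",
    "ClaimsSchema": [
      {
        "Source": "user",
        "ExtensionID": "extension_xxxxxxx_test",
        "JwtClaimType": "test"
      }
    ]
  }
}
```

Note the `ExtensionID` property in the `ClaimsSchema` entry, as called out in the warning below.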
+> [!WARNING]
+> When you define a claims mapping policy for a directory extension attribute, use the `ExtensionID` property instead of the `ID` property within the body of the `ClaimsSchema` array, as shown in the example above.
+ > [!TIP] > Case consistency is important when setting directory extension attributes on objects. Extension attribute names aren't case sensitive when being set up, but they are case sensitive when being read from the directory by the token service. If an extension attribute is set on a user object with the name "LegacyId" and on another user object with the name "legacyid", when the attribute is mapped to a claim using the name "LegacyId" the data will be successfully retrieved and the claim included in the token for the first user but not the second.
->
-> The "Id" parameter in the claims schema used for built-in directory attributes is "ExtensionID" for directory extension attributes.
## Next steps - Learn how to [add custom or additional claims to the SAML 2.0 and JSON Web Tokens (JWT) tokens](active-directory-optional-claims.md).
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reply-url.md
This table shows the maximum number of redirect URIs you can add to an app regis
| Microsoft work or school accounts in any organization's Azure Active Directory (Azure AD) tenant | 256 | `signInAudience` field in the application manifest is set to either *AzureADMyOrg* or *AzureADMultipleOrgs* | | Personal Microsoft accounts and work and school accounts | 100 | `signInAudience` field in the application manifest is set to *AzureADandPersonalMicrosoftAccount* |
+The maximum number of redirect URIs can't be raised for [security reasons](#restrictions-on-wildcards-in-redirect-uris). If your scenario requires more redirect URIs than the maximum limit allowed, consider using the [state parameter approach](#use-a-state-parameter) as a solution.
+ ## Maximum URI length You can use a maximum of 256 characters for each redirect URI you add to an app registration.
active-directory Auth Header Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-header-based.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-kcd.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
There is a need to provide remote access, protect with pre-authentication, and p
* [Kerberos Constrained Delegation for single sign-on to your apps with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-with-kcd.md)
-* [Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)
+* [Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)
active-directory Auth Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ldap.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oauth2.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oidc.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-password-based-sso.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
You need to protect with pre-authentication and provide SSO through password vau
* [Configure password based SSO for cloud applications ](../manage-apps/configure-password-single-sign-on-non-gallery-applications.md)
-* [Configure password-based SSO for on-premises applications with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md)
+* [Configure password-based SSO for on-premises applications with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md)
active-directory Auth Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-radius.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-remote-desktop-gateway.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
You need to provide remote access and protect your Remote Desktop Services deplo
* [Publish remote desktop with Azure AD Application Proxy](../app-proxy/application-proxy-integrate-with-remote-desktop-services.md)
-* [Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md)
+* [Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md)
active-directory Auth Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-saml.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Sync Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-sync-overview.md
Previously updated : 10/10/2020 Last updated : 8/19/2022
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
Previously updated : 07/15/2021 Last updated : 08/19/2022
Applications provide an attack surface for security breaches and must be monitored. While not targeted as often as user accounts, breaches can occur. Since applications often run without human intervention, the attacks may be harder to detect.
-This article provides guidance to monitor and alert on application events. It's regularly updated to help ensure that you:
+This article provides guidance to monitor and alert on application events and helps enable you to:
* Prevent malicious applications from getting unwarranted access to data.
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
Previously updated : 07/15/2021 Last updated : 08/19/2022
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-infrastructure.md
Previously updated : 07/15/2021 Last updated : 08/19/2022
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-user-accounts.md
Previously updated : 07/15/2021 Last updated : 08/19/2022
active-directory Sync Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-ldap.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Sync Scim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-scim.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
You want to automatically provision user information from an HCM system to Azure
* [Build a SCIM endpoint and configure user provisioning with Azure AD ](../app-provisioning/use-scim-to-provision-users-and-groups.md)
-* [SCIM 2.0 protocol compliance of the Azure AD Provisioning Service](../app-provisioning/application-provisioning-config-problem-scim-compatibility.md)
+* [SCIM 2.0 protocol compliance of the Azure AD Provisioning Service](../app-provisioning/application-provisioning-config-problem-scim-compatibility.md)
active-directory Entitlement Management Access Package Auto Assignment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md
+
+ Title: Configure an automatic assignment policy for an access package in Azure AD entitlement management - Azure Active Directory
+description: Learn how to configure automatic assignments based on rules for an access package in Azure Active Directory entitlement management.
+
+ Last updated : 08/15/2022
+
+#Customer intent: As an administrator, I want detailed information about how I can edit an access package to include a policy for users to get and lose access package assignments automatically, without them or an administrator needing to request access.
+
+# Configure an automatic assignment policy for an access package in Azure AD entitlement management (Preview)
+
+You can use rules to determine access package assignment based on user properties in Azure Active Directory (Azure AD), part of Microsoft Entra. In Entitlement Management, an access package can have multiple policies, and each policy establishes how users get an assignment to the access package, and for how long. As an administrator, you can establish a policy for automatic assignments by supplying a membership rule that Entitlement Management will follow to create and remove assignments automatically. Similar to a [dynamic group](../enterprise-users/groups-create-rule.md), when an automatic assignment policy is created, user attributes are evaluated for matches with the policy's membership rule. When an attribute changes for a user, these automatic assignment policy rules in the access packages are processed for membership changes. Assignments to users are then added or removed depending on whether they meet the rule criteria.
+
+During this preview, you can have at most one automatic assignment policy in an access package.
+
+This article describes how to create an access package automatic assignment policy for an existing access package.
+
+## Create an automatic assignment policy (Preview)
+
+To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new policy for an access package.
+
+**Prerequisite role:** Global administrator, Identity Governance administrator, Catalog owner, or Access package manager
+
+1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package.
+
+1. Click **Policies** and then **Add auto-assignment policy** to create a new policy.
+
+1. In the first tab, you'll specify the rule. Click **Edit**.
+
+1. Provide a dynamic membership rule, using the [membership rule builder](../enterprise-users/groups-dynamic-membership.md) or by clicking **Edit** on the rule syntax text box.
+
+ > [!NOTE]
+ > The rule builder might not be able to display some rules constructed in the text box. For more information, see [rule builder in the Azure portal](../enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal).
+
+ ![Screenshot of an access package automatic assignment policy rule configuration.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-rule-configuration.png)
+
+1. Click **Save** to close the dynamic membership rule editor, then click **Next** to open the **Custom Extensions** tab.
+
+1. If you have [custom extensions](entitlement-management-logic-apps-integration.md) in your catalog that you wish to run when the policy assigns or removes access, you can add them to this policy. Then click **Next** to open the **Review** tab.
+
+1. Type a name and a description for the policy.
+
+ ![Screenshot of an access package automatic assignment policy review tab.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-review.png)
+
+1. Click **Create** to save the policy.
+
+ > [!NOTE]
+ > In this preview, Entitlement management will automatically create a dynamic security group corresponding to each policy, in order to evaluate the users in scope. This group should not be modified except by Entitlement Management itself. This group may also be modified or deleted automatically by Entitlement Management, so don't use this group for other applications or scenarios.
+
+1. Azure AD will evaluate the users in the organization that are in scope of this rule, and create assignments for those users who don't already have assignments to the access package. It may take several minutes for the evaluation to occur, or for subsequent updates to users' attributes to be reflected in the access package assignments.
+
+## Create an automatic assignment policy programmatically (Preview)
+
+You can also create a policy using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application in a catalog role or with the `EntitlementManagement.ReadWrite.All` permission, can call the [create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-1.0&preserve-view=true) API. In your [request payload](/graph/api/resources/accesspackageassignmentpolicy?view=graph-rest-1.0&preserve-view=true), include the `displayName`, `description`, `specificAllowedTargets`, [`automaticRequestSettings`](/graph/api/resources/accesspackageautomaticrequestsettings?view=graph-rest-1.0&preserve-view=true) and `accessPackage` properties of the policy.
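As an illustration only (not taken from the article), a request body for such a call might look roughly like the following. The nested shapes of `specificAllowedTargets` and `automaticRequestSettings` (`attributeRuleMembers`, `membershipRule`, `requestAccessForAllowedTargets`) and all values are assumptions; consult the linked API reference for the authoritative schema.

```json
{
  "displayName": "Sales department auto-assignment policy",
  "description": "Automatically assign users whose department is Sales",
  "specificAllowedTargets": [
    {
      "@odata.type": "#microsoft.graph.attributeRuleMembers",
      "description": "Users in the Sales department",
      "membershipRule": "(user.department -eq \"Sales\")"
    }
  ],
  "automaticRequestSettings": {
    "requestAccessForAllowedTargets": true
  },
  "accessPackage": {
    "id": "00000000-0000-0000-0000-000000000000"
  }
}
```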
+
+## Next steps
+
+- [View assignments for an access package](entitlement-management-access-package-assignments.md)
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
The way you specify who can request an access package is with a policy. Before c
When you create an access package, you can specify the request, approval and lifecycle settings, which are stored on the first policy of the access package. Most access packages will have a single policy for users to request access, but a single access package can have multiple policies. You would create multiple policies for an access package if you want to allow different sets of users to be granted assignments with different request and approval settings.
-For example, a single policy cannot be used to assign internal and external users to the same access package. However, you can create two policies in the same access package, one for internal users and one for external users. If there are multiple policies that apply to a user, they will be prompted at the time of their request to select the policy they would like to be assigned to. The following diagram shows an access package with two policies.
+For example, a single policy cannot be used to assign internal and external users to the same access package. However, you can create two policies in the same access package, one for internal users and one for external users. If there are multiple policies that apply to a user to request, they will be prompted at the time of their request to select the policy they would like to be assigned to. The following diagram shows an access package with two policies.
-![Multiple policies in an access package](./media/entitlement-management-access-package-request-policy/access-package-policy.png)
+![Diagram that illustrates multiple policies, along with multiple resource roles, can be contained within an access package.](./media/entitlement-management-access-package-request-policy/access-package-policy.png)
+
+In addition to policies for users to request access, you can also have policies for [automatic assignment](entitlement-management-access-package-auto-assignment-policy.md), and policies for direct assignment by administrators or catalog owners.
### How many policies will I need?
For example, a single policy cannot be used to assign internal and external user
| I want to allow users in my directory and also users outside my directory to request an access package | Two | | I want to specify different approval settings for some users | One for each group of users | | I want some users' access package assignments to expire while other users can extend their access | One for each group of users |
-| I want users to request access and other users to be assigned access by an administrator | Two |
+| I want some users to request access and other users to be assigned access by an administrator | Two |
+| I want some users in my organization to receive access automatically, other users in my organization to be able to request, and other users to be assigned access by an administrator | Three |
For information about the priority logic that is used when multiple policies apply, see [Multiple policies](entitlement-management-troubleshoot.md#multiple-policies ).
Follow these steps if you want to allow users in your directory to be able to re
## For users not in your directory
- **Users not in your directory** refers to users who are in another Azure AD directory or domain. These users may not have yet been invited into your directory. Azure AD directories must be configured to be allow invitations in **Collaboration restrictions**. For more information, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
+ **Users not in your directory** refers to users who are in another Azure AD directory or domain. These users may not have yet been invited into your directory. Azure AD directories must be configured to allow invitations in **Collaboration restrictions**. For more information, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
> [!NOTE] > A guest user account will be created for a user not yet in your directory whose request is approved or auto-approved. The guest will be invited, but will not receive an invite email. Instead, they will receive an email when their access package assignment is delivered. By default, later when that guest user no longer has any access package assignments, because their last assignment has expired or been cancelled, that guest user account will be blocked from sign in and subsequently deleted. If you want to have guest users remain in your directory indefinitely, even if they have no access package assignments, you can change the settings for your entitlement management configuration. For more information about the guest user object, see [Properties of an Azure Active Directory B2B collaboration user](../external-identities/user-properties.md).
To change the request and approval settings for an access package, you need to o
1. If you are editing a policy click **Update**. If you are adding a new policy, click **Create**.
+## Creating an access package assignment policy programmatically
+
+You can also create a policy using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application in a catalog role or with the `EntitlementManagement.ReadWrite.All` permission, can call the [create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-1.0&preserve-view=true) API.
+ ## Prevent requests from users with incompatible access In addition to the policy checks on who can request, you may wish to further restrict access, in order to prevent a user who already has some access - via a group or another access package - from obtaining excessive access.
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
Azure AD entitlement management can help address these challenges. To learn mor
Here are some of the capabilities of entitlement management: - Control who can get access to applications, groups, Teams and SharePoint sites, with multi-stage approval, and ensure users don't retain access indefinitely through time-limited assignments and recurring access reviews.
+- Give users access automatically to those resources, based on the user's properties like department or cost center, and remove a user's access when those properties change (preview).
- Delegate to non-administrators the ability to create access packages. These access packages contain resources that users can request, and the delegated access package managers can define policies with rules for which users can request, who must approve their access, and when access expires. - Select connected organizations whose users can request access. When a user who isn't yet in your directory requests access, and is approved, they're automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.
You can have policies for users to request access. In these kinds of policies, a
- The approval process and the users that can approve or deny access - The duration of a user's access assignment, once approved, before the assignment expires
-You can also have policies for users to be assigned access, either by an administrator or automatically.
+You can also have policies for users to be assigned access, either by an administrator or [automatically](entitlement-management-access-package-auto-assignment-policy.md).
The following diagram shows an example of the different elements in entitlement management. It shows one catalog with two example access packages.
Specialized clouds, such as Azure Germany, and Azure China 21Vianet, aren't curr
Ensure that your directory has at least as many Azure AD Premium P2 licenses as you have:
-- Member users who **can** request an access package.
-- Member users who <u>request</u> an access package.
-- Member users who <u>approve requests</u> for an access package.
-- Member users who <u>review assignments</u> for an access package.
-- Member users who have a <u>direct assignment</u> to an access package.
+- Member users who *can* request an access package.
+- Member users who *request* an access package.
+- Member users who *approve requests* for an access package.
+- Member users who *review assignments* for an access package.
+- Member users who have a *direct assignment* or an *automatic assignment* to an access package.
For guest users, licensing needs will depend on the [licensing model](../external-identities/external-identities-pricing.md) you're using. However, the below guest users' activities are considered Azure AD Premium P2 usage:
-- Guest users who <u>request</u> an access package.
-- Guest users who <u>approve requests</u> for an access package.
-- Guest users who <u>review assignments</u> for an access package.
-- Guest users who have a <u>direct assignment</u> to an access package.
+- Guest users who *request* an access package.
+- Guest users who *approve requests* for an access package.
+- Guest users who *review assignments* for an access package.
+- Guest users who have a *direct assignment* to an access package.
Azure AD Premium P2 licenses are **not** required for the following tasks:
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md
There are several ways that you can configure entitlement management for your or
## Govern access for users in your organization
+### Administrator: Assign employees access automatically (preview)
+
+1. [Create a new access package](entitlement-management-access-package-create.md#start-new-access-package)
+1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#resource-roles)
+1. [Add an automatic assignment policy](entitlement-management-access-package-auto-assignment-policy.md)
+ ### Access package 1. [Create a new access package](entitlement-management-access-package-create.md#start-new-access-package)
There are several ways that you can configure entitlement management for your or
## Programmatic administration
-You can also manage access packages, catalogs, policies, requests and assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the [entitlement management API](/graph/tutorial-access-package-api). An application with those application permissions can also use many of those API functions, with the exception of managing resources in catalogs and access packages. An an applications which only needs to operate within specific catalogs, can be added to the **Catalog owner** or **Catalog reader** roles of a catalog to be authorized to update or read within that catalog.
+You can also manage access packages, catalogs, policies, requests and assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the [entitlement management API](/graph/tutorial-access-package-api). An application with those application permissions can also use many of those API functions, with the exception of managing resources in catalogs and access packages. And an application which only needs to operate within specific catalogs can be added to the **Catalog owner** or **Catalog reader** roles of a catalog to be authorized to update or read within that catalog.
## Next steps
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
Once you've started using these identity governance features, you can easily aut
| Creating, updating and deleting AD and Azure AD user accounts automatically for employees |[Plan cloud HR to Azure AD user provisioning](../app-provisioning/plan-cloud-hr-provision.md)| | Updating the membership of a group, based on changes to the member user's attributes | [Create a dynamic group](../enterprise-users/groups-create-rule.md)| | Assigning licenses | [group-based licensing](../enterprise-users/licensing-groups-assign.md) |
+| Adding and removing a user's group memberships, application roles, and SharePoint site roles, based on changes to the user's attributes | [Configure an automatic assignment policy for an access package in entitlement management](entitlement-management-access-package-auto-assignment-policy.md) (preview)|
| Adding and removing a user's group memberships, application roles, and SharePoint site roles, on a specific date | [Configure lifecycle settings for an access package in entitlement management](entitlement-management-access-package-lifecycle-policy.md)| | Running custom workflows when a user requests or receives access, or access is removed | [Trigger Logic Apps in entitlement management](entitlement-management-logic-apps-integration.md) (preview) | | Regularly having memberships of guests in Microsoft groups and Teams reviewed, and removing guest memberships that are denied |[Create an access review](create-access-review.md) |
active-directory Reference Connect Sync Attributes Synchronized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-attributes-synchronized.md
In this case, start with the list of attributes in this topic and identify those
| targetAddress |X |X | | | | telephoneAssistant |X |X | | | | telephoneNumber |X |X | | |
-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premise. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
+| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
| title |X |X | | | | unauthOrig |X |X |X | | | usageLocation |X | | |mechanical property. The user's country/region. Used for license assignment. |
In this case, start with the list of attributes in this topic and identify those
| targetAddress |X |X | | | | telephoneAssistant |X |X | | | | telephoneNumber |X |X | | |
-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premise. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
+| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
| title |X |X | | | | unauthOrig |X |X |X | | | url |X |X | | |
active-directory Datawiza Azure Ad Sso Oracle Jde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-jde.md
To integrate Oracle JDE with Azure AD:
|:--|:-| | Platform | Web | | App Name | Enter a unique application name.|
- | Public Domain | For example: https:/jde-external.example.com. <br>For testing, you can use localhost DNS. If you aren't deploying DAB behind a load balancer, use the **Public Domain** port. |
+ | Public Domain | For example: `https://jde-external.example.com`. <br>For testing, you can use localhost DNS. If you aren't deploying DAB behind a load balancer, use the **Public Domain** port. |
| Listen Port | The port that DAB listens on.| | Upstream Servers | The Oracle JDE implementation URL and port to be protected.|
active-directory Cisco Umbrella User Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
If your endpoints are running AnyConnect or the Cisco Secure Client version 4.10 MR5 or earlier, you will need to synchronize the ObjectGUID attribute for user identity attribution. You will need to reconfigure any Umbrella policy on groups after importing groups from Azure AD. > [!NOTE]
-> The on-premise Umbrella AD Connector should be turned off before importing the ObjectGUID attribute.
+> The on-premises Umbrella AD Connector should be turned off before importing the ObjectGUID attribute.
When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not synchronized from on-premises AD to Azure AD by default. To synchronize this attribute, enable the optional **Directory Extension attribute sync** and select the objectGUID attributes for users.
active-directory Meta Work Accounts Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-work-accounts-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Meta Work Accounts | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Meta Work Accounts.
+ Last updated : 09/03/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Meta Work Accounts
+
+In this tutorial, you'll learn how to integrate Meta Work Accounts with Azure Active Directory (Azure AD). When you integrate Meta Work Accounts with Azure AD, you can:
+
+* Control in Azure AD who has access to Meta Work Accounts.
+* Enable your users to be automatically signed-in to Meta Work Accounts with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Meta Work Accounts single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Meta Work Accounts supports **SP and IDP** initiated SSO.
+
+## Add Meta Work Accounts from the gallery
+
+To configure the integration of Meta Work Accounts into Azure AD, you need to add Meta Work Accounts from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Meta Work Accounts** in the search box.
+1. Select **Meta Work Accounts** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Meta Work Accounts
+
+Configure and test Azure AD SSO with Meta Work Accounts using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Meta Work Accounts.
+
+To configure and test Azure AD SSO with Meta Work Accounts, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Meta Work Accounts SSO](#configure-meta-work-accounts-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Meta Work Accounts test user](#create-meta-work-accounts-test-user)** - to have a counterpart of B.Simon in Meta Work Accounts that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Meta Work Accounts** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://work.facebook.com/company/<ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://work.facebook.com/work/saml.php?__cid=<ID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://work.facebook.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Engage the [Work Accounts team](https://www.workplace.com/help/work) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Meta Work Accounts** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Meta Work Accounts.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Meta Work Accounts**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Meta Work Accounts SSO
+
+1. Log in to your Meta Work Accounts company site as an administrator.
+
+1. Go to **Security** > **Single Sign-On**.
+
+1. Enable **Single-sign on(SSO)** checkbox and click **+Add new SSO Provider**.
+
+ ![Screenshot shows the SSO Account.](./media/meta-work-accounts-tutorial/security.png "SSO Account")
+
+1. On the **Single Sign-On (SSO) Setup** page, perform the following steps:
+
+ ![Screenshot shows the SSO Configuration.](./media/meta-work-accounts-tutorial/certificate.png "Configuration")
+
+ 1. Enter a valid **Name of the SSO Provider**.
+
+ 1. In the **SAML URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ 1. In the **SAML Issuer URL** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ 1. Select the **Enable SAML logout redirection** checkbox and, in the **SAML Logout URL** textbox, paste the **Logout URL** value which you have copied from the Azure portal.
+
+ 1. Open the downloaded **Certificate (Base64)** from the Azure portal in Notepad and paste its content into the **SAML Certificate** textbox.
+
+ 1. Copy **Audience URL** value, paste this value into the **Identifier** textbox in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Copy **ACS (Assertion Consumer Service) URL** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. In the **Test SSO Setup** section, enter a valid email in the textbox and click **Test SSO**.
+
+ 1. Click **Save Changes**.
+
+### Create Meta Work Accounts test user
+
+In this section, you create a user called Britta Simon in Meta Work Accounts. Work with the [Work Accounts team](https://www.workplace.com/help/work) to add the users in the Meta Work Accounts platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Meta Work Accounts Sign on URL where you can initiate the login flow.
+
+* Go to Meta Work Accounts Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Meta Work Accounts for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Meta Work Accounts tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Meta Work Accounts for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Meta Work Accounts, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Tickitlms Learn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tickitlms-learn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type the URL:
- `https:/learn.tickitlms.com/sso/login`
+ `https://learn.tickitlms.com/sso/login`
1. Click **Save**.
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
Don't supply a request body for this method.
example message:
-```
+```json
{
- value:
+ "value":
[ { "id": "ZjViZjJmYzYtNzEzNS00ZDk0LWE2ZmUtYzI2ZTQ1NDNiYzVhPHNjcmlwdD5hbGVydCgneWF5IScpOzwvc2NyaXB0Pg",
example message:
"authorityId": "ffea7eb3-0000-1111-2222-000000000000", "status": "Enabled", "issueNotificationEnabled": false,
- "manifestUrl" : "https:/...",
+ "manifestUrl" : "https://...",
"rules": "<rules JSON>", "displays": [{<display JSON}] },
example message:
"authorityId": "cc55ba22-0000-1111-2222-000000000000", "status": "Enabled", "issueNotificationEnabled": false,
- "manifestUrl" : "https:/...",
+ "manifestUrl" : "https://...",
"rules": "<rules JSON>", "displays": [{<display JSON}] }
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Last updated 5/10/2022
Azure Kubernetes Service (AKS) uses certificates for authentication with many of its components. If you have an RBAC-enabled cluster built after March 2022, it is enabled with certificate auto-rotation. Periodically, you may need to rotate those certificates for security or policy reasons. For example, you may have a policy to rotate all your certificates every 90 days. > [!NOTE]
-> Certificate auto-rotation will not be enabled by default for non-RBAC enabled AKS clusters.
+> Certificate auto-rotation will *only* be enabled by default for RBAC enabled AKS clusters.
This article shows you how certificate rotation works in your AKS cluster.
az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-
## Certificate Auto Rotation
-For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) which has been enabled by default in all Azure regions.
+For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) which has been enabled by default in all Azure regions.
> [!Note] > If you have an existing cluster you have to upgrade that cluster to enable Certificate Auto-Rotation.
+> Do not disable bootstrap to keep your auto-rotation enabled.
For any AKS clusters created or upgraded after March 2022, Azure Kubernetes Service will automatically rotate non-CA certificates on both the control plane and agent nodes within 80% of the client certificate's valid time, before they expire, with no downtime for the cluster.
az aks upgrade -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME
### Limitation
-Auto certificate rotation won't be enabled on a non-RBAC cluster.
+Certificate auto-rotation will only be enabled by default for RBAC enabled AKS clusters.
## Manually rotate your cluster certificates
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
The following rules are used by AKS for applying updates to installed add-ons:
- Any breaking or behavior changes to the add-on will be announced well before (usually 60 days before) a later minor version of Kubernetes is released on AKS. - Add-ons can be patched weekly with every new release of AKS, which will be announced in the release notes. AKS releases can be controlled using [maintenance windows][maintenance-windows] and followed using [release tracker][release-tracker].
+### Exceptions
+
+Add-ons will be upgraded to a new major/minor version (or breaking change) within a Kubernetes minor version if either the cluster's Kubernetes version or the add-on version is in preview.
+
+### Available add-ons
+ The below table shows the available add-ons. | Name | Description | More details |
aks Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-cluster.md
Last updated 06/29/2022
# Scale the node count in an Azure Kubernetes Service (AKS) cluster
-If the resource needs of your applications change, you can manually scale an AKS cluster to run a different number of nodes. When you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications. When you scale up, AKS waits until nodes are marked **Ready** by the Kubernetes cluster before pods are scheduled on them.
+If the resource needs of your applications change, your cluster performance may be impacted due to low capacity on CPU, memory, PID space, or disk sizes. To address these changes, you can manually scale your AKS cluster to run a different number of nodes. When you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications. When you scale up, AKS waits until nodes are marked **Ready** by the Kubernetes cluster before pods are scheduled on them.
## Scale the cluster nodes
In this article, you manually scaled an AKS cluster to increase or decrease the
[set-azakscluster]: /powershell/module/az.aks/set-azakscluster [cluster-autoscaler]: cluster-autoscaler.md [az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az_aks_nodepool_scale
-[update-azaksnodepool]: /powershell/module/az.aks/update-azaksnodepool
+[update-azaksnodepool]: /powershell/module/az.aks/update-azaksnodepool
api-management Developer Portal Integrate Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-integrate-application-insights.md
description: Learn how to integrate Application Insights into your managed or self-hosted developer portal. Previously updated : 03/25/2021 Last updated : 08/16/2022
A popular feature of Azure Monitor is Application Insights. It's an extensible A
Follow these steps to plug Application Insights into your managed or self-hosted developer portal. > [!IMPORTANT]
-> Steps 1 and 2 are not required for managed portals. If you have a managed portal, skip to step 4.
+> Steps 1 - 3 are not required for managed portals. If you have a managed portal, skip to step 4.
1. Set up a [local environment](developer-portal-self-host.md#step-1-set-up-local-environment) for the latest release of the developer portal.
Follow these steps to plug Application Insights into your managed or self-hosted
npm install @paperbits/azure --save ```
-1. In the `startup.publish.ts` file in the `src` folder, import and register the Application Insights module:
+1. In the `startup.publish.ts` file in the `src` folder, import and register the Application Insights module. Add the `AppInsightsPublishModule` after the existing modules in the dependency injection container:
```typescript import { AppInsightsPublishModule } from "@paperbits/azure"; ...
+ const injector = new InversifyInjector();
+ injector.bindModule(new CoreModule());
+ ...
injector.bindModule(new AppInsightsPublishModule());
+ injector.resolve("autostart");
```
-1. Retrieve the portal's configuration:
+1. Retrieve the portal's configuration using the [Content Item - Get](/rest/api/apimanagement/current-ga/content-item/get) REST API:
```http
- GET /contentTypes/document/contentItems/configuration
+ GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.ApiManagement/service/{api-management-service-name}/contentTypes/document/contentItems/configuration?api-version=2021-08-01
```
+
+ Output is similar to:
```json {
- "nodes": [
+ "id": "/contentTypes/document/contentItems/configuration",
+ "type": "Microsoft.ApiManagement/service/contentTypes/contentItems",
+ "name": "configuration",
+ "properties": {
+ "nodes": [
{ "site": { "title": "Microsoft Azure API Management - developer portal",
Follow these steps to plug Application Insights into your managed or self-hosted
} } ]
+ }
} ```
-1. Extend the site configuration from the previous step with Application Insights configuration:
+1. Extend the site configuration from the previous step with Application Insights configuration. Update the configuration using the [Content Item - Create or Update](/rest/api/apimanagement/current-ga/content-item/create-or-update) REST API. Pass the Application Insights instrumentation key in an `integration` node in the request body.
+ ```http
- PUT /contentTypes/document/contentItems/configuration
+ PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.ApiManagement/service/{api-management-service-name}/contentTypes/document/contentItems/configuration?api-version=2021-08-01
``` ```json {
+ "id": "/contentTypes/document/contentItems/configuration",
+ "type": "Microsoft.ApiManagement/service/contentTypes/contentItems",
+ "name": "configuration",
+ "properties": {
"nodes": [ { "site": { ... },
Follow these steps to plug Application Insights into your managed or self-hosted
} } ]
+ }
} ```
+1. After you update the configuration, [republish the portal](api-management-howto-developer-portal-customize.md#publish) for the changes to take effect.
+ ## Next steps Learn more about the developer portal:
api-management Developer Portal Integrate Google Tag Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-integrate-google-tag-manager.md
Follow the steps in this article to plug Google Tag Manager into your managed or
Follow these steps to plug Google Tag Manager into your managed or self-hosted developer portal. > [!IMPORTANT]
-> Steps 1 and 2 are not required for managed portals. If you have a managed portal, skip to step 4.
+> Steps 1 - 3 are not required for managed portals. If you have a managed portal, skip to step 4.
1. Set up a [local environment](developer-portal-self-host.md#step-1-set-up-local-environment) for the latest release of the developer portal.
app-service Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-monitoring.md
Azure App Service provides several monitoring options for monitoring resources f
## Diagnostic Settings (via Azure Monitor)
-Azure Monitor is a monitoring service that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premise. The Azure Monitor data platform collects data into logs and metrics where they can be analyzed. App Service monitoring data can be shipped to Azure Monitor through Diagnostic Settings.
+Azure Monitor is a monitoring service that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises. The Azure Monitor data platform collects data into logs and metrics where they can be analyzed. App Service monitoring data can be shipped to Azure Monitor through Diagnostic Settings.
Diagnostic settings let you export logs to other services, such as Log Analytics, a Storage account, and Event Hubs. Large amounts of data can be queried in Log Analytics by using the SQL-like Kusto query language. You can capture platform logs in Azure Monitor Logs as configured via Diagnostic Settings, and instrument your app further with the dedicated application performance management feature (Application Insights) for additional telemetry and logs.
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on custom containers, see [Run a custom container in Azure]
| `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. || | `DOCKER_REGISTRY_SERVER_PASSWORD` | Password to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. || | `DOCKER_ENABLE_CI` | Set to `true` to enable the continuous deployment for custom containers. The default is `false` for custom containers. ||
-| `WEBSITE_PULL_IMAGE_OVER_VNET` | Connect and pull from a registry inside a Virtual Network or on-premise. Your app will need to be connected to a Virtual Network using VNet integration feature. This setting is also needed for Azure Container Registry with Private Endpoint. ||
+| `WEBSITE_PULL_IMAGE_OVER_VNET` | Connect and pull from a registry inside a Virtual Network or on-premises. Your app will need to be connected to a Virtual Network using VNet integration feature. This setting is also needed for Azure Container Registry with Private Endpoint. ||
| `WEBSITES_WEB_CONTAINER_NAME` | In a Docker Compose app, only one of the containers can be internet accessible. Set to the name of the container defined in the configuration file to override the default container selection. By default, the internet accessible container is the first container to define port 80 or 8080, or, when no such container is found, the first container defined in the configuration file. | | | `WEBSITES_PORT` | For a custom container, the custom port number on the container for App Service to route requests to. By default, App Service attempts automatic port detection of ports 80 and 8080. This setting is *not* injected into the container as an environment variable. || | `WEBSITE_CPU_CORES_LIMIT` | By default, a Windows container runs with all available cores for your chosen pricing tier. To reduce the number of cores, set to the number of desired cores limit. For more information, see [Customize the number of compute cores](configure-custom-container.md?pivots=container-windows#customize-the-number-of-compute-cores).||
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Azure App Service provides a web-based diagnostics console named Kudu. Kudu lets
To use Kudu, go to one of the following URLs. You'll need to sign into the Kudu site with your Azure credentials.
-* For apps deployed in Free, Shared, Basic, Standard, and Premium App Service plans - `https:/<app-name>.scm.azurewebsites.net`
+* For apps deployed in Free, Shared, Basic, Standard, and Premium App Service plans - `https://<app-name>.scm.azurewebsites.net`
* For apps deployed in Isolated service plans - `https://<app-name>.scm.<ase-name>.p.azurewebsites.net` From the main page in Kudu, you can find information about the application-hosting environment, app settings, and deployments, and you can browse the files in the wwwroot directory.
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/private-link-security.md
For more information, see [Key Benefits of Private Link](../../private-link/pri
## Limitations -- In the current implementation of Private Link, Automation account cloud jobs cannot access Azure resources that are secured using private endpoint. For example, Azure Key Vault, Azure SQL, Azure Storage account, etc. To workaround this, use a [Hybrid Runbook Worker](../automation-hybrid-runbook-worker.md) instead. Hence, on-premise VMs are supported to run Hybrid Runbook Workers against an Automation Account with Private Link enabled.
+- In the current implementation of Private Link, Automation account cloud jobs cannot access Azure resources that are secured using a private endpoint. For example, Azure Key Vault, Azure SQL, Azure Storage account, etc. To work around this, use a [Hybrid Runbook Worker](../automation-hybrid-runbook-worker.md) instead. Hence, on-premises VMs are supported to run Hybrid Runbook Workers against an Automation Account with Private Link enabled.
- You need to use the latest version of the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) for Windows or Linux. - The [Log Analytics Gateway](../../azure-monitor/agents/gateway.md) does not support Private Link.
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
description: Learn what services are supported by availability zones and underst
Previously updated : 06/21/2022 Last updated : 08/18/2022
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure Backup](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure Cosmos DB](../cosmos-db/high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure DNS: Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure DNS: Azure DNS Private Resolver](../dns/dns-private-resolver-get-started-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Public IP](../virtual-network/ip-services/public-ip-addresses.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure Site Recovery](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
azure-app-configuration Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/cli-samples.md
Title: Azure CLI samples - Azure App Configuration description: Information about sample scripts provided for Azure App Configuration--++ Last updated 02/19/2020
azure-app-configuration Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-customer-managed-keys.md
Title: Use customer-managed keys to encrypt your configuration data description: Encrypt your configuration data using customer-managed keys--++ Last updated 07/28/2020
azure-app-configuration Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-disaster-recovery.md
Title: Azure App Configuration resiliency and disaster recovery description: Lean how to implement resiliency and disaster recovery with Azure App Configuration.--++ Last updated 07/09/2020
azure-app-configuration Concept Enable Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-enable-rbac.md
Title: Authorize access to Azure App Configuration using Azure Active Directory description: Enable Azure RBAC to authorize access to your Azure App Configuration instance--++ Last updated 05/26/2020
azure-app-configuration Concept Feature Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-feature-management.md
Title: Understand feature management using Azure App Configuration description: Turn features on and off using Azure App Configuration --++
azure-app-configuration Concept Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md
Title: Geo-replication in Azure App Configuration (Preview) description: Details of the geo-replication feature in Azure App Configuration. --++
azure-app-configuration Concept Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-github-action.md
Title: Sync your GitHub repository to App Configuration description: Use GitHub Actions to automatically update your App Configuration instance when you update your GitHub repository.--++ Last updated 05/28/2020
azure-app-configuration Concept Key Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-key-value.md
Title: Understand Azure App Configuration key-value store description: Understand key-value storage in Azure App Configuration, which stores configuration data as key-values. Key-values are a representation of application settings.--++ Last updated 08/04/2020
azure-app-configuration Concept Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-private-endpoint.md
Title: Using private endpoints for Azure App Configuration description: Secure your App Configuration store using private endpoints --++ Last updated 07/15/2020
azure-app-configuration Enable Dynamic Configuration Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md
description: In this tutorial, you learn how to dynamically update the configuration data for ASP.NET Core apps documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Last updated 09/1/2020-+ #Customer intent: I want to dynamically update my app to use the latest configuration data in App Configuration.
azure-app-configuration Enable Dynamic Configuration Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet.md
Title: '.NET Framework Tutorial: dynamic configuration in Azure App Configuration' description: In this tutorial, you learn how to dynamically update the configuration data for .NET Framework apps using Azure App Configuration. -+ ms.devlang: csharp Last updated 07/24/2020-+ #Customer intent: I want to dynamically update my .NET Framework app to use the latest configuration data in App Configuration.
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md
Title: Use Event Grid for App Configuration data change notifications description: Learn how to use Azure App Configuration event subscriptions to send key-value modification events to a web endpoint -+ ms.assetid: ms.devlang: csharp Last updated 03/04/2020-+
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
Title: Azure App Configuration best practices | Microsoft Docs
description: Learn best practices while using Azure App Configuration. Topics covered include key groupings, key-value compositions, App Configuration bootstrap, and more. documentationcenter: ''-+ editor: '' ms.assetid: Last updated 05/02/2019-+
azure-app-configuration Howto Feature Filters Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-feature-filters-aspnet-core.md
description: Learn how to use feature filters to enable conditional feature flag
ms.devlang: csharp --++ Last updated 3/9/2020
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
Title: Use managed identities to access App Configuration description: Authenticate to Azure App Configuration using managed identities--++
azure-app-configuration Howto Labels Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-labels-aspnet-core.md
description: This article describes how to use labels to retrieve app configuration values for the environment in which the app is currently running. ms.devlang: csharp-+ Last updated 3/12/2020-+ # Use labels to provide per-environment configuration values.
azure-app-configuration Howto Move Resource Between Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-move-resource-between-regions.md
Title: Move an App Configuration store to another region description: Learn how to move an App Configuration store to a different region. --++ Last updated 8/23/2021
azure-app-configuration Howto Targetingfilter Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-targetingfilter-aspnet-core.md
description: Learn how to enable staged rollout of features for targeted audiences ms.devlang: csharp--++ Last updated 11/20/2020
azure-app-configuration Integrate Ci Cd Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-ci-cd-pipeline.md
Title: Integrate Azure App Configuration using a continuous integration and delivery pipeline description: Learn to implement continuous integration and delivery using Azure App Configuration -+ Last updated 04/19/2020-+ # Customer intent: I want to use Azure App Configuration data in my CI/CD pipeline.
azure-app-configuration Monitor App Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md
Title: Monitoring Azure App Configuration data reference description: Important Reference material needed when you monitor App Configuration --++ Last updated 05/05/2021
azure-app-configuration Monitor App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md
Title: Monitor Azure App Configuration description: Start here to learn how to monitor App Configuration --++ Last updated 05/05/2021
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration
description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 08/16/2022 --++
azure-app-configuration Push Kv Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/push-kv-devops-pipeline.md
Title: Push settings to App Configuration with Azure Pipelines description: Learn to use Azure Pipelines to push key-values to an App Configuration Store -+ Last updated 02/23/2021-+ # Push settings to App Configuration with Azure Pipelines
azure-app-configuration Quickstart Aspnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-aspnet-core-app.md
Title: Quickstart for Azure App Configuration with ASP.NET Core | Microsoft Docs description: Create an ASP.NET Core app with Azure App Configuration to centralize storage and management of application settings for an ASP.NET Core application. -+ ms.devlang: csharp Last updated 1/3/2022-+ #Customer intent: As an ASP.NET Core developer, I want to learn how to manage all my app settings in one place. # Quickstart: Create an ASP.NET Core app with Azure App Configuration
azure-app-configuration Quickstart Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-app.md
Title: Quickstart for Azure App Configuration with .NET Framework | Microsoft Do
description: In this article, create a .NET Framework app with Azure App Configuration to centralize storage and management of application settings separate from your code. documentationcenter: ''-+ ms.devlang: csharp Last updated 09/28/2020-+ #Customer intent: As a .NET Framework developer, I want to manage all my app settings in one place. # Quickstart: Create a .NET Framework app with Azure App Configuration
azure-app-configuration Quickstart Feature Flag Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md
Title: Quickstart for adding feature flags to ASP.NET Core description: Add feature flags to ASP.NET Core apps and manage them using Azure App Configuration-+ ms.devlang: csharp Last updated 09/28/2020-+ #Customer intent: As an ASP.NET Core developer, I want to use feature flags to control feature availability quickly and confidently.
azure-app-configuration Quickstart Feature Flag Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp.md
Title: Quickstart for adding feature flags to Azure Functions | Microsoft Docs description: In this quickstart, use Azure Functions with feature flags from Azure App Configuration and test the function locally. -+ ms.devlang: csharp Last updated 8/26/2020-+ # Quickstart: Add feature flags to an Azure Functions app
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
Title: Quickstart for adding feature flags to .NET Framework apps | Microsoft Do
description: A quickstart for adding feature flags to .NET Framework apps and managing them in Azure App Configuration documentationcenter: ''-+ editor: '' ms.assetid:
.NET Last updated 10/19/2020-+ #Customer intent: As a .NET Framework developer, I want to use feature flags to control feature availability quickly and confidently. # Quickstart: Add feature flags to a .NET Framework app
azure-app-configuration Rest Api Authentication Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-azure-ad.md
Title: Azure Active Directory REST API - authentication description: Use Azure Active Directory to authenticate to Azure App Configuration by using the REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authentication Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-hmac.md
Title: Azure App Configuration REST API - HMAC authentication description: Use HMAC to authenticate to Azure App Configuration by using the REST API--++ ms.devlang: csharp, golang, java, javascript, powershell, python
azure-app-configuration Rest Api Authentication Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-index.md
 Title: Azure App Configuration REST API - Authentication description: Reference pages for authentication using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authorization Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-azure-ad.md
Title: Azure App Configuration REST API - Azure Active Directory authorization description: Use Azure Active Directory for authorization against Azure App Configuration by using the REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authorization Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-hmac.md
Title: Azure App Configuration REST API - HMAC authorization description: Use HMAC for authorization against Azure App Configuration using the REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authorization Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-index.md
 Title: Azure App Configuration REST API - Authorization description: Reference pages for authorization using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-consistency.md
 Title: Azure App Configuration REST API - consistency description: Reference pages for ensuring real-time consistency by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Fiddler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-fiddler.md
 Title: Azure Active Directory REST API - Test Using Fiddler description: Use Fiddler to test the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-headers.md
Title: Azure App Configuration REST API - Headers description: Reference pages for headers used with the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Key Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-key-value.md
 Title: Azure App Configuration REST API - key-value description: Reference pages for working with key-values by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-keys.md
Title: Azure App Configuration REST API - Keys description: Reference pages for working with keys using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-labels.md
Title: Azure App Configuration REST API - Labels description: Reference pages for working with labels using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-locks.md
Title: Azure App Configuration REST API - locks description: Reference pages for working with key-value locks by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-postman.md
 Title: Azure Active Directory REST API - Test by using Postman description: Use Postman to test the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-revisions.md
Title: Azure App Configuration REST API - key-value revisions description: Reference pages for working with key-value revisions by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-throttling.md
Title: Azure App Configuration REST API - Throttling description: Reference pages for understanding throttling when using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-versioning.md
Title: Azure App Configuration REST API - versioning description: Reference pages for versioning by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api.md
Title: Azure App Configuration REST API description: Reference pages for the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-create-service.md
Title: Azure CLI Script Sample - Create an Azure App Configuration Store
description: Create an Azure App Configuration store using a sample Azure CLI script. See reference article links to commands used in the script. -+ Last updated 01/24/2020-+
azure-app-configuration Cli Delete Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-delete-service.md
Title: Azure CLI Script Sample - Delete an Azure App Configuration Store
description: Delete an Azure App Configuration store using a sample Azure CLI script. See reference article links to commands used in the script. -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Cli Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-export.md
Title: Azure CLI Script Sample - Export from an Azure App Configuration Store
description: Use Azure CLI script to export configuration from Azure App Configuration -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Cli Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-import.md
Title: Azure CLI script sample - Import to an App Configuration store
description: Use Azure CLI script - Importing configuration to Azure App Configuration -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Cli Work With Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-work-with-keys.md
Title: Azure CLI Script Sample - Work with key-values in App Configuration Store
description: Use Azure CLI script to create, view, update and delete key values from App Configuration store -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration
description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 08/17/2022 --++
azure-app-configuration Use Feature Flags Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
Title: Tutorial for using feature flags in a .NET Core app | Microsoft Docs
description: In this tutorial, you learn how to implement feature flags in .NET Core apps. documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Last updated 09/17/2020-+ #Customer intent: I want to control feature availability in my app by using the .NET Core Feature Manager library.
azure-app-configuration Use Key Vault References Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
Title: Tutorial for using Azure App Configuration Key Vault references in an ASP
description: In this tutorial, you learn how to use Azure App Configuration's Key Vault references from an ASP.NET Core app documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Last updated 04/08/2020-+ #Customer intent: I want to update my ASP.NET Core application to reference values stored in Key Vault through App Configuration.
azure-fluid-relay Container Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-recovery.md
Fluid framework periodically saves state, called summary, without any explicit b
We've added the following methods to AzureClient that will enable developers to recover data from corrupted containers.
-[`getContainerVersions(ID, options)`](https://fluidframework.com/docs/apis/azure-client/azureclient/#azure-client-azureclient-getcontainerversions-Method)
+[`getContainerVersions(ID, options)`](https://fluidframework.com/docs/apis/azure-client/azureclient#getcontainerversions-Method)
`getContainerVersions` allows developers to view the previously generated versions of the container.
-[copyContainer(ID, containerSchema)](https://fluidframework.com/docs/apis/azure-client/azureclient/#azure-client-azureclient-copycontainer-Method)
+[`copyContainer(ID, containerSchema)`](https://fluidframework.com/docs/apis/azure-client/azureclient#copycontainer-Method)
`copyContainer` allows developers to generate a new detached container from a specific version of another container.
azure-fluid-relay Fluid Json Web Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/fluid-json-web-token.md
Each part is separated by a period (.) and separately Base64 encoded.
| Claim | Format | Description | ||--|-|
-| documentId | string | Generated by FRS, identifies the document for which the token is being generated. |
+| documentId | string | Generated by Azure Fluid Relay (AFR) service. Identifies the document for which the token is being generated. |
| scope | string[] | Identifies the permissions required by the client on the document or summary. For every scope, you can define the permissions you want to give to the client. | | tenantId | string | Identifies the tenant. | | user | JSON | *Optional* `{ displayName: <display_name>, id: <user_id>, name: <user_name>, }` Identifies users of your application. This is sent back to your application by Alfred, the ordering service. It can be used by your application to identify your users from the response it gets from Alfred. Azure Fluid Relay doesn't validate this information. |
azure-fluid-relay Test Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/test-automation.md
fluid.url: https://fluidframework.com/docs/testing/testing/
Testing and automation are crucial to maintaining the quality and longevity of your code. Internally, Fluid uses a range of unit and integration tests powered by [Mocha](https://mochajs.org/), [Jest](https://jestjs.io/), [Puppeteer](https://github.com/puppeteer/puppeteer), and [Webpack](https://webpack.js.org/).
-You can run tests using the local **@fluidframework/azure-local-service** or using a test tenant in Azure Fluid Relay service. **AzureClient** can be configured to connect to both a remote service and a local service, which enables you to use a single client type between tests against live and local service instances. The only difference is the configuration used to create the client.
+You can run tests using the local [@fluidframework/azure-local-service](https://www.npmjs.com/package/@fluidframework/azure-local-service) or using a test tenant in Azure Fluid Relay service. [AzureClient](https://fluidframework.com/docs/apis/azure-client/azureclient) can be configured to connect to both a remote service and a local service, which enables you to use a single client type between tests against live and local service instances. The only difference is the configuration used to create the client.
## Automation against Azure Fluid Relay
azure-fluid-relay Validate Document Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/validate-document-creator.md
fluid.url: https://fluidframework.com/docs/apis/azure-client/itokenprovider/
# How to: Validate a User Created a Document
-When you create a document in Azure Fluid Relay, the JWT provided by the `ITokenProvider` for the creation request can only be used once. After creating a document, the client must generate a new JWT that contains the document ID provided by the service at creation time. If an application has an authorization service that manages document access control, it will need to know who created a document with a given ID in order to authorize the generation of a new JWT for access to that document.
+When you create a document in Azure Fluid Relay, the JWT provided by the [ITokenProvider](https://fluidframework.com/docs/apis/azure-client/itokenprovider/) for the creation request can only be used once. After creating a document, the client must generate a new JWT that contains the document ID provided by the service at creation time. If an application has an authorization service that manages document access control, it will need to know who created a document with a given ID in order to authorize the generation of a new JWT for access to that document.
## Inform an Authorization Service when a document is Created
-An application can tie into the document creation lifecycle by implementing a public `documentPostCreateCallback()` property in its `TokenProvider`. This callback will be triggered directly after creating the document, before a client requests the new JWT it needs to gain read/write permissions to the document that was created.
+An application can tie into the document creation lifecycle by implementing a public [documentPostCreateCallback()](https://fluidframework.com/docs/apis/azure-client/itokenprovider#documentpostcreatecallback-MethodSignature) method in its `TokenProvider`. This callback will be triggered directly after creating the document, before a client requests the new JWT it needs to gain read/write permissions to the document that was created.
The `documentPostCreateCallback()` receives two parameters: 1) the ID of the document that was created and 2) a JWT signed by the service with no permission scopes. The authorization service can verify the given JWT and use the information in the JWT to grant the correct user permissions for the newly created document.
azure-functions Durable Functions Azure Storage Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-azure-storage-provider.md
+
+ Title: Azure Storage provider for Durable Functions
+description: Learn about the characteristics of the Durable Functions Azure Storage provider.
++ Last updated : 07/18/2022+++
+# Azure Storage provider (Azure Functions)
+
+This document describes the characteristics of the Durable Functions Azure Storage provider, with a focus on performance and scalability aspects. The Azure Storage provider is the default provider. It stores instance states and queues in an Azure Storage (classic) account.
+
+> [!NOTE]
+> For more information on the supported storage providers for Durable Functions and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+
+In the Azure Storage provider, all function execution is driven by Azure Storage queues. Orchestration and entity status and history are stored in Azure Tables. Azure Blobs and blob leases are used to distribute orchestration instances and entities across multiple app instances (also known as *workers* or simply *VMs*). This section goes into more detail on the various Azure Storage artifacts and how they impact performance and scalability.
+
+## Storage representation
+
+A [task hub](durable-functions-task-hubs.md) durably persists all instance states and all messages. For a quick overview of how these are used to track the progress of an orchestration, see the [task hub execution example](durable-functions-task-hubs.md#execution-example).
+
+The Azure Storage provider represents the task hub in storage using the following components:
+
+* Two Azure Tables store the instance states.
+* One Azure Queue stores the activity messages.
+* One or more Azure Queues store the instance messages. Each of these so-called *control queues* represents a [partition](durable-functions-perf-and-scale.md#partition-count) that is assigned a subset of all instance messages, based on the hash of the instance ID.
+* A few extra blob containers are used for lease blobs and/or large messages.
+
+For example, a task hub named `xyz` with `PartitionCount = 4` contains the following queues and tables:
+
+![Diagram showing the Azure Storage provider's storage organization for 4 control queues.](./media/durable-functions-task-hubs/azure-storage.png)
+
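+If you want to inspect these artifacts for a running app, a minimal C# sketch like the following can enumerate them. It assumes a hypothetical task hub named `xyz` and a hypothetical app setting that holds the storage connection string, uses the Azure.Storage.Queues, Azure.Data.Tables, and Azure.Storage.Blobs client libraries, and relies on the convention that the artifact names start with the task hub name.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Data.Tables;
+using Azure.Storage.Blobs;
+using Azure.Storage.Queues;
+
+class TaskHubInspector
+{
+    static async Task Main()
+    {
+        // Hypothetical values: the task hub name and the app setting that holds
+        // the storage connection string used by the Durable Functions app.
+        const string taskHub = "xyz";
+        string connectionString = Environment.GetEnvironmentVariable("MyStorageAccountAppSetting");
+
+        // Control queues (one per partition) and the single work-item queue.
+        var queues = new QueueServiceClient(connectionString);
+        await foreach (var queue in queues.GetQueuesAsync(prefix: taskHub))
+        {
+            Console.WriteLine($"Queue: {queue.Name}");
+        }
+
+        // History and Instances tables (names start with the task hub name).
+        var tables = new TableServiceClient(connectionString);
+        await foreach (var table in tables.QueryAsync())
+        {
+            if (table.Name.StartsWith(taskHub, StringComparison.OrdinalIgnoreCase))
+            {
+                Console.WriteLine($"Table: {table.Name}");
+            }
+        }
+
+        // Lease and large-message blob containers.
+        var blobs = new BlobServiceClient(connectionString);
+        await foreach (var container in blobs.GetBlobContainersAsync(prefix: taskHub))
+        {
+            Console.WriteLine($"Container: {container.Name}");
+        }
+    }
+}
+```
+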
+Next, we describe these components and the role they play in more detail.
+
+### History table
+
+The **History** table is an Azure Storage table that contains the history events for all orchestration instances within a task hub. The name of this table is in the form *TaskHubName*History. As instances run, new rows are added to this table. The partition key of this table is derived from the instance ID of the orchestration. Instance IDs are random by default, ensuring optimal distribution of internal partitions in Azure Storage. The row key for this table is a sequence number used for ordering the history events.
+
+When an orchestration instance needs to run, the corresponding rows of the History table are loaded into memory using a range query within a single table partition. These *history events* are then replayed into the orchestrator function code to get it back into its previously checkpointed state. The use of execution history to rebuild state in this way is influenced by the [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing).
+
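+To illustrate the replay behavior, here is a minimal C# orchestrator sketch (the function and activity names are hypothetical). Each time a new batch of messages arrives for an instance, its History rows are loaded and this code runs again from the top; awaited calls that already completed are satisfied from history instead of being executed again.
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.DurableTask;
+
+public static class ReplayExample
+{
+    [FunctionName("ProcessOrder")]
+    public static async Task<string> RunOrchestrator(
+        [OrchestrationTrigger] IDurableOrchestrationContext context)
+    {
+        string orderId = context.GetInput<string>();
+
+        // On the first execution, each call enqueues a work-item message and the
+        // orchestrator unloads. On replay, completed results are read back from
+        // the History table instead of invoking the activities again.
+        string reservation = await context.CallActivityAsync<string>("ReserveInventory", orderId);
+        string receipt = await context.CallActivityAsync<string>("ChargePayment", orderId);
+
+        return $"{reservation};{receipt}";
+    }
+}
+```
+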
+> [!TIP]
+> Orchestration data stored in the History table includes output payloads from activity and sub-orchestrator functions. Payloads from external events are also stored in the History table. Because the full history is loaded into memory every time an orchestrator needs to execute, a large enough history can result in significant memory pressure on a given VM. The length and size of the orchestration history can be reduced by splitting large orchestrations into multiple sub-orchestrations or by reducing the size of outputs returned by the activity and sub-orchestrator functions it calls. Alternatively, you can reduce memory usage by lowering per-VM [concurrency throttles](durable-functions-perf-and-scale.md#concurrency-throttles) to limit how many orchestrations are loaded into memory concurrently.
+
+### Instances table
+
+The **Instances** table contains the statuses of all orchestration and entity instances within a task hub. As instances are created, new rows are added to this table. The partition key of this table is the orchestration instance ID or entity key and the row key is an empty string. There is one row per orchestration or entity instance.
+
+This table is used to satisfy [instance query requests from code](durable-functions-instance-management.md#query-instances) as well as [status query HTTP API](durable-functions-http-api.md#get-instance-status) calls. It is kept eventually consistent with the contents of the **History** table mentioned previously. The use of a separate Azure Storage table to efficiently satisfy instance query operations in this way is influenced by the [Command and Query Responsibility Segregation (CQRS) pattern](/azure/architecture/patterns/cqrs).
+
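+For example, a minimal HTTP-triggered C# sketch like the following (the function name and query filters are illustrative) uses the durable client binding to run a multi-instance query that is served from the Instances table:
+
+```csharp
+using System;
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.DurableTask;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+
+public static class InstanceQueryExample
+{
+    [FunctionName("GetRunningInstances")]
+    public static async Task<IActionResult> Run(
+        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
+        [DurableClient] IDurableOrchestrationClient client)
+    {
+        // Illustrative filter: running instances created in the last day.
+        var condition = new OrchestrationStatusQueryCondition
+        {
+            RuntimeStatus = new[] { OrchestrationRuntimeStatus.Running },
+            CreatedTimeFrom = DateTime.UtcNow.AddDays(-1),
+            PageSize = 100
+        };
+
+        OrchestrationStatusQueryResult result =
+            await client.ListInstancesAsync(condition, CancellationToken.None);
+
+        return new OkObjectResult(result.DurableOrchestrationState);
+    }
+}
+```
+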
+> [!TIP]
+> The partitioning of the *Instances* table allows it to store millions of orchestration instances without any noticeable impact on runtime performance or scale. However, the number of instances can have a significant impact on [multi-instance query](durable-functions-instance-management.md#query-all-instances) performance. To control the amount of data stored in these tables, consider periodically [purging old instance data](durable-functions-instance-management.md#purge-instance-history).
+
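+A minimal sketch of such periodic cleanup, assuming a hypothetical timer-triggered function and a 30-day retention window:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using DurableTask.Core;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.DurableTask;
+using Microsoft.Extensions.Logging;
+
+public static class PurgeHistoryExample
+{
+    // Runs nightly and deletes state for terminal instances older than 30 days.
+    [FunctionName("PurgeOldInstances")]
+    public static async Task Run(
+        [TimerTrigger("0 0 2 * * *")] TimerInfo timer,
+        [DurableClient] IDurableOrchestrationClient client,
+        ILogger log)
+    {
+        PurgeHistoryResult result = await client.PurgeInstanceHistoryAsync(
+            DateTime.MinValue,
+            DateTime.UtcNow.AddDays(-30),
+            new List<OrchestrationStatus>
+            {
+                OrchestrationStatus.Completed,
+                OrchestrationStatus.Failed,
+                OrchestrationStatus.Terminated
+            });
+
+        log.LogInformation("Purged {Count} instances.", result.InstancesDeleted);
+    }
+}
+```
+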
+### Queues
+
+Orchestrator, entity, and activity functions are all triggered by internal queues in the function app's task hub. Using queues in this way provides reliable "at-least-once" message delivery guarantees. There are two types of queues in Durable Functions: the **control queue** and the **work-item queue**.
+
+#### The work-item queue
+
+There is one work-item queue per task hub in Durable Functions. It's a basic queue and behaves similarly to any other `queueTrigger` queue in Azure Functions. This queue is used to trigger stateless *activity functions* by dequeueing a single message at a time. Each of these messages contains activity function inputs and additional metadata, such as which function to execute. When a Durable Functions application scales out to multiple VMs, these VMs all compete to acquire tasks from the work-item queue.
+
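+For reference, a typical activity function triggered from the work-item queue looks like the following minimal C# sketch (the function name and payload are hypothetical):
+
+```csharp
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.DurableTask;
+using Microsoft.Extensions.Logging;
+
+public static class ActivityExample
+{
+    // Each invocation corresponds to a single message dequeued from the task hub's
+    // work-item queue. Any worker in the app can process it, so keep the code stateless.
+    [FunctionName("ReserveInventory")]
+    public static string Run([ActivityTrigger] string orderId, ILogger log)
+    {
+        log.LogInformation("Reserving inventory for order {OrderId}.", orderId);
+        return $"reserved:{orderId}";
+    }
+}
+```
+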
+#### Control queue(s)
+
+There are multiple *control queues* per task hub in Durable Functions. A *control queue* is more sophisticated than the simpler work-item queue. Control queues are used to trigger the stateful orchestrator and entity functions. Because the orchestrator and entity function instances are stateful singletons, it's important that each orchestration or entity is only processed by one worker at a time. To achieve this constraint, each orchestration instance or entity is assigned to a single control queue. These control queues are load balanced across workers to ensure that each queue is only processed by one worker at a time. More details on this behavior can be found in subsequent sections.
+
+Control queues contain a variety of orchestration lifecycle message types. Examples include [orchestrator control messages](durable-functions-instance-management.md), activity function *response* messages, and timer messages. As many as 32 messages will be dequeued from a control queue in a single poll. These messages contain payload data as well as metadata, including which orchestration instance they're intended for. If multiple dequeued messages are intended for the same orchestration instance, they will be processed as a batch.
+
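+To make these message types concrete, the following minimal C# sketch (names are illustrative) shows an orchestrator that schedules a durable timer and then waits for an external event, plus a client function that raises the event. Each of these operations results in a message on the instance's control queue:
+
+```csharp
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.DurableTask;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+
+public static class ControlQueueMessageExample
+{
+    [FunctionName("WaitForApproval")]
+    public static async Task<bool> RunOrchestrator(
+        [OrchestrationTrigger] IDurableOrchestrationContext context)
+    {
+        // Durable timer: creates a timer message that becomes visible after 5 minutes.
+        await context.CreateTimer(context.CurrentUtcDateTime.AddMinutes(5), CancellationToken.None);
+
+        // Completes when an "Approval" event message arrives on the control queue.
+        return await context.WaitForExternalEvent<bool>("Approval");
+    }
+
+    // A client function elsewhere in the app sends the external event message.
+    [FunctionName("SendApproval")]
+    public static async Task<IActionResult> SendApproval(
+        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "approve/{instanceId}")] HttpRequest req,
+        string instanceId,
+        [DurableClient] IDurableOrchestrationClient client)
+    {
+        await client.RaiseEventAsync(instanceId, "Approval", true);
+        return new OkResult();
+    }
+}
+```
+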
+Control queue messages are constantly polled using a background thread. The batch size of each queue poll is controlled by the `controlQueueBatchSize` setting in host.json and has a default of 32 (the maximum value supported by Azure Queues). The maximum number of prefetched control-queue messages that are buffered in memory is controlled by the `controlQueueBufferThreshold` setting in host.json. The default value for `controlQueueBufferThreshold` varies depending on a variety of factors, including the type of hosting plan. For more information on these settings, see the [host.json schema](../functions-host-json.md#durabletask) documentation.
+
+> [!TIP]
+> Increasing the value for `controlQueueBufferThreshold` allows a single orchestration or entity to process events faster. However, increasing this value can also result in higher memory usage. The higher memory usage is partly due to pulling more messages off the queue and partly due to fetching more orchestration histories into memory. Reducing the value for `controlQueueBufferThreshold` can therefore be an effective way to reduce memory usage.
+
+#### Queue polling
+
+The durable task extension implements a random exponential back-off algorithm to reduce the effect of idle-queue polling on storage transaction costs. When a message is found, the runtime immediately checks for another message. When no message is found, it waits for a period of time before trying again. With each subsequent failed attempt to get a queue message, the wait time continues to increase until it reaches the maximum wait time, which defaults to 30 seconds.
+
+The maximum polling delay is configurable via the `maxQueuePollingInterval` property in the [host.json file](../functions-host-json.md#durabletask). Setting this property to a higher value could result in higher message processing latencies. Higher latencies would be expected only after periods of inactivity. Setting this property to a lower value could result in [higher storage costs](durable-functions-billing.md#azure-storage-transactions) due to increased storage transactions.
+
+> [!NOTE]
+> When running in the Azure Functions Consumption and Premium plans, the [Azure Functions Scale Controller](../event-driven-scaling.md) will poll each control and work-item queue once every 10 seconds. This additional polling is necessary to determine when to activate function app instances and to make scale decisions. At the time of writing, this 10 second interval is constant and cannot be configured.
+
+#### Orchestration start delays
+
+Orchestration instances are started by putting an `ExecutionStarted` message in one of the task hub's control queues. Under certain conditions, you may observe multi-second delays between when an orchestration is scheduled to run and when it actually starts running. During this time interval, the orchestration instance remains in the `Pending` state. There are two potential causes of this delay:
+
+* **Backlogged control queues**: If the control queue for this instance contains a large number of messages, it may take time before the `ExecutionStarted` message is received and processed by the runtime. Message backlogs can happen when orchestrations are processing lots of events concurrently. Events that go into the control queue include orchestration start events, activity completions, durable timers, termination, and external events. If this delay happens under normal circumstances, consider creating a new task hub with a larger number of partitions. Configuring more partitions will cause the runtime to create more control queues for load distribution. Each partition corresponds 1:1 with a control queue, up to a maximum of 16 partitions.
+
+* **Back-off polling delays**: Another common cause of orchestration delays is the [previously described back-off polling behavior for control queues](#queue-polling). However, this delay is only expected when an app is scaled out to two or more instances. If there is only one app instance, or if the app instance that starts the orchestration is also the instance that is polling the target control queue, there will not be a queue polling delay. Back-off polling delays can be reduced by updating the **host.json** settings, as described previously.
+
+### Blobs
+
+In most cases, Durable Functions doesn't use Azure Storage Blobs to persist data. However, queues and tables have [size limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-queue-storage-limits) that can prevent Durable Functions from persisting all of the required data into a storage row or queue message. For example, when a piece of data that needs to be persisted to a queue is greater than 45 KB when serialized, Durable Functions will compress the data and store it in a blob instead. When persisting data to blob storage in this way, Durable Functions stores a reference to that blob in the table row or queue message. When Durable Functions needs to retrieve the data, it will automatically fetch it from the blob. These blobs are stored in the blob container `<taskhub>-largemessages`.
+
+#### Performance considerations
+
+The extra compression and blob operation steps for large messages can be expensive in terms of CPU and I/O latency costs. Additionally, Durable Functions needs to load persisted data in memory, and may do so for many different function executions at the same time. As a result, persisting large data payloads can cause high memory usage as well. To minimize memory overhead, consider persisting large data payloads manually (for example, in blob storage) and instead pass around references to this data. This way your code can load the data only when needed to avoid redundant loads during [orchestrator function replays](durable-functions-orchestrations.md#reliability). However, storing payloads to local disks is *not* recommended, because on-disk state isn't guaranteed to be available; functions may execute on different VMs throughout their lifetimes.
+
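+One way to apply this guidance is to have the activity write its large output to a blob container that you own and return only a reference. The following minimal C# sketch assumes a hypothetical app setting, container name, and payload:
+
+```csharp
+using System;
+using System.IO;
+using System.Text;
+using System.Threading.Tasks;
+using Azure.Storage.Blobs;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.DurableTask;
+
+public static class LargePayloadExample
+{
+    [FunctionName("ProduceLargeReport")]
+    public static async Task<string> Run([ActivityTrigger] string reportId)
+    {
+        // Stand-in for real work that produces a large serialized payload.
+        string largeContent = new string('x', 1024 * 1024);
+
+        var container = new BlobContainerClient(
+            Environment.GetEnvironmentVariable("PayloadStorageConnection"), // hypothetical app setting
+            "report-payloads");                                             // hypothetical container
+        await container.CreateIfNotExistsAsync();
+
+        string blobName = $"{reportId}.json";
+        using var stream = new MemoryStream(Encoding.UTF8.GetBytes(largeContent));
+        await container.UploadBlobAsync(blobName, stream);
+
+        // Only this small reference passes through queues, tables, and replay history.
+        return blobName;
+    }
+}
+```
+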
+### Storage account selection
+
+The queues, tables, and blobs used by Durable Functions are created in a configured Azure Storage account. The account to use can be specified using the `durableTask/storageProvider/connectionStringName` setting (or `durableTask/azureStorageConnectionStringName` setting in Durable Functions 1.x) in the **host.json** file.
+
+#### Durable Functions 2.x
+
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "storageProvider": {
+ "connectionStringName": "MyStorageAccountAppSetting"
+ }
+ }
+ }
+}
+```
+
+#### Durable Functions 1.x
+
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "azureStorageConnectionStringName": "MyStorageAccountAppSetting"
+ }
+ }
+}
+```
+
+If not specified, the default `AzureWebJobsStorage` storage account is used. For performance-sensitive workloads, however, configuring a non-default storage account is recommended. Durable Functions uses Azure Storage heavily, and using a dedicated storage account isolates Durable Functions storage usage from the internal usage by the Azure Functions host.
+
+> [!NOTE]
+> Standard general purpose Azure Storage accounts are required when using the Azure Storage provider. All other storage account types are not supported. We highly recommend using legacy v1 general purpose storage accounts for Durable Functions. The newer v2 storage accounts can be significantly more expensive for Durable Functions workloads. For more information on Azure Storage account types, see the [Storage account overview](../../storage/common/storage-account-overview.md) documentation.
+
+### Orchestrator scale-out
+
+While activity functions can be scaled out infinitely by adding more VMs elastically, individual orchestrator instances and entities are constrained to inhabit a single partition and the maximum number of partitions is bounded by the `partitionCount` setting in your `host.json`.
+
+> [!NOTE]
+> Generally speaking, orchestrator functions are intended to be lightweight and should not require large amounts of computing power. It is therefore not necessary to create a large number of control-queue partitions to get great throughput for orchestrations. Most of the heavy work should be done in stateless activity functions, which can be scaled out infinitely.
+
+The number of control queues is defined in the **host.json** file. The following example host.json snippet sets the `durableTask/storageProvider/partitionCount` property (or `durableTask/partitionCount` in Durable Functions 1.x) to `3`. Note that there are as many control queues as there are partitions.
+
+#### Durable Functions 2.x
+
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "storageProvider": {
+ "partitionCount": 3
+ }
+ }
+ }
+}
+```
+
+#### Durable Functions 1.x
+
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "partitionCount": 3
+ }
+ }
+}
+```
+
+A task hub can be configured with between 1 and 16 partitions. If not specified, the default partition count is **4**.
+
+In low-traffic scenarios, your application will be scaled in, so partitions will be managed by a small number of workers. As an example, consider the diagram below.
+
+![Scale-in orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-1.png)
+
+In the previous diagram, orchestrators 1 through 6 are load balanced across partitions. Similarly, partitions, like activities, are load balanced across workers, regardless of the number of orchestrators that get started.
+
+If you're running on the Azure Functions Consumption or Elastic Premium plans, or if you have load-based auto-scaling configured, more workers will get allocated as traffic increases and partitions will eventually load balance across all workers. If we continue to scale out, each partition will eventually be managed by a single worker. Activities, on the other hand, will continue to be load-balanced across all workers. This is shown in the image below.
+
+![First scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-2.png)
+
+The upper bound on the number of concurrent _active_ orchestrations at *any given time* is equal to the number of workers allocated to your application _times_ your value for `maxConcurrentOrchestratorFunctions`. This upper bound becomes more precise when your partitions are fully scaled out across workers. When fully scaled out, and because each worker has only a single Functions host instance, the maximum number of _active_ concurrent orchestrator instances is equal to your number of partitions _times_ your value for `maxConcurrentOrchestratorFunctions`.
+
+> [!NOTE]
+> In this context, *active* means that an orchestration or entity is loaded into memory and processing *new events*. If the orchestration or entity is waiting for more events, such as the return value of an activity function, it gets unloaded from memory and is no longer considered *active*. Orchestrations and entities will be subsequently reloaded into memory only when there are new events to process. There's no practical maximum number of *total* orchestrations or entities that can run on a single VM, even if they're all in the "Running" state. The only limitation is the number of *concurrently active* orchestration or entity instances.
+
+The image below illustrates a fully scaled-out scenario where more orchestrators are added but some are inactive, shown in grey.
+
+![Second scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-3.png)
+
+During scale-out, control queue leases may be redistributed across Functions host instances to ensure that partitions are evenly distributed. These leases are internally implemented as Azure Blob storage leases and ensure that any individual orchestration instance or entity only runs on a single host instance at a time. If a task hub is configured with three partitions (and therefore three control queues), orchestration instances and entities can be load-balanced across all three lease-holding host instances. Additional VMs can be added to increase capacity for activity function execution.
+
+The following diagram illustrates how the Azure Functions host interacts with the storage entities in a scaled out environment.
+
+![Scale diagram](./media/durable-functions-perf-and-scale/scale-interactions-diagram.png)
+
+As shown in the previous diagram, all VMs compete for messages on the work-item queue. However, only three VMs can acquire messages from control queues, and each VM locks a single control queue.
+
+Orchestration instances and entities are distributed across all control queue instances. The distribution is done by hashing the instance ID of the orchestration or the entity name and key pair. Orchestration instance IDs by default are random GUIDs, ensuring that instances are equally distributed across all control queues.
+
+
+## Extended sessions
+
+Extended sessions is a [caching mechanism](durable-functions-perf-and-scale.md#instance-caching) that keeps orchestrations and entities in memory even after they finish processing messages. The typical effect of enabling extended sessions is reduced I/O against the underlying durable store and overall improved throughput.
+
+You can enable extended sessions by setting `durableTask/extendedSessionsEnabled` to `true` in the **host.json** file. The `durableTask/extendedSessionIdleTimeoutInSeconds` setting can be used to control how long an idle session will be held in memory:
+
+**Functions 2.0**
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "extendedSessionsEnabled": true,
+ "extendedSessionIdleTimeoutInSeconds": 30
+ }
+ }
+}
+```
+
+**Functions 1.0**
+```json
+{
+ "durableTask": {
+ "extendedSessionsEnabled": true,
+ "extendedSessionIdleTimeoutInSeconds": 30
+ }
+}
+```
+
+There are two potential downsides of this setting to be aware of:
+
+1. There's an overall increase in function app memory usage because idle instances are not unloaded from memory as quickly.
+2. There can be an overall decrease in throughput if there are many concurrent, distinct, short-lived orchestrator or entity function executions.
+
+As an example, if `durableTask/extendedSessionIdleTimeoutInSeconds` is set to 30 seconds, then a short-lived orchestrator or entity function episode that executes in less than 1 second still occupies memory for 30 seconds. It also counts against the `durableTask/maxConcurrentOrchestratorFunctions` quota mentioned previously, potentially preventing other orchestrator or entity functions from running.
+
+The specific effects of extended sessions on orchestrator and entity functions are described in the next sections.
+
+> [!NOTE]
+> Extended sessions are currently only supported in .NET languages, like C# or F#. Setting `extendedSessionsEnabled` to `true` for other platforms can lead to runtime issues, such as silently failing to execute activity and orchestration-triggered functions.
+
+### Orchestrator function replay
+
+As mentioned previously, orchestrator functions are replayed using the contents of the **History** table. By default, the orchestrator function code is replayed every time a batch of messages is dequeued from a control queue. Even if you are using the fan-out, fan-in pattern and are waiting for all tasks to complete (for example, using `Task.WhenAll()` in .NET, `context.df.Task.all()` in JavaScript, or `context.task_all()` in Python), there will be replays that occur as batches of task responses are processed over time. When extended sessions are enabled, orchestrator function instances are held in memory longer and new messages can be processed without a full history replay.
+
+The performance improvement of extended sessions is most often observed in the following situations:
+
+* When there are a limited number of orchestration instances running concurrently.
+* When orchestrations have a large number of sequential actions (for example, hundreds of activity function calls) that complete quickly.
+* When orchestrations fan-out and fan-in a large number of actions that complete around the same time.
+* When orchestrator functions need to process large messages or do any CPU-intensive data processing.
+
+In all other situations, there is typically no observable performance improvement for orchestrator functions.
+
+> [!NOTE]
+> These settings should only be used after an orchestrator function has been fully developed and tested. The default aggressive replay behavior can be useful for detecting [orchestrator function code constraints](durable-functions-code-constraints.md) violations at development time, which is why extended sessions are disabled by default.
+
+### Performance targets
+
+The following table shows the expected *maximum* throughput numbers for the scenarios described in the [Performance Targets](durable-functions-perf-and-scale.md#performance-targets) section of the [Performance and Scale](durable-functions-perf-and-scale.md) article.
+
+"Instance" refers to a single instance of an orchestrator function running on a single small ([A1](../../virtual-machines/sizes-previous-gen.md)) VM in Azure App Service. In all cases, it is assumed that [extended sessions](#orchestrator-function-replay) are enabled. Actual results may vary depending on the CPU or I/O work performed by the function code.
+
+| Scenario | Maximum throughput |
+|-|-|
+| Sequential activity execution | 5 activities per second, per instance |
+| Parallel activity execution (fan-out) | 100 activities per second, per instance |
+| Parallel response processing (fan-in) | 150 responses per second, per instance |
+| External event processing | 50 events per second, per instance |
+| Entity operation processing | 64 operations per second |
+
+If you are not seeing the throughput numbers you expect and your CPU and memory usage appears healthy, check to see whether the cause is related to [the health of your storage account](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md#troubleshooting-guidance). The Durable Functions extension can put significant load on an Azure Storage account and sufficiently high loads may result in storage account throttling.
+
+> [!TIP]
+> In some cases you can significantly increase the throughput of external events, activity fan-in, and entity operations by increasing the value of the `controlQueueBufferThreshold` setting in **host.json**. Increasing this value beyond its default causes the Durable Task Framework storage provider to use more memory to prefetch these events more aggressively, reducing delays associated with dequeueing messages from the Azure Storage control queues. For more information, see the [host.json](durable-functions-bindings.md#host-json) reference documentation.
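+
+As a rough sketch, `controlQueueBufferThreshold` can be raised under the `storageProvider` section of **host.json** (Durable Functions 2.x); the value below is arbitrary and should be validated against your own memory and throughput measurements.
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "storageProvider": {
+        "controlQueueBufferThreshold": 256
+      }
+    }
+  }
+}
+```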
+
+### High throughput processing
+
+The architecture of the Azure Storage backend puts certain limitations on the maximum theoretical performance and scalability of Durable Functions. If your testing shows that Durable Functions on Azure Storage won't meet your throughput requirements, you should consider instead using the [Netherite storage provider for Durable Functions](durable-functions-storage-providers.md#netherite).
+
+To compare the achievable throughput for various basic scenarios, see the section [Basic Scenarios](https://microsoft.github.io/durabletask-netherite/#/scenarios) of the Netherite storage provider documentation.
+
+The Netherite storage backend was designed and developed by [Microsoft Research](https://www.microsoft.com/research). It uses [Azure Event Hubs](../../event-hubs/event-hubs-about.md) and the [FASTER](https://www.microsoft.com/research/project/faster/) database technology on top of [Azure Page Blobs](../../storage/blobs/storage-blob-pageblob-overview.md). The design of Netherite enables significantly higher-throughput processing of orchestrations and entities compared to other providers. In some benchmark scenarios, throughput was shown to increase by more than an order of magnitude when compared to the default Azure Storage provider.
+
+For more information on the supported storage providers for Durable Functions and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md)
azure-functions Durable Functions Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-billing.md
Durable Functions uses Azure Storage by default to keep state persistent, proces
Several factors contribute to the actual Azure Storage costs incurred by your Durable Functions app: * A single function app is associated with a single task hub, which shares a set of Azure Storage resources. These resources are used by all durable functions in a function app. The actual number of functions in the function app has no effect on Azure Storage transaction costs.
-* Each function app instance internally polls multiple queues in the storage account by using an exponential-backoff polling algorithm. An idle app instance polls the queues less often than does an active app, which results in fewer transaction costs. For more information about Durable Functions queue-polling behavior, see the [queue-polling section of the Performance and Scale article](durable-functions-perf-and-scale.md#queue-polling).
+* Each function app instance internally polls multiple queues in the storage account by using an exponential-backoff polling algorithm. An idle app instance polls the queues less often than does an active app, which results in fewer transaction costs. For more information about Durable Functions queue-polling behavior when using the Azure Storage provider, see the [queue-polling section](durable-functions-azure-storage-provider.md#queue-polling) of the Azure Storage provider documentation.
* When running in the Azure Functions Consumption or Premium plans, the [Azure Functions scale controller](../event-driven-scaling.md) regularly polls all task-hub queues in the background. If a function app is under light to moderate scale, only a single scale controller instance will poll these queues. If the function app scales out to a large number of instances, more scale controller instances might be added. These additional scale controller instances can increase the total queue-transaction costs. * Each function app instance competes for a set of blob leases. These instances will periodically make calls to the Azure Blob service either to renew held leases or to attempt to acquire new leases. The task hub's configured partition count determines the number of blob leases. Scaling out to a larger number of function app instances likely increases the Azure Storage transaction costs associated with these lease operations.
azure-functions Durable Functions Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-diagnostics.md
Azure Functions supports debugging function code directly, and that same support
* **Replay**: Orchestrator functions regularly [replay](durable-functions-orchestrations.md#reliability) when new inputs are received. This behavior means a single *logical* execution of an orchestrator function can result in hitting the same breakpoint multiple times, especially if it is set early in the function code. * **Await**: Whenever an `await` is encountered in an orchestrator function, it yields control back to the Durable Task Framework dispatcher. If it is the first time a particular `await` has been encountered, the associated task is *never* resumed. Because the task never resumes, stepping *over* the await (F10 in Visual Studio) is not possible. Stepping over only works when a task is being replayed. * **Messaging timeouts**: Durable Functions internally uses queue messages to drive execution of orchestrator, activity, and entity functions. In a multi-VM environment, breaking into the debugging for extended periods of time could cause another VM to pick up the message, resulting in duplicate execution. This behavior exists for regular queue-trigger functions as well, but is important to point out in this context since the queues are an implementation detail.
-* **Stopping and starting**: Messages in Durable functions persist between debug sessions. If you stop debugging and terminate the local host process while a durable function is executing, that function may re-execute automatically in a future debug session. This behavior can be confusing when not expected. Clearing all messages from the [internal storage queues](durable-functions-perf-and-scale.md#internal-queue-triggers) between debug sessions is one technique to avoid this behavior.
+* **Stopping and starting**: Messages in Durable functions persist between debug sessions. If you stop debugging and terminate the local host process while a durable function is executing, that function may re-execute automatically in a future debug session. This behavior can be confusing when not expected. Using a [fresh task hub](durable-functions-task-hubs.md#task-hub-management) or clearing the task hub contents between debug sessions is one technique to avoid this behavior.
> [!TIP] > When setting breakpoints in orchestrator functions, if you want to only break on non-replay execution, you can set a conditional breakpoint that breaks only if the "is replaying" value is `false`.
azure-functions Durable Functions Http Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-http-api.md
Request parameters for this API include the default set mentioned previously as
| **`createdTimeFrom`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or after the given ISO8601 timestamp.| | **`createdTimeTo`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or before the given ISO8601 timestamp.| | **`runtimeStatus`** | Query string | Optional parameter. When specified, filters the list of returned instances based on their runtime status. To see the list of possible runtime status values, see the [Querying instances](durable-functions-instance-management.md) article. |
-| **`instanceIdPrefix`** | Query string | Optional parameter. When specified, filters the list of returned instances to include only instances whose instance id starts with the specified prefix string. Available starting with [version 2.7.2](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask/2.7.2) of the extension. |
+| **`instanceIdPrefix`** | Query string | Optional parameter. When specified, filters the list of returned instances to include only instances whose instance ID starts with the specified prefix string. Available starting with [version 2.7.2](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask/2.7.2) of the extension. |
| **`top`** | Query string | Optional parameter. When specified, limits the number of instances returned by the query. | ### Response
Here is an example of response payloads including the orchestration status (form
``` > [!NOTE]
-> This operation can be very expensive in terms of Azure Storage I/O if you are using the [default Azure Storage provider](durable-functions-storage-providers.md#azure-storage) and if there are a lot of rows in the Instances table. More details on Instance table can be found in the [Performance and scale in Durable Functions (Azure Functions)](durable-functions-perf-and-scale.md#instances-table) documentation.
+> This operation can be very expensive in terms of Azure Storage I/O if you are using the [default Azure Storage provider](durable-functions-storage-providers.md#azure-storage) and if there are a lot of rows in the Instances table. More details on the Instances table can be found in the [Azure Storage provider](durable-functions-azure-storage-provider.md#instances-table) documentation.
If more results exist, a continuation token is returned in the response header. The name of the header is `x-ms-continuation-token`.
+> [!CAUTION]
+> The query result may return fewer items than the limit specified by `top`. When receiving results, you should therefore *always* check to see if there is a continuation token.
+ If you set the continuation token value in the next request header, you can get the next page of results. The name of this request header is also `x-ms-continuation-token`. ## Purge single instance history
Request parameters for this API include the default set mentioned previously as
| **`runtimeStatus`** | Query string | Optional parameter. When specified, filters the list of purged instances based on their runtime status. To see the list of possible runtime status values, see the [Querying instances](durable-functions-instance-management.md) article. | > [!NOTE]
-> This operation can be very expensive in terms of Azure Storage I/O if you are using the [default Azure Storage provider](durable-functions-storage-providers.md#azure-storage) and if there are many rows in the Instances and/or History tables. More details on these tables can be found in the [Performance and scale in Durable Functions (Azure Functions)](durable-functions-perf-and-scale.md#instances-table) documentation.
+> This operation can be very expensive in terms of Azure Storage I/O if you are using the [default Azure Storage provider](durable-functions-storage-providers.md#azure-storage) and if there are many rows in the Instances and/or History tables. More details on these tables can be found in the [Azure Storage provider](durable-functions-azure-storage-provider.md#instances-table) documentation.
### Response
azure-functions Durable Functions Perf And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-perf-and-scale.md
Title: Performance and scale in Durable Functions - Azure
description: Learn about the unique scaling characteristics of the Durable Functions extension for Azure Functions. Previously updated : 05/13/2021 Last updated : 07/18/2022 # Performance and scale in Durable Functions (Azure Functions)
-To optimize performance and scalability, it's important to understand the unique scaling characteristics of [Durable Functions](durable-functions-overview.md).
+To optimize performance and scalability, it's important to understand the unique scaling characteristics of [Durable Functions](durable-functions-overview.md). In this article, we explain how workers are scaled based on load, and how you can tune the various parameters.
-## Azure Storage provider
+## Worker scaling
-The default configuration for Durable Functions stores this runtime state in an Azure Storage (classic) account. All function execution is driven by Azure Storage queues. Orchestration and entity status and history is stored in Azure Tables. Azure Blobs and blob leases are used to distribute orchestration instances and entities across multiple app instances (also known as *workers* or simply *VMs*). This section goes into more detail on the various Azure Storage artifacts and how they impact performance and scalability.
+A fundamental benefit of the [task hub concept](durable-functions-task-hubs.md) is that the number of workers that process task hub work items can be continuously adjusted. In particular, applications can add more workers (*scale out*) if the work needs to be processed more quickly, and can remove workers (*scale in*) if there is not enough work to keep the workers busy.
+It is even possible to *scale to zero* if the task hub is completely idle. When scaled to zero, there are no workers at all; only the scale controller and the storage need to remain active.
-> [!NOTE]
-> This document primarily focuses on the performance and scalability characteristics of Durable Functions using the default Azure Storage provider. However, other storage providers are also available. For more information on the supported storage providers for Durable Functions and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
-
-### History table
-
-The **History** table is an Azure Storage table that contains the history events for all orchestration instances within a task hub. The name of this table is in the form *TaskHubName*History. As instances run, new rows are added to this table. The partition key of this table is derived from the instance ID of the orchestration. Instance IDs are random by default, ensuring optimal distribution of internal partitions in Azure Storage. The row key for this table is a sequence number used for ordering the history events.
-
-When an orchestration instance needs to run, the corresponding rows of the History table are loaded into memory using a range query within a single table partition. These *history events* are then replayed into the orchestrator function code to get it back into its previously checkpointed state. The use of execution history to rebuild state in this way is influenced by the [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing).
-
-> [!TIP]
-> Orchestration data stored in the History table includes output payloads from activity and sub-orchestrator functions. Payloads from external events are also stored in the History table. Because the full history is loaded into memory every time an orchestrator needs to execute, a large enough history can result in significant memory pressure on a given VM. The length and size of the orchestration history can be reduced by splitting large orchestrations into multiple sub-orchestrations or by reducing the size of outputs returned by the activity and sub-orchestrator functions it calls. Alternatively, you can reduce memory usage by lowering per-VM [concurrency throttles](#concurrency-throttles) to limit how many orchestrations are loaded into memory concurrently.
-
-### Instances table
-
-The **Instances** table contains the statuses of all orchestration and entity instances within a task hub. As instances are created, new rows are added to this table. The partition key of this table is the orchestration instance ID or entity key and the row key is an empty string. There is one row per orchestration or entity instance.
-
-This table is used to satisfy [instance query requests from code](durable-functions-instance-management.md#query-instances) as well as [status query HTTP API](durable-functions-http-api.md#get-instance-status) calls. It is kept eventually consistent with the contents of the **History** table mentioned previously. The use of a separate Azure Storage table to efficiently satisfy instance query operations in this way is influenced by the [Command and Query Responsibility Segregation (CQRS) pattern](/azure/architecture/patterns/cqrs).
-
-> [!TIP]
-> The partitioning of the *Instances* table allows it to store millions of orchestration instances without any noticeable impact on runtime performance or scale. However, the number of instances can have a significant impact on [multi-instance query](durable-functions-instance-management.md#query-all-instances) performance. To control the amount of data stored in these tables, consider periodically [purging old instance data](durable-functions-instance-management.md#purge-instance-history).
-
-### Internal queue triggers
-
-Orchestrator, entity, and activity functions are all triggered by internal queues in the function app's task hub. Using queues in this way provides reliable "at-least-once" message delivery guarantees. There are two types of queues in Durable Functions: the **control queue** and the **work-item queue**.
-
-#### The work-item queue
-
-There is one work-item queue per task hub in Durable Functions. It's a basic queue and behaves similarly to any other `queueTrigger` queue in Azure Functions. This queue is used to trigger stateless *activity functions* by dequeueing a single message at a time. Each of these messages contains activity function inputs and additional metadata, such as which function to execute. When a Durable Functions application scales out to multiple VMs, these VMs all compete to acquire tasks from the work-item queue.
-
-#### Control queue(s)
-
-There are multiple *control queues* per task hub in Durable Functions. A *control queue* is more sophisticated than the simpler work-item queue. Control queues are used to trigger the stateful orchestrator and entity functions. Because the orchestrator and entity function instances are stateful singletons, it's important that each orchestration or entity is only processed by one worker at a time. To achieve this constraint, each orchestration instance or entity is assigned to a single control queue. These control queues are load balanced across workers to ensure that each queue is only processed by one worker at a time. More details on this behavior can be found in subsequent sections.
-
-Control queues contain a variety of orchestration lifecycle message types. Examples include [orchestrator control messages](durable-functions-instance-management.md), activity function *response* messages, and timer messages. As many as 32 messages will be dequeued from a control queue in a single poll. These messages contain payload data as well as metadata including which orchestration instance it is intended for. If multiple dequeued messages are intended for the same orchestration instance, they will be processed as a batch.
+The following diagram illustrates this concept:
-Control queue messages are constantly polled using a background thread. The batch size of each queue poll is controlled by the `controlQueueBatchSize` setting in host.json and has a default of 32 (the maximum value supported by Azure Queues). The maximum number of prefetched control-queue messages that are buffered in memory is controlled by the `controlQueueBufferThreshold` setting in host.json. The default value for `controlQueueBufferThreshold` varies depending on a variety of factors, including the type of hosting plan. For more information on these settings, see the [host.json schema](../functions-host-json.md#durabletask) documentation.
+![Worker scaling diagram](./media/durable-functions-perf-and-scale/worker-scaling.png)
-> [!TIP]
-> Increasing the value for `controlQueueBufferThreshold` allows a single orchestration or entity to process events faster. However, increasing this value can also result in higher memory usage. The higher memory usage is partly due to pulling more messages off the queue and partly due to fetching more orchestration histories into memory. Reducing the value for `controlQueueBufferThreshold` can therefore be an effective way to reduce memory usage.
-
-#### Queue polling
-
-The durable task extension implements a random exponential back-off algorithm to reduce the effect of idle-queue polling on storage transaction costs. When a message is found, the runtime immediately checks for another message. When no message is found, it waits for a period of time before trying again. After subsequent failed attempts to get a queue message, the wait time continues to increase until it reaches the maximum wait time, which defaults to 30 seconds.
+### Automatic scaling
-The maximum polling delay is configurable via the `maxQueuePollingInterval` property in the [host.json file](../functions-host-json.md#durabletask). Setting this property to a higher value could result in higher message processing latencies. Higher latencies would be expected only after periods of inactivity. Setting this property to a lower value could result in [higher storage costs](durable-functions-billing.md#azure-storage-transactions) due to increased storage transactions.
+As with all Azure Functions running in the Consumption and Elastic Premium plans, Durable Functions supports auto-scale via the [Azure Functions scale controller](../event-driven-scaling.md#runtime-scaling). The Scale Controller monitors how long messages and tasks have to wait before they are processed. Based on these latencies it can decide whether to add or remove workers.
> [!NOTE]
-> When running in the Azure Functions Consumption and Premium plans, the [Azure Functions Scale Controller](../event-driven-scaling.md) will poll each control and work-item queue once every 10 seconds. This additional polling is necessary to determine when to activate function app instances and to make scale decisions. At the time of writing, this 10 second interval is constant and cannot be configured.
+> Starting with Durable Functions 2.0, function apps can be configured to run within VNET-protected service endpoints in the Elastic Premium plan. In this configuration, the Durable Functions triggers initiate scale requests instead of the Scale Controller. For more information, see [Runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers).
-#### Orchestration start delays
-Orchestrations instances are started by putting an `ExecutionStarted` message in one of the task hub's control queues. Under certain conditions, you may observe multi-second delays between when an orchestration is scheduled to run and when it actually starts running. During this time interval, the orchestration instance remains in the `Pending` state. There are two potential causes of this delay:
+On a premium plan, automatic scaling can help to keep the number of workers (and therefore the operating cost) roughly proportional to the load that the application is experiencing.
-1. **Backlogged control queues**: If the control queue for this instance contains a large number of messages, it may take time before the `ExecutionStarted` message is received and processed by the runtime. Message backlogs can happen when orchestrations are processing lots of events concurrently. Events that go into the control queue include orchestration start events, activity completions, durable timers, termination, and external events. If this delay happens under normal circumstances, consider creating a new task hub with a larger number of partitions. Configuring more partitions will cause the runtime to create more control queues for load distribution. Each partition corresponds to 1:1 with a control queue, with a maximum of 16 partitions.
+### CPU usage
-2. **Back off polling delays**: Another common cause of orchestration delays is the [previously described back-off polling behavior for control queues](#queue-polling). However, this delay is only expected when an app is scaled out to two or more instances. If there is only one app instance or if the app instance that starts the orchestration is also the same instance that is polling the target control queue, then there will not be a queue polling delay. Back off polling delays can be reduced by updating the **host.json** settings, as described previously.
+**Orchestrator functions** are executed on a single thread to ensure that execution can be deterministic across many replays. Because of this single-threaded execution, it's important that orchestrator function threads do not perform CPU-intensive tasks, do I/O, or block for any reason. Any work that may require I/O, blocking, or multiple threads should be moved into activity functions.
-### Storage account selection
+**Activity functions** have all the same behaviors as regular queue-triggered functions. They can safely do I/O, execute CPU intensive operations, and use multiple threads. Because activity triggers are stateless, they can freely scale out to an unbounded number of VMs.
-The queues, tables, and blobs used by Durable Functions are created in a configured Azure Storage account. The account to use can be specified using the `durableTask/storageProvider/connectionStringName` setting (or `durableTask/azureStorageConnectionStringName` setting in Durable Functions 1.x) in the **host.json** file.
+**Entity functions** are also executed on a single thread and operations are processed one-at-a-time. However, entity functions do not have any restrictions on the type of code that can be executed.
-#### Durable Functions 2.x
+### Function timeouts
-```json
-{
- "extensions": {
- "durableTask": {
- "storageProvider": {
- "connectionStringName": "MyStorageAccountAppSetting"
- }
- }
- }
-}
-```
+Activity, orchestrator, and entity functions are subject to the same [function timeouts](../functions-scale.md#timeout) as all Azure Functions. As a general rule, Durable Functions treats function timeouts the same way as unhandled exceptions thrown by the application code.
-#### Durable Functions 1.x
+For example, if an activity times out, the function execution is recorded as a failure, and the orchestrator is notified and handles the timeout just like any other exception: retries take place if specified by the call, or an exception handler may be executed.
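+
+For reference, the timeout itself is controlled by the general Azure Functions `functionTimeout` setting at the root of **host.json**, not by a Durable Functions-specific setting. A minimal sketch with an illustrative 10-minute limit:
+
+```json
+{
+  "functionTimeout": "00:10:00"
+}
+```
+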
-```json
-{
- "extensions": {
- "durableTask": {
- "azureStorageConnectionStringName": "MyStorageAccountAppSetting"
- }
- }
-}
-```
-
-If not specified, the default `AzureWebJobsStorage` storage account is used. For performance-sensitive workloads, however, configuring a non-default storage account is recommended. Durable Functions uses Azure Storage heavily, and using a dedicated storage account isolates Durable Functions storage usage from the internal usage by the Azure Functions host.
-
-> [!NOTE]
-> Standard general purpose Azure Storage accounts are required when using the Azure Storage provider. All other storage account types are not supported. We highly recommend using legacy v1 general purpose storage accounts for Durable Functions. The newer v2 storage accounts can be significantly more expensive for Durable Functions workloads. For more information on Azure Storage account types, see the [Storage account overview](../../storage/common/storage-account-overview.md) documentation.
+### Entity operation batching
-### Orchestrator scale-out
+To improve performance and reduce cost, a single work item may execute an entire batch of entity operations. On consumption plans, each batch is then billed as a single function execution.
-While activity functions can be scaled out infinitely by adding more VMs elastically, individual orchestrator instances and entities are constrained to inhabit a single partition and the maximum number of partitions is bounded by the `partitionCount` setting in your `host.json`.
+By default, the maximum batch size is 50 for consumption plans and 5000 for all other plans. The maximum batch size can also be configured in the [host.json](durable-functions-bindings.md#host-json) file. If the maximum batch size is 1, batching is effectively disabled.
> [!NOTE]
-> Generally speaking, orchestrator functions are intended to be lightweight and should not require large amounts of computing power. It is therefore not necessary to create a large number of control-queue partitions to get great throughput for orchestrations. Most of the heavy work should be done in stateless activity functions, which can be scaled out infinitely.
+> If individual entity operations take a long time to execute, it may be beneficial to limit the maximum batch size to reduce the risk of [function timeouts](#function-timeouts), in particular on consumption plans.
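+
+As a sketch, the batch size limit is configured under the `durableTask` section of **host.json**. The property name `maxEntityOperationBatchSize` below is taken from the host.json reference linked above; verify it against the extension version you are using.
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "maxEntityOperationBatchSize": 50
+    }
+  }
+}
+```
+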
-The number of control queues is defined in the **host.json** file. The following example host.json snippet sets the `durableTask/storageProvider/partitionCount` property (or `durableTask/partitionCount` in Durable Functions 1.x) to `3`. Note that there are as many control queues as there are partitions.
+## Instance caching
-#### Durable Functions 2.x
+Generally, to process an [orchestration work item](durable-functions-task-hubs.md#work-items), a worker has to both
-```json
-{
- "extensions": {
- "durableTask": {
- "storageProvider": {
- "partitionCount": 3
- }
- }
- }
-}
-```
+1. Fetch the orchestration history.
+1. Replay the orchestrator code using the history.
-#### Durable Functions 1.x
+If the same worker is processing multiple work items for the same orchestration, the storage provider can optimize this process by caching the history in the worker's memory, which eliminates the first step. Moreover, it can cache the mid-execution orchestrator, which eliminates the second step, the history replay, as well.
-```json
-{
- "extensions": {
- "durableTask": {
- "partitionCount": 3
- }
- }
-}
-```
-
-A task hub can be configured with between 1 and 16 partitions. If not specified, the default partition count is **4**.
+The typical effect of caching is reduced I/O against the underlying storage service, and overall improved throughput and latency. On the other hand, caching increases the memory consumption on the worker.
-During low traffic scenarios, your application will be scaled-in, so partitions will be managed by a small number of workers. As an example, consider the diagram below.
+Instance caching is currently supported by the Azure Storage provider and by the Netherite storage provider. The table below provides a comparison.
-![Scale-in orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-1.png)
+|| Azure Storage provider | Netherite storage provider | MSSQL storage provider |
+|-|-|-|-|
+| **Instance caching** | Supported<br/>(.NET in-process worker only) | Supported | Not supported |
+| **Default setting** | Disabled | Enabled | n/a |
+| **Mechanism** | Extended Sessions | Instance Cache | n/a |
+| **Documentation** | See [Extended sessions](durable-functions-azure-storage-provider.md#extended-sessions) | See [Instance cache](https://microsoft.github.io/durabletask-netherite/#/caching) | n/a |
-In the previous diagram, we see that orchestrators 1 through 6 are load balanced across partitions. Similarly, partitions, like activities, are load balanced across workers. Partitions are load-balanced across workers regardless of the number of orchestrators that get started.
+> [!TIP]
+> Caching can reduce how often histories are replayed, but it cannot eliminate replay altogether. When developing orchestrators, we highly recommend testing them on a configuration that disables caching. This forced-replay behavior can be useful for detecting [orchestrator function code constraints](durable-functions-code-constraints.md) violations at development time.
-If you're running on the Azure Functions Consumption or Elastic Premium plans, or if you have load-based auto-scaling configured, more workers will get allocated as traffic increases and partitions will eventually load balance across all workers. If we continue to scale out, eventually each partition will eventually be managed by a single worker. Activities, on the other hand, will continue to be load-balanced across all workers. This is shown in the image below.
+### Comparison of caching mechanisms
-![First scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-2.png)
+The providers use different mechanisms to implement caching, and offer different parameters to configure the caching behavior.
-The upper-bound of the maximum number of concurrent _active_ orchestrations at *any given time* is equal to the number of workers allocated to your application _times_ your value for `maxConcurrentOrchestratorFunctions`. This upper-bound can be made more precise when your partitions are fully scaled-out across workers. When fully scaled-out, and since each worker will have only a single Functions host instance, the maximum number of _active_ concurrent orchestrator instances will be equal to your number of partitions _times_ your value for `maxConcurrentOrchestratorFunctions`.
+* **Extended sessions**, as used by the Azure Storage provider, keep mid-execution orchestrators in memory until they are idle for some time. The parameters to control this mechanism are `extendedSessionsEnabled` and `extendedSessionIdleTimeoutInSeconds`. For more details, see the section [Extended sessions](durable-functions-azure-storage-provider.md#extended-sessions) of the Azure Storage provider documentation.
> [!NOTE]
-> In this context, *active* means that an orchestration or entity is loaded into memory and processing *new events*. If the orchestration or entity is waiting for more events, such as the return value of an activity function, it gets unloaded from memory and is no longer considered *active*. Orchestrations and entities will be subsequently reloaded into memory only when there are new events to process. There's no practical maximum number of *total* orchestrations or entities that can run on a single VM, even if they're all in the "Running" state. The only limitation is the number of *concurrently active* orchestration or entity instances.
-
-The image below illustrates a fully scaled-out scenario where more orchestrators are added but some are inactive, shown in grey.
+> Extended sessions are supported only in the .NET in-process worker.
-![Second scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-3.png)
-
-During scale-out, control queue leases may be redistributed across Functions host instances to ensure that partitions are evenly distributed. These leases are internally implemented as Azure Blob storage leases and ensure that any individual orchestration instance or entity only runs on a single host instance at a time. If a task hub is configured with three partitions (and therefore three control queues), orchestration instances and entities can be load-balanced across all three lease-holding host instances. Additional VMs can be added to increase capacity for activity function execution.
-
-The following diagram illustrates how the Azure Functions host interacts with the storage entities in a scaled out environment.
-
-![Scale diagram](./media/durable-functions-perf-and-scale/scale-interactions-diagram.png)
-
-As shown in the previous diagram, all VMs compete for messages on the work-item queue. However, only three VMs can acquire messages from control queues, and each VM locks a single control queue.
-
-Orchestration instances and entities are distributed across all control queue instances. The distribution is done by hashing the instance ID of the orchestration or the entity name and key pair. Orchestration instance IDs by default are random GUIDs, ensuring that instances are equally distributed across all control queues.
-
-Generally speaking, orchestrator functions are intended to be lightweight and should not require large amounts of computing power. It is therefore not necessary to create a large number of control queue partitions to get great throughput for orchestrations. Most of the heavy work should be done in stateless activity functions, which can be scaled out infinitely.
-
-### Auto-scale
-
-As with all Azure Functions running in the Consumption and Elastic Premium plans, Durable Functions supports auto-scale via the [Azure Functions scale controller](../event-driven-scaling.md#runtime-scaling). The Scale Controller monitors the latency of all queues by periodically issuing _peek_ commands. Based on the latencies of the peeked messages, the Scale Controller will decide whether to add or remove VMs.
-
-If the Scale Controller determines that control queue message latencies are too high, it will add VM instances until either the message latency decreases to an acceptable level or it reaches the control queue partition count. Similarly, the Scale Controller will continually add VM instances if work-item queue latencies are high, regardless of the partition count.
+* The **Instance cache**, as used by the Netherite storage provider, keeps the state of all instances, including their histories, in the worker's memory, while keeping track of the total memory used. If the cache size exceeds the limit configured by `InstanceCacheSizeMB`, the least recently used instance data is evicted. If `CacheOrchestrationCursors` is set to true, the cache also stores the mid-execution orchestrators along with the instance state.
+ For more details, see the section [Instance cache](https://microsoft.github.io/durabletask-netherite/#/caching) of the Netherite storage provider documentation.
> [!NOTE]
-> Starting with Durable Functions 2.0, function apps can be configured to run within VNET-protected service endpoints in the Elastic Premium plan. In this configuration, the Durable Functions triggers initiate scale requests instead of the Scale Controller. For more information, see [Runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers).
-
-## Thread usage
-
-Orchestrator functions are executed on a single thread to ensure that execution can be deterministic across many replays. Because of this single-threaded execution, it's important that orchestrator function threads do not perform CPU-intensive tasks, do I/O, or block for any reason. Any work that may require I/O, blocking, or multiple threads should be moved into activity functions.
+> Instance caches work for all language SDKs, but the `CacheOrchestrationCursors` option is available only for the .NET in-process worker.
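+
+As a hypothetical sketch only: when the Netherite storage provider is selected in **host.json**, cache-related settings such as `InstanceCacheSizeMB` and `CacheOrchestrationCursors` are placed alongside the provider's other `storageProvider` settings. Verify the exact property names and placement against the Netherite documentation linked above.
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "storageProvider": {
+        "type": "Netherite",
+        "InstanceCacheSizeMB": 1024,
+        "CacheOrchestrationCursors": true
+      }
+    }
+  }
+}
+```
+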
-Activity functions have all the same behaviors as regular queue-triggered functions. They can safely do I/O, execute CPU intensive operations, and use multiple threads. Because activity triggers are stateless, they can freely scale out to an unbounded number of VMs.
+## Concurrency throttles
-Entity functions are also executed on a single thread and operations are processed one-at-a-time. However, entity functions do not have any restrictions on the type of code that can be executed.
+A single worker instance can execute multiple [work items](durable-functions-task-hubs.md#work-items) concurrently. This helps to increase parallelism and more efficiently utilize the workers.
+However, if a worker attempts to process too many work items at the same time, it may exhaust its available resources, such as CPU, network connections, or memory.
-## Function timeouts
+To ensure that an individual worker does not overcommit, it may be necessary to throttle the per-instance concurrency. By limiting the number of functions that run concurrently on each worker, we can avoid exceeding the resource limits on that worker.
-Activity, orchestrator, and entity functions are subject to the same [function timeouts](../functions-scale.md#timeout) as all Azure Functions. As a general rule, Durable Functions treats function timeouts the same way as unhandled exceptions thrown by the application code. For example, if an activity times out, the function execution is recorded as a failure, and the orchestrator is notified and handles the timeout just like any other exception: retries take place if specified by the call, or an exception handler may be executed.
+> [!NOTE]
+> The concurrency throttles only apply locally, to limit what is currently being processed **per worker**. Thus, these throttles do not limit the total throughput of the system.
-## Concurrency throttles
+> [!TIP]
+> In some cases, throttling the per-worker concurrency can actually *increase* the total throughput of the system. This can occur when each worker takes on less work, causing the scale controller to add more workers to keep up with the queues, which then increases the total throughput.
-Azure Functions supports executing multiple functions concurrently within a single app instance. This concurrent execution helps increase parallelism and minimizes the number of "cold starts" that a typical app will experience over time. However, high concurrency can exhaust per-VM system resources such network connections or available memory. Depending on the needs of the function app, it may be necessary to throttle the per-instance concurrency to avoid the possibility of running out of memory in high-load situations.
+### Configuration of throttles
-Activity, orchestrator, and entity function concurrency limits can be configured in the **host.json** file. The relevant settings are `durableTask/maxConcurrentActivityFunctions` for activity functions and `durableTask/maxConcurrentOrchestratorFunctions` for both orchestrator and entity functions. These settings control the maximum number of orchestrator, entity, or activity functions that can be loaded into memory concurrently.
+Activity, orchestrator, and entity function concurrency limits can be configured in the **host.json** file. The relevant settings are `durableTask/maxConcurrentActivityFunctions` for activity functions and `durableTask/maxConcurrentOrchestratorFunctions` for both orchestrator and entity functions. These settings control the maximum number of orchestrator, entity, or activity functions that are loaded into memory on a single worker.
> [!NOTE]
-> The concurrency throttles only apply locally, to limit what is currently being processed on one individual machine. Thus, these throttles do not limit the total throughput of the system. Quite to the contrary, they can actually support proper scale out, as they prevent individual machines from taking on too much work at once. If this leads to unprocessed work accumulating in the queues, the autoscaler adds more machines. The total throughput of the system thus scales out as needed.
+> Orchestrations and entities are only loaded into memory when they are actively processing events or operations, or if [instance caching](durable-functions-perf-and-scale.md#instance-caching) is enabled. After executing their logic and awaiting (i.e. hitting an `await` (C#) or `yield` (JavaScript, Python) statement in the orchestrator function code), they may be unloaded from memory. Orchestrations and entities that are unloaded from memory don't count towards the `maxConcurrentOrchestratorFunctions` throttle. Even if millions of orchestrations or entities are in the "Running" state, they only count towards the throttle limit when they are loaded into active memory. An orchestration that schedules an activity function similarly doesn't count towards the throttle if the orchestration is waiting for the activity to finish executing.
-> [!NOTE]
-> The `durableTask/maxConcurrentOrchestratorFunctions` limit applies only to the act of processing new events or operations. Orchestrations or entities that are idle waiting for events or operations do not count towards the limit.
-
-### Functions 2.0
+#### Functions 2.0
```json {
Activity, orchestrator, and entity function concurrency limits can be configured
} ```
-### Functions 1.x
+#### Functions 1.x
```json {
Activity, orchestrator, and entity function concurrency limits can be configured
} ```
-In the previous example, a maximum of 10 orchestrator or entity functions and 10 activity functions can run on a single VM concurrently. If not specified, the number of concurrent activity and orchestrator or entity function executions is capped at 10X the number of cores on the VM.
+### Language runtime considerations
+
+The language runtime you select may impose strict concurrency restrictions on your functions. For example, Durable Functions apps written in Python or PowerShell may only support running a single function at a time on a single VM. This can result in significant performance problems if not carefully accounted for. For example, if an orchestrator fans out to 10 activities but the language runtime restricts concurrency to just one function, then 9 of the 10 activity functions will be stuck waiting for a chance to run. Furthermore, these 9 stuck activities cannot be load balanced to any other workers because the Durable Functions runtime has already loaded them into memory. This becomes especially problematic if the activity functions are long-running.
-If the maximum number of activities or orchestrations/entities on a worker VM is reached, the Durable trigger will wait for any executing functions to finish or unload before starting up new function executions.
+If the language runtime you are using places a restriction on concurrency, you should update the Durable Functions concurrency settings to match the concurrency settings of your language runtime. This ensures that the Durable Functions runtime will not attempt to run more functions concurrently than is allowed by the language runtime, allowing any pending activities to be load balanced to other VMs. For example, if you have a Python app that restricts concurrency to 4 functions (perhaps it's only configured with 4 threads on a single language worker process or 1 thread on 4 language worker processes) then you should configure both `maxConcurrentOrchestratorFunctions` and `maxConcurrentActivityFunctions` to 4.
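+
+For that hypothetical Python app, the corresponding **host.json** sketch (Functions 2.0 schema) would look something like the following; adjust the values to match the actual concurrency of your language workers.
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "maxConcurrentOrchestratorFunctions": 4,
+      "maxConcurrentActivityFunctions": 4
+    }
+  }
+}
+```
+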
-> [!NOTE]
-> These settings are useful to help manage memory and CPU usage on a single VM. However, when scaled out across multiple VMs, each VM has its own set of limits. These settings can't be used to control concurrency at a global level.
+For more information and performance recommendations for Python, see [Improve throughput performance of Python apps in Azure Functions](../python-scale-performance-reference.md). The techniques mentioned in this Python developer reference documentation can have a substantial impact on Durable Functions performance and scalability.
-> [!NOTE]
-> Orchestrations and entities are only loaded into memory when they are actively processing events or operations. After executing their logic and awaiting (i.e. hitting an `await` (C#) or `yield` (JavaScript, Python) statement in the orchestrator function code), they are unloaded from memory. Orchestrations and entities that are unloaded from memory don't count towards the `maxConcurrentOrchestratorFunctions` throttle. Even if millions of orchestrations or entities are in the "Running" state, they only count towards the throttle limit when they are loaded into active memory. An orchestration that schedules an activity function similarly doesn't count towards the throttle if the orchestration is waiting for the activity to finish executing.
+## Partition count
-### Language runtime considerations
+Some of the storage providers use a *partitioning* mechanism and allow specifying a `partitionCount` parameter.
-The language runtime you select may impose strict concurrency restrictions or your functions. For example, Durable Function apps written in Python or PowerShell may only support running a single function at a time on a single VM. This can result in significant performance problems if not carefully accounted for. For example, if an orchestrator fans-out to 10 activities but the language runtime restricts concurrency to just one function, then 9 of the 10 activity functions will be stuck waiting for a chance to run. Furthermore, these 9 stuck activities will not be able to be load balanced to any other workers because the Durable Functions runtime will have already loaded them into memory. This becomes especially problematic if the activity functions are long-running.
+When using partitioning, workers do not directly compete for individual work items. Instead, the work items are first grouped into `partitionCount` partitions. These partitions are then assigned to workers. This partitioned approach to load distribution can help to reduce the total number of storage accesses required. Also, it can enable [instance caching](durable-functions-perf-and-scale.md#instance-caching) and improve locality because it creates *affinity*: all work items for the same instance are processed by the same worker.
-If the language runtime you are using places a restriction on concurrency, you should update the Durable Functions concurrency settings to match the concurrency settings of your language runtime. This ensures that the Durable Functions runtime will not attempt to run more functions concurrently than is allowed by the language runtime, allowing any pending activities to be load balanced to other VMs. For example, if you have a Python app that restricts concurrency to 4 functions (perhaps its only configured with 4 threads on a single language worker process or 1 thread on 4 language worker processes) then you should configure both `maxConcurrentOrchestratorFunctions` and `maxConcurrentActivityFunctions` to 4.
+> [!NOTE]
+> Partitioning limits scale out because at most `partitionCount` workers can process work items from a partitioned queue.
-For more information and performance recommendations for Python, see [Improve throughput performance of Python apps in Azure Functions](../python-scale-performance-reference.md). The techniques mentioned in this Python developer reference documentation can have a substantial impact on Durable Functions performance and scalability.
+The following table shows, for each storage provider, which queues are partitioned, and the allowable range and default values for the `partitionCount` parameter.
-## Extended sessions
+|| Azure Storage provider | Netherite storage provider | MSSQL storage provider |
+|-|-|-|-|
+| **Instance messages**| Partitioned | Partitioned | Not partitioned |
+| **Activity messages** | Not partitioned | Partitioned | Not partitioned |
+| **Default `partitionCount`** | 4 | 12 | n/a |
+| **Maximum `partitionCount`** | 16 | 32 | n/a |
+| **Documentation** | See [Orchestrator scale-out](durable-functions-azure-storage-provider.md#orchestrator-scale-out) | See [Partition count considerations](https://microsoft.github.io/durabletask-netherite/#/settings?id=partition-count-considerations) | n/a |
-Extended sessions is a setting that keeps orchestrations and entities in memory even after they finish processing messages. The typical effect of enabling extended sessions is reduced I/O against the underlying durable store and overall improved throughput.
+> [!WARNING]
+> The partition count cannot be changed after a task hub has been created. Therefore, it's advisable to set it to a value large enough to accommodate future scale-out requirements for the task hub instance.
-You can enable extended sessions by setting `durableTask/extendedSessionsEnabled` to `true` in the **host.json** file. The `durableTask/extendedSessionIdleTimeoutInSeconds` setting can be used to control how long an idle session will be held in memory:
+### Configuration of partition count
+
+The `partitionCount` parameter can be specified in the **host.json** file. The following example host.json snippet sets the `durableTask/storageProvider/partitionCount` property (or `durableTask/partitionCount` in Durable Functions 1.x) to `3`.
+
+#### Durable Functions 2.x
-**Functions 2.0**
```json { "extensions": { "durableTask": {
- "extendedSessionsEnabled": true,
- "extendedSessionIdleTimeoutInSeconds": 30
+ "storageProvider": {
+ "partitionCount": 3
+ }
} } } ```
-**Functions 1.0**
+#### Durable Functions 1.x
+ ```json {
- "durableTask": {
- "extendedSessionsEnabled": true,
- "extendedSessionIdleTimeoutInSeconds": 30
+ "extensions": {
+ "durableTask": {
+ "partitionCount": 3
+ }
} } ```
-There are two potential downsides of this setting to be aware of:
-
-1. There's an overall increase in function app memory usage because idle instances are not unloaded from memory as quickly.
-2. There can be an overall decrease in throughput if there are many concurrent, distinct, short-lived orchestrator or entity function executions.
-
-As an example, if `durableTask/extendedSessionIdleTimeoutInSeconds` is set to 30 seconds, then a short-lived orchestrator or entity function episode that executes in less than 1 second still occupies memory for 30 seconds. It also counts against the `durableTask/maxConcurrentOrchestratorFunctions` quota mentioned previously, potentially preventing other orchestrator or entity functions from running.
-
-The specific effects of extended sessions on orchestrator and entity functions are described in the next sections.
-
-> [!NOTE]
-> Extended sessions are currently only supported in .NET languages, like C# or F#. Setting `extendedSessionsEnabled` to `true` for other platforms can lead to runtime issues, such as silently failing to execute activity and orchestration-triggered functions.
-
-> [!NOTE]
-> Support for extended sessions may vary depend on the [Durable Functions storage provider you are using](durable-functions-storage-providers.md). See the storage provider documentation to learn whether it supports extended sessions.
-
-### Orchestrator function replay
-
-As mentioned previously, orchestrator functions are replayed using the contents of the **History** table. By default, the orchestrator function code is replayed every time a batch of messages are dequeued from a control queue. Even if you are using the fan-out, fan-in pattern and are awaiting for all tasks to complete (for example, using `Task.WhenAll()` in .NET, `context.df.Task.all()` in JavaScript, or `context.task_all()` in Python), there will be replays that occur as batches of task responses are processed over time. When extended sessions are enabled, orchestrator function instances are held in memory longer and new messages can be processed without a full history replay.
-
-The performance improvement of extended sessions is most often observed in the following situations:
-
-* When there are a limited number of orchestration instances running concurrently.
-* When orchestrations have large number of sequential actions (for example, hundreds of activity function calls) that complete quickly.
-* When orchestrations fan-out and fan-in a large number of actions that complete around the same time.
-* When orchestrator functions need to process large messages or do any CPU-intensive data processing.
-
-In all other situations, there is typically no observable performance improvement for orchestrator functions.
-
-> [!NOTE]
-> These settings should only be used after an orchestrator function has been fully developed and tested. The default aggressive replay behavior can useful for detecting [orchestrator function code constraints](durable-functions-code-constraints.md) violations at development time, and is therefore disabled by default.
-
-## Entity operation batching
-
-To improve performance and cost, entity operations are executed in batches. Each batch is billed as a single function execution.
-
-By default, the maximum batch size is 50 (for consumption plans) and 5000 (for all other plans). The maximum batch size can also be configured in the [host.json](durable-functions-bindings.md#host-json) file. If the maximum batch size is 1, batching is effectively disabled.
-
-> [!NOTE]
-> If individual entity operations take a long time to execute, it may be beneficial to limit the maximum batch size to reduce the risk of [function timeouts](#function-timeouts), in particular on consumption plans.
- ## Performance targets
-When planning to use Durable Functions for a production application, it is important to consider the performance requirements early in the planning process. This section covers some basic usage scenarios and the expected maximum throughput numbers.
+When planning to use Durable Functions for a production application, it is important to consider the performance requirements early in the planning process. Some basic usage scenarios include:
* **Sequential activity execution**: This scenario describes an orchestrator function that runs a series of activity functions one after the other. It most closely resembles the [Function Chaining](durable-functions-sequence.md) sample. * **Parallel activity execution**: This scenario describes an orchestrator function that executes many activity functions in parallel using the [Fan-out, Fan-in](durable-functions-cloud-backup.md) pattern.
When planning to use Durable Functions for a production application, it is impor
* **External event processing**: This scenario represents a single orchestrator function instance that waits on [external events](durable-functions-external-events.md), one at a time. * **Entity operation processing**: This scenario tests how quickly a _single_ [Counter entity](durable-functions-entities.md) can process a constant stream of operations.
-> [!TIP]
-> Unlike fan-out, fan-in operations are limited to a single VM. If your application uses the fan-out, fan-in pattern and you are concerned about fan-in performance, consider sub-dividing the activity function fan-out across multiple [sub-orchestrations](durable-functions-sub-orchestrations.md).
+We provide throughput numbers for these scenarios in the respective documentation for the storage providers. In particular:
-### Azure Storage performance targets
-
-The following table shows the expected *maximum* throughput numbers for the previously described scenarios when using the default [Azure Storage provider for Durable Functions](durable-functions-storage-providers.md#azure-storage). "Instance" refers to a single instance of an orchestrator function running on a single small ([A1](../../virtual-machines/sizes-previous-gen.md)) VM in Azure App Service. In all cases, it is assumed that [extended sessions](#orchestrator-function-replay) are enabled. Actual results may vary depending on the CPU or I/O work performed by the function code.
-
-| Scenario | Maximum throughput |
-|-|-|
-| Sequential activity execution | 5 activities per second, per instance |
-| Parallel activity execution (fan-out) | 100 activities per second, per instance |
-| Parallel response processing (fan-in) | 150 responses per second, per instance |
-| External event processing | 50 events per second, per instance |
-| Entity operation processing | 64 operations per second |
-
-If you are not seeing the throughput numbers you expect and your CPU and memory usage appears healthy, check to see whether the cause is related to [the health of your storage account](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md#troubleshooting-guidance). The Durable Functions extension can put significant load on an Azure Storage account and sufficiently high loads may result in storage account throttling.
+* for the Azure Storage provider, see [Performance Targets](durable-functions-azure-storage-provider.md#performance-targets).
+* for the Netherite storage provider, see [Basic Scenarios](https://microsoft.github.io/durabletask-netherite/#/scenarios).
+* for the MSSQL storage provider, see [Orchestration Throughput Benchmarks](https://microsoft.github.io/durabletask-mssql/#/scaling?id=orchestration-throughput-benchmarks).
> [!TIP]
-> In some cases you can significantly increase the throughput of external events, activity fan-in, and entity operations by increasing the value of the `controlQueueBufferThreshold` setting in **host.json**. Increasing this value beyond its default causes the Durable Task Framework storage provider to use more memory to prefetch these events more aggressively, reducing delays associated with dequeueing messages from the Azure Storage control queues. For more information, see the [host.json](durable-functions-bindings.md#host-json) reference documentation.
-
-### High throughput processing
-
-The architecture of the Azure Storage backend puts certain limitations on the maximum theoretical performance and scalability of Durable Functions. If your testing shows that Durable Functions on Azure Storage won't meet your throughput requirements, you should consider instead using the [Netherite storage provider for Durable Functions](durable-functions-storage-providers.md#netherite).
-
-The Netherite storage backend was designed and developed by [Microsoft Research](https://www.microsoft.com/research). It uses [Azure Event Hubs](../../event-hubs/event-hubs-about.md) and the [FASTER](https://www.microsoft.com/research/project/faster/) database technology on top of [Azure Page Blobs](../../storage/blobs/storage-blob-pageblob-overview.md). The design of Netherite enables significantly higher-throughput processing of orchestrations and entities compared to other providers. In some benchmark scenarios, throughput was shown to increase by more than an order of magnitude when compared to the default Azure Storage provider.
-
-For more information on the supported storage providers for Durable Functions and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+> Unlike fan-out, fan-in operations are limited to a single VM. If your application uses the fan-out, fan-in pattern and you are concerned about fan-in performance, consider sub-dividing the activity function fan-out across multiple [sub-orchestrations](durable-functions-sub-orchestrations.md).
## Next steps > [!div class="nextstepaction"]
-> [Learn about disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md)
+> [Learn about the Azure Storage provider](durable-functions-azure-storage-provider.md)
azure-functions Durable Functions Serialization And Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-serialization-and-persistence.md
Title: Data persistence and serialization in Durable Functions - Azure
description: Learn how the Durable Functions extension for Azure Functions persists data Previously updated : 05/26/2022 Last updated : 07/18/2022 ms.devlang: csharp, java, javascript, python #Customer intent: As a developer, I want to understand what data is persisted to durable storage, how that data is serialized, and how I can customize it when it doesn't work the way my app needs it to.
ms.devlang: csharp, java, javascript, python
# Data persistence and serialization in Durable Functions (Azure Functions)
-Durable Functions automatically persists function parameters, return values, and other state to a durable backend in order to provide reliable execution. However, the amount and frequency of data persisted to durable storage can impact application performance and storage transaction costs. Depending on the type of data your application stores, data retention and privacy policies may also need to be considered.
+The Durable Functions runtime automatically persists function parameters, return values, and other state to the [task hub](durable-functions-task-hubs.md) in order to provide reliable execution. However, the amount and frequency of data persisted to durable storage can impact application performance and storage transaction costs. Depending on the type of data your application stores, data retention and privacy policies may also need to be considered.
-## Azure Storage
+## Task Hub Contents
-By default, Durable Functions persists data to queues, tables, and blobs in an [Azure Storage](https://azure.microsoft.com/services/storage/) account that you specify.
+Task hubs store the current state of instances and any pending messages:
-### Queues
+* *Instance states* store the current status and history of an instance. For orchestration instances, this includes the runtime state, the orchestration history, inputs, outputs, and custom status. For entity instances, it includes the entity state.
+* *Messages* store function inputs or outputs, event payloads, and metadata that Durable Functions uses for internal purposes, like routing and end-to-end correlation.
-Durable Functions uses Azure Storage queues to reliably schedule all function executions. These queue messages contain function inputs or outputs, depending on whether the message is being used to schedule an execution or return a value back to a calling function. These queue messages also include additional metadata that Durable Functions uses for internal purposes, like routing and end-to-end correlation. After a function has finished executing in response to a received message, that message is deleted and the result of the execution may also be persisted to either Azure Storage Tables or Azure Storage Blobs.
+Messages are deleted after being processed, but instance states persist unless they're explicitly deleted by the application or an operator. In particular, an orchestration history remains in storage even after the orchestration completes.
-Within a single [task hub](durable-functions-task-hubs.md), Durable Functions creates and adds messages to a *work-item* queue named `<taskhub>-workitem` for scheduling activity functions and one or more *control queues* named `<taskhub>-control-##` to schedule or resume orchestrator and entity functions. The number of control queues is equal to the number of partitions configured for your application. For more information about queues and partitions, see the [Performance and Scalability documentation](durable-functions-perf-and-scale.md).
+For an example of how states and messages represent the progress of an orchestration, see the [task hub execution example](durable-functions-task-hubs.md#execution-example).
-### Tables
-
-Once orchestrations process messages successfully, records of their resulting actions are persisted to the *History* table named `<taskhub>History`. Orchestration inputs, outputs, and custom status data is also persisted to the *Instances* table named `<taskhub>Instances`.
-
-### Blobs
-
-In most cases, Durable Functions doesn't use Azure Storage Blobs to persist data. However, queues and tables have [size limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-queue-storage-limits) that can prevent Durable Functions from persisting all of the required data into a storage row or queue message. For example, when a piece of data that needs to be persisted to a queue is greater than 45 KB when serialized, Durable Functions will compress the data and store it in a blob instead. When persisting data to blob storage in this way, Durable Function stores a reference to that blob in the table row or queue message. When Durable Functions needs to retrieve the data it will automatically fetch it from the blob. These blobs are stored in the blob container `<taskhub>-largemessages`.
-
-> [!NOTE]
-> The extra compression and blob operation steps for large messages can be expensive in terms of CPU and I/O latency costs. Additionally, Durable Functions needs to load persisted data in memory, and may do so for many different function executions at the same time. As a result, persisting large data payloads can cause high memory usage as well. To minimize memory overhead, consider persisting large data payloads manually (for example, in blob storage) and instead pass around references to this data. This way your code can load the data only when needed to avoid redundant loads during [orchestrator function replays](durable-functions-orchestrations.md#reliability). However, storing payloads to disk is *not* recommended since on-disk state is not guaranteed to be available since functions may execute on different VMs throughout their lifetimes.
+Where and how states and messages are represented in storage [depends on the storage provider](durable-functions-task-hubs.md#representation-in-storage). By default, Durable Functions uses the [Azure Storage provider](durable-functions-azure-storage-provider.md) which persists data to queues, tables, and blobs in an [Azure Storage](https://azure.microsoft.com/services/storage/) account that you specify.
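The storage account used by the default Azure Storage provider is the one referenced by your Durable Functions connection setting (the `AzureWebJobsStorage` app setting unless configured otherwise). As a minimal sketch, the following host.json snippet points the extension at a different connection setting; the name `MyStorageAccountConnection` is only a placeholder for this example.

```json
{
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "connectionStringName": "MyStorageAccountConnection"
      }
    }
  }
}
```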
### Types of data that is serialized and persisted The following is a list of the different types of data that will be serialized and persisted when using features of Durable Functions:
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md
Title: Durable Functions storage providers - Azure
description: Learn about the different storage providers for Durable Functions and how they compare Previously updated : 05/05/2021 Last updated : 07/18/2022 #Customer intent: As a developer, I want to understand what storage providers are available Durable Functions and which one I should choose. # Durable Functions storage providers
-Durable Functions automatically persists function parameters, return values, and other state to durable storage to guarantee reliable execution. The default configuration for Durable Functions stores this runtime state in an Azure Storage (classic) account. However, it's possible to configure Durable Functions v2.0 and above to use an alternate durable storage provider.
- Durable Functions is a set of Azure Functions triggers and bindings that are internally powered by the [Durable Task Framework](https://github.com/Azure/durabletask) (DTFx). DTFx supports various backend storage providers, including the Azure Storage provider used by Durable Functions. Starting in Durable Functions **v2.5.0**, users can configure their function apps to use DTFx storage providers other than the Azure Storage provider. > [!NOTE]
-> The choice to use storage providers other than Azure Storage should be made carefully. Most function apps running in Azure should use the default Azure Storage provider for Durable Functions. However, there are important cost, scalability, and data management tradeoffs that should be considered when deciding whether to use an alternate storage provider. This article describes many of these tradeoffs in detail.
->
-> Also note that it's not currently possible to migrate data from one storage provider to another. If you want to use a new storage provider, you should create a new app configured with the new storage provider.
+> For many function apps, the default Azure Storage provider for Durable Functions is likely to suffice, and is the easiest to use since it requires no extra configuration. However, there are cost, scalability, and data management tradeoffs that may favor the use of an alternate storage provider.
+
+Two alternate storage providers were developed for use with Durable Functions and the Durable Task Framework, namely the _Netherite_ storage provider and the _Microsoft SQL Server (MSSQL)_ storage provider. This article describes all three supported providers, compares them against each other, and provides basic information about how to get started using them.
-Two alternate DTFx storage providers were developed for use with Durable Functions, the _Netherite_ storage provider and the _Microsoft SQL Server (MSSQL)_ storage provider. This article describes all three supported providers, compares them against each other, and provides basic information about how to get started using them.
+> [!NOTE]
+> It's not currently possible to migrate data from one storage provider to another. If you want to use a new storage provider, you should create a new app configured with the new storage provider.
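For orientation, a storage provider is selected through the `storageProvider` section of host.json; when no `type` is specified, the default Azure Storage provider is used. The following minimal sketch shows the general shape of such a configuration for the MSSQL provider; the connection setting name `SQLDB_Connection` is a placeholder, and the provider-specific sections below describe the settings each provider actually requires.

```json
{
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "mssql",
        "connectionStringName": "SQLDB_Connection"
      }
    }
  }
}
```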
## Azure Storage
Additional properties may be set to customize the connection. See [Common proper
To use the Netherite storage provider, you must first add a reference to the [Microsoft.Azure.DurableTask.Netherite.AzureFunctions](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions) NuGet package in your **csproj** file (.NET apps) or your **extensions.proj** file (JavaScript, Python, and PowerShell apps).
-> [!NOTE]
-> The Netherite storage provider is not yet supported in apps that use [extension bundles](../functions-bindings-register.md#extension-bundles).
- The following host.json example shows the minimum configuration required to enable the Netherite storage provider. ```json
There are many significant tradeoffs between the various supported storage provi
|- |- |- |- | | Official support status | ✅ Generally available (GA) | ⚠ Public preview | ⚠ Public preview | | External dependencies | Azure Storage account (general purpose v1) | Azure Event Hubs<br/>Azure Storage account (general purpose) | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) or Azure SQL Database |
-| Local development and emulation options | [Azurite v3.12+](../../storage/common/storage-use-azurite.md) (cross platform)<br/>[Azure Storage Emulator](../../storage/common/storage-use-emulator.md) (Windows only) | In-memory emulation ([more information](https://microsoft.github.io/durabletask-netherite/#/emulation)) | SQL Server Developer Edition (supports [Windows](/sql/database-engine/install-windows/install-sql-server), [Linux](/sql/linux/sql-server-linux-setup), and [Docker containers](/sql/linux/sql-server-linux-docker-container-deployment)) |
+| Local development and emulation options | [Azurite v3.12+](../../storage/common/storage-use-azurite.md) (cross platform)<br/>[Azure Storage Emulator](../../storage/common/storage-use-emulator.md) (Windows only) | Supports in-memory emulation of task hubs ([more information](https://microsoft.github.io/durabletask-netherite/#/emulation)) | SQL Server Developer Edition (supports [Windows](/sql/database-engine/install-windows/install-sql-server), [Linux](/sql/linux/sql-server-linux-setup), and [Docker containers](/sql/linux/sql-server-linux-docker-container-deployment)) |
| Task hub configuration | Explicit | Explicit | Implicit by default ([more information](https://microsoft.github.io/durabletask-mssql/#/taskhubs)) | | Maximum throughput | Moderate | Very high | Moderate | | Maximum orchestration/entity scale-out (nodes) | 16 | 32 | N/A | | Maximum activity scale-out (nodes) | N/A | 32 | N/A | | Consumption plan support | ✅ Fully supported | ❌ Not supported | ❌ Not supported |
-| Elastic Premium plan support | ✅ Fully supported | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) |
+| Elastic Premium plan support | ✅ Fully supported | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) |
| [KEDA 2.0](https://keda.sh/) scaling support<br/>([more information](../functions-kubernetes-keda.md)) | ❌ Not supported | ❌ Not supported | ✅ Supported using the [MSSQL scaler](https://keda.sh/docs/scalers/mssql/) ([more information](https://microsoft.github.io/durabletask-mssql/#/scaling)) | | Support for [extension bundles](../functions-bindings-register.md#extension-bundles) (recommended for non-.NET apps) | ✅ Fully supported | ❌ Not supported | ❌ Not supported | | Price-performance configurable? | ❌ No | ✅ Yes (Event Hubs TUs and CUs) | ✅ Yes (SQL vCPUs) |
+| Managed Identity Support | ✅ Fully supported | ❌ Not supported | ⚠️ Requires runtime-driven scaling |
| Disconnected environment support | ❌ Azure connectivity required | ❌ Azure connectivity required | ✅ Fully supported | | Identity-based connections | ✅ Yes (preview) |❌ No | ❌ No |
azure-functions Durable Functions Task Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-task-hubs.md
# Task hubs in Durable Functions (Azure Functions)
-A *task hub* in [Durable Functions](durable-functions-overview.md) is a logical container for durable storage resources that are used for orchestrations and entities. Orchestrator, activity, and entity functions can only directly interact with each other when they belong to the same task hub.
+A *task hub* in [Durable Functions](durable-functions-overview.md) is a representation of the current state of the application in storage, including all the pending work. While a function app is running, the progress of orchestration, activity, and entity functions is continually stored in the task hub. This ensures that the application can resume processing where it left off if it's restarted after being temporarily stopped or interrupted for some reason. It also allows the function app to scale its compute workers dynamically.
+
+![Diagram showing concept of function app and task hub concept.](./media/durable-functions-task-hubs/taskhub.png)
+
+Conceptually, a task hub stores the following information:
+
+* The **instance states** of all orchestration and entity instances.
+* The messages to be processed, including
+ * any **activity messages** that represent activities waiting to be run.
+ * any **instance messages** that are waiting to be delivered to instances.
+
+The difference between activity and instance messages is that activity messages are stateless, and can thus be processed anywhere, while instance messages need to be delivered to a particular stateful instance (orchestration or entity), identified by its instance ID.
+
+Internally, each storage provider may use a different organization to represent instance states and messages. For example, messages are stored in Azure Storage Queues by the Azure Storage provider, but in relational tables by the MSSQL provider. These differences don't matter as far as the design of the application is concerned, but some of them may influence the performance characteristics. We discuss them in the section [Representation in storage](durable-functions-task-hubs.md#representation-in-storage) below.
+
+## Work items
+
+The activity messages and instance messages in the task hub represent the work that the function app needs to process. While the function app is running, it continuously fetches *work items* from the task hub. Each work item processes one or more messages. We distinguish two types of work items:
+
+* **Activity work items**: Run an activity function to process an activity message.
+* **Orchestrator work items**: Run an orchestrator or entity function to process one or more instance messages.
+
+Workers can process multiple work items at the same time, subject to the [configured per-worker concurrency limits](durable-functions-perf-and-scale.md#concurrency-throttles).
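These per-worker limits are set in host.json. The following minimal sketch assumes the Durable Functions 2.x schema; the values shown are arbitrary examples rather than recommendations, since appropriate limits depend on the language runtime and workload.

```json
{
  "extensions": {
    "durableTask": {
      "maxConcurrentActivityFunctions": 10,
      "maxConcurrentOrchestratorFunctions": 10
    }
  }
}
```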
+
+Once a worker completes a work item, it commits the effects back to the task hub. These effects vary by the type of function that was executed:
+
+* A completed activity function creates an instance message containing the result, addressed to the parent orchestrator instance.
+* A completed orchestrator function updates the orchestration state and history, and may create new messages.
+* A completed entity function updates the entity state, and may also create new instance messages.
+
+For orchestrations, each work item represents one **episode** of that orchestration's execution. An episode starts when there are new messages for the orchestrator to process. Such a message may indicate that the orchestration should start; or it may indicate that an activity, entity call, timer, or suborchestration has completed; or it can represent an external event. The message triggers a work item that allows the orchestrator to process the result and to continue with the next episode. That episode ends when the orchestrator either completes, or reaches a point where it must wait for new messages.
+
+### Execution example
+
+Consider a fan-out-fan-in orchestration that starts two activities in parallel, and waits for both of them to complete:
+
+# [C#](#tab/csharp)
+
+```csharp
+[FunctionName("Example")]
+public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
+{
+ Task t1 = context.CallActivityAsync<int>("MyActivity", 1);
+ Task t2 = context.CallActivityAsync<int>("MyActivity", 2);
+ await Task.WhenAll(t1, t2);
+}
+```
+
+# [JavaScript](#tab/javascript)
+
+```JavaScript
+const df = require("durable-functions");
+
+module.exports = df.orchestrator(function*(context){
+ const tasks = [];
+ tasks.push(context.df.callActivity("MyActivity", 1));
+ tasks.push(context.df.callActivity("MyActivity", 2));
+ yield context.df.Task.all(tasks);
+});
+```
+
+# [Python](#tab/python)
+
+```python
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+ tasks = []
+ tasks.append(context.call_activity("MyActivity", 1))
+ tasks.append(context.call_activity("MyActivity", 2))
+ yield context.task_all(tasks)
+```
+
+# [Java](#tab/java)
+
+```java
+@FunctionName("Example")
+public String exampleOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ Task<Void> t1 = ctx.callActivity("MyActivity", 1);
+ Task<Void> t2 = ctx.callActivity("MyActivity", 2);
+ ctx.allOf(List.of(t1, t2)).await();
+ });
+}
+```
+++
+After this orchestration is initiated by a client, it's processed by the function app as a sequence of work items. Each completed work item updates the task hub state when it commits. These are the steps:
+
+1. A client requests to start a new orchestration with instance-id "123". After the client completes this request, the task hub contains a placeholder for the orchestration state and an instance message:
+
+ ![workitems-illustration-step-1](./media/durable-functions-task-hubs/work-items-1.png)
+
+ The label `ExecutionStarted` is one of many [history event types](https://github.com/Azure/durabletask/tree/main/src/DurableTask.Core/History#readme) that identify the various types of messages and events participating in an orchestration's history.
+
+2. A worker executes an *orchestrator work item* to process the `ExecutionStarted` message. It calls the orchestrator function, which starts executing the orchestration code. This code schedules two activities and then stops executing while it waits for their results. After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-2](./media/durable-functions-task-hubs/work-items-2.png)
+
+ The runtime state is now `Running`, two new `TaskScheduled` messages were added, and the history now contains the five events `OrchestratorStarted`, `ExecutionStarted`, `TaskScheduled`, `TaskScheduled`, `OrchestratorCompleted`. These events represent the first episode of this orchestration's execution.
+
+3. A worker executes an *activity work item* to process one of the `TaskScheduled` messages. It calls the activity function with input "2". When the activity function completes, it creates a `TaskCompleted` message containing the result. After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-3](./media/durable-functions-task-hubs/work-items-3.png)
+
+4. A worker executes an *orchestrator work item* to process the `TaskCompleted` message. If the orchestration is still cached in memory, it can just resume execution. Otherwise, the worker first [replays the history to recover the current state of the orchestration](durable-functions-orchestrations.md#reliability). Then it continues the orchestration, delivering the result of the activity. After receiving this result, the orchestration is still waiting for the result of the other activity, so it once more stops executing. After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-4](./media/durable-functions-task-hubs/work-items-4.png)
+
+ The orchestration history now contains three more events `OrchestratorStarted`, `TaskCompleted`, `OrchestratorCompleted`. These events represent the second episode of this orchestration's execution.
+
+5. A worker executes an *activity work item* to process the remaining `TaskScheduled` message. It calls the activity function with input "1". After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-5](./media/durable-functions-task-hubs/work-items-5.png)
+
+6. A worker executes another *orchestrator work item* to process the `TaskCompleted` message. After receiving this second result, the orchestration completes. After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-6](./media/durable-functions-task-hubs/work-items-6.png)
+
+ The runtime state is now `Completed`, and the orchestration history now contains four more events `OrchestratorStarted`, `TaskCompleted`, `ExecutionCompleted`, `OrchestratorCompleted`. These events represent the third and final episode of this orchestration's execution.
+
+The final history for this orchestration's execution then contains the 12 events `OrchestratorStarted`, `ExecutionStarted`, `TaskScheduled`, `TaskScheduled`, `OrchestratorCompleted`, `OrchestratorStarted`, `TaskCompleted`, `OrchestratorCompleted`, `OrchestratorStarted`, `TaskCompleted`, `ExecutionCompleted`, `OrchestratorCompleted`.
> [!NOTE]
-> This document describes the details of task hubs in a way that is specific to the default [Azure Storage provider for Durable Functions](durable-functions-storage-providers.md#azure-storage). If you are using a non-default storage provider for your Durable Functions app, you can find detailed task hub documentation in the provider-specific documentation:
->
-> * [Task Hub information for the Netherite storage provider](https://microsoft.github.io/durabletask-netherite/#/storage)
-> * [Task Hub information for the Microsoft SQL (MSSQL) storage provider](https://microsoft.github.io/durabletask-mssql/#/taskhubs)
->
-> For more information on the various storage provider options and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+> The schedule shown isn't the only one: there are many slightly different possible schedules. For example, if the second activity completes earlier, both `TaskCompleted` instance messages may be processed by a single work item. In that case, the execution history is a bit shorter, because there are only two episodes, and it contains the following 10 events: `OrchestratorStarted`, `ExecutionStarted`, `TaskScheduled`, `TaskScheduled`, `OrchestratorCompleted`, `OrchestratorStarted`, `TaskCompleted`, `TaskCompleted`, `ExecutionCompleted`, `OrchestratorCompleted`.
+
+## Task hub management
+
+Next, let's take a closer look at how task hubs are created or deleted, how to use task hubs correctly when running multiple function apps, and how the content of task hubs can be inspected.
+
+### Creation and deletion
-If multiple function apps share a storage account, each function app *must* be configured with a separate task hub name. This requirement also applies to staging slots: each staging slot must be configured with a unique task hub name. A single storage account can contain multiple task hubs. This restriction generally applies to other storage providers as well.
+An empty task hub with all the required resources is automatically created in storage when a function app is started for the first time.
+
+If using the default Azure Storage provider, no extra configuration is required. Otherwise, follow the [instructions for configuring storage providers](durable-functions-storage-providers.md#configuring-alternate-storage-providers) to ensure that the storage provider can properly provision and access the storage resources required for the task hub.
> [!NOTE]
-> The exception to the task hub sharing rule is if you are configuring your app for regional disaster recovery. See the [disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md) article for more information.
+> The task hub is *not* automatically deleted when you stop or delete the function app. You must delete the task hub, its contents, or the containing storage account manually if you no longer want to keep that data.
+
+> [!TIP]
+> In a development scenario, you may need to restart from a clean state often. To do so quickly, you can just [change the configured task hub name](durable-functions-task-hubs.md#task-hub-names). This will force the creation of a new, empty task hub when you restart the application. Be aware that the old data is not deleted in this case.
+
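For example, the task hub name can be changed through the `hubName` setting in host.json (Durable Functions 2.x schema shown); `MyTaskHubV2` is just an illustrative name.

```json
{
  "extensions": {
    "durableTask": {
      "hubName": "MyTaskHubV2"
    }
  }
}
```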
+### Multiple function apps
+
+If multiple function apps share a storage account, each function app *must* be configured with a separate [task hub name](durable-functions-task-hubs.md#task-hub-names). This requirement also applies to staging slots: each staging slot must be configured with a unique task hub name. A single storage account can contain multiple task hubs. This restriction generally applies to other storage providers as well.
The following diagram illustrates one task hub per function app in shared and dedicated Azure Storage accounts.
-![Diagram showing shared and dedicated storage accounts.](./media/durable-functions-task-hubs/task-hubs-storage.png)
+![Diagram showing shared and dedicated storage accounts.](./media/durable-functions-task-hubs/multiple-apps.png)
+
+> [!NOTE]
+> The exception to the task hub sharing rule is if you are configuring your app for regional disaster recovery. See the [disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md) article for more information.
+
+### Content inspection
+
+There are several common ways to inspect the contents of a task hub:
+
+1. Within a function app, the client object provides methods to query the instance store. To learn more about what types of queries are supported, see the [Instance Management](durable-functions-instance-management.md) article.
+2. Similarly, the [HTTP API](durable-functions-http-features.md) offers REST requests to query the state of orchestrations and entities. See the [HTTP API Reference](durable-functions-http-api.md) for more details.
+3. The [Durable Functions Monitor](https://github.com/microsoft/DurableFunctionsMonitor) tool can inspect task hubs and offers various options for visual display.
+
+For some of the storage providers, it is also possible to inspect the task hub by going directly to the underlying storage:
+
+* If using the Azure Storage provider, the instance states are stored in the [Instances table](durable-functions-azure-storage-provider.md#instances-table) and the [History table](durable-functions-azure-storage-provider.md#history-table), which can be inspected using tools such as Azure Storage Explorer.
+* If using the MSSQL storage provider, SQL queries and tools can be used to inspect the task hub contents inside the database.
+
+## Representation in storage
+
+Each storage provider uses a different internal organization to represent task hubs in storage. Understanding this organization, while not required, can be helpful when troubleshooting a function app or when trying to meet performance, scalability, or cost targets. We thus briefly explain, for each storage provider, how the data is organized in storage. For more information on the various storage provider options and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+
+### Azure Storage provider
+
+The Azure Storage provider represents the task hub in storage using the following components:
+
+* Two Azure Tables store the instance states.
+* One Azure Queue stores the activity messages.
+* One or more Azure Queues store the instance messages. Each of these so-called *control queues* represents a [partition](durable-functions-perf-and-scale.md#partition-count) that is assigned a subset of all instance messages, based on the hash of the instance ID.
+* A few extra blob containers used for lease blobs and/or large messages.
+
+For example, a task hub named `xyz` with `PartitionCount = 4` contains the following queues and tables:
+
+![Diagram showing Azure Storage provider storage organization for 4 control queues.](./media/durable-functions-task-hubs/azure-storage.png)
+
+For more information about these components and how task hubs are represented by the Azure Storage provider, see the [Azure Storage provider](durable-functions-azure-storage-provider.md) documentation.
+
+### Netherite storage provider
+
+Netherite partitions all of the task hub state into a specified number of partitions.
+In storage, the following resources are used:
+
+* One Azure Storage blob container that contains all the blobs, grouped by partition.
+* One Azure Table that contains published metrics about the partitions.
+* An Azure Event Hubs namespace for delivering messages between partitions.
+
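For reference, a host.json configuration for a Netherite task hub like the one described next might look roughly like the following sketch. The property names (`StorageConnectionName`, `EventHubsConnectionName`) and values are assumptions based on the Netherite documentation linked at the end of this section; verify them there before use.

```json
{
  "extensions": {
    "durableTask": {
      "hubName": "mytaskhub",
      "storageProvider": {
        "type": "Netherite",
        "partitionCount": 32,
        "StorageConnectionName": "AzureWebJobsStorage",
        "EventHubsConnectionName": "EventHubsConnection"
      }
    }
  }
}
```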
+For example, a task hub named `mytaskhub` with `PartitionCount = 32` is represented in storage as follows:
+
+![Diagram showing Netherite storage organization for 32 partitions.](./media/durable-functions-task-hubs/netherite-storage.png)
+
+> [!NOTE]
+> All of the task hub state is stored inside the `x-storage` blob container. The `DurableTaskPartitions` table and the EventHubs namespace contain redundant data: if their contents are lost, they can be automatically recovered. Therefore it is not necessary to configure the Azure Event Hubs namespace to retain messages past the default expiration time.
+
+Netherite uses an event-sourcing mechanism, based on a log and checkpoints, to represent the current state of a partition. Both block blobs and page blobs are used. It is not possible to read this format from storage directly, so the function app has to be running when querying the instance store.
+
+For more information on task hubs for the Netherite storage provider, see [Task Hub information for the Netherite storage provider](https://microsoft.github.io/durabletask-netherite/#/storage).
+
+### MSSQL storage provider
+
+All task hub data is stored in a single relational database, using several tables:
+
+* The `dt.Instances` and `dt.History` tables store the instance states.
+* The `dt.NewEvents` table stores the instance messages.
+* The `dt.NewTasks` table stores the activity messages.
-## Azure Storage resources
-A task hub in Azure Storage consists of the following resources:
+![Diagram showing MSSQL storage organization.](./media/durable-functions-task-hubs/mssql-storage.png)
-* One or more control queues.
-* One work-item queue.
-* One history table.
-* One instances table.
-* One storage container containing one or more lease blobs.
-* A storage container containing large message payloads, if applicable.
+To enable multiple task hubs to coexist independently in the same database, each table includes a `TaskHub` column as part of its primary key. Unlike the other two providers, the MSSQL provider doesn't have a concept of partitions.
-All of these resources are created automatically in the configured Azure Storage account when orchestrator, entity, or activity functions run or are scheduled to run. The [Performance and Scale](durable-functions-perf-and-scale.md) article explains how these resources are used.
+For more information on task hubs for the MSSQL storage provider, see [Task Hub information for the Microsoft SQL (MSSQL) storage provider](https://microsoft.github.io/durabletask-mssql/#/taskhubs).
## Task hub names
-Task hubs in Azure Storage are identified by a name that conforms to these rules:
+Task hubs are identified by a name that must conform to these rules:
* Contains only alphanumeric characters * Starts with a letter
public HttpResponseMessage httpStart(
> [!NOTE] > Configuring task hub names in client binding metadata is only necessary when you use one function app to access orchestrations and entities in another function app. If the client functions are defined in the same function app as the orchestrations and entities, you should avoid specifying task hub names in the binding metadata. By default, all client bindings get their task hub metadata from the **host.json** settings.
-Task hub names in Azure Storage must start with a letter and consist of only letters and numbers. If not specified, a default task hub name will be used as shown in the following table:
+Task hub names must start with a letter and consist of only letters and numbers. If not specified, a default task hub name will be used as shown in the following table:
| Durable extension version | Default task hub name | | - | - |
-| 2.x | When deployed in Azure, the task hub name is derived from the name of the _function app_. When running outside of Azure, the default task hub name is `TestHubName`. |
+| 2.x | When deployed in Azure, the task hub name is derived from the name of the *function app*. When running outside of Azure, the default task hub name is `TestHubName`. |
| 1.x | The default task hub name for all environments is `DurableFunctionsHub`. | For more information about the differences between extension versions, see the [Durable Functions versions](durable-functions-versions.md) article.
azure-functions Durable Functions Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-versioning.md
When doing side-by-side deployments in Azure Functions or Azure App Service, we
## Next steps > [!div class="nextstepaction"]
-> [Learn how to handle performance and scale issues](durable-functions-perf-and-scale.md)
+> [Learn about using and choosing storage providers](durable-functions-storage-providers.md)
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Functions uses Blob storage to persist important information, such as [function
### In-region data residency
-When all customer data must remain within a single region, the storage account associated with the function app must be one with [in-region redundancy](../storage/common/storage-redundancy.md). An in-region redundant storage account also must be used with [Azure Durable Functions](./durable/durable-functions-perf-and-scale.md#storage-account-selection).
+When all customer data must remain within a single region, the storage account associated with the function app must be one with [in-region redundancy](../storage/common/storage-redundancy.md). An in-region redundant storage account also must be used with [Azure Durable Functions](./durable/durable-functions-azure-storage-provider.md#storage-account-selection).
Other platform-managed customer data is only stored within the region when hosting in an internally load-balanced App Service Environment (ASE). To learn more, see [ASE zone redundancy](../app-service/environment/zone-redundancy.md#in-region-data-residency).
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
description: Capture exceptions from ASP.NET apps along with request telemetry.
ms.devlang: csharp Previously updated : 05/19/2021 Last updated : 08/19/2022
namespace MVC2App.Controllers
{ if (filterContext != null && filterContext.HttpContext != null && filterContext.Exception != null) {
- //If customError is Off, then AI HTTPModule will report the exception
+ //The attribute should track exceptions only when CustomErrors setting is On
+ //if CustomErrors is Off, exceptions will be caught by AI HTTP Module
if (filterContext.HttpContext.IsCustomErrorEnabled) { //or reuse instance (recommended!). see note above var ai = new TelemetryClient();
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Azure Application Insights description: Application performance monitoring for Azure VM and Azure virtual machine scale sets. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/26/2019 Last updated : 08/19/2022 ms.devlang: csharp, java, javascript, python
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor | Microsoft Docs description: This article discusses server firewall exceptions that are required by Azure Monitor Previously updated : 01/27/2020 Last updated : 08/19/2022
[Azure Monitor](../overview.md) uses several IP addresses. Azure Monitor is made up of core platform metrics and logs in addition to Log Analytics and Application Insights. You might need to know IP addresses if the app or infrastructure that you're monitoring is hosted behind a firewall. > [!NOTE]
-> Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic with the exception of availability monitoring and webhooks, which require inbound firewall rules.
+> Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic with the exception of availability monitoring and webhook action groups, which also require inbound firewall rules.
You can use Azure [network service tags](../../virtual-network/service-tags-overview.md) to manage access if you're using Azure network security groups. If you're managing access for hybrid/on-premises resources, you can download the equivalent IP address lists as [JSON files](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files), which are updated each week. To cover all the exceptions in this article, use the service tags `ActionGroup`, `ApplicationInsightsAvailability`, and `AzureMonitor`.
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
To use the programmatic configuration and attach the Application Insights agent
And invoke the `attach()` method of the `com.microsoft.applicationinsights.attach.ApplicationInsights` class.
-> [!TIP]
-> ΓÜá JRE is not supported.
+> [!WARNING]
+>
+> JRE is not supported.
-> [!TIP]
-> ΓÜá Read-only file system is not supported.
+> [!WARNING]
+>
+> Read-only file system is not supported.
-> [!TIP]
-> ΓÜá The invocation must be requested at the beginning of the `main` method.
+> [!WARNING]
+>
+> The invocation must be requested at the beginning of the `main` method.
Example:
public class SpringBootApp {
If you want to use a JSON configuration: * The `applicationinsights.json` file has to be in the classpath
-* Or you can use an environmental variable or a system property, more in the _Configuration file path_ part on [this page](../app/java-standalone-config.md).
+* Or you can use an environment variable or a system property; for more information, see the _Configuration file path_ section on [this page](../app/java-standalone-config.md). Spring properties defined in a Spring _.properties_ file are not supported.
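For reference, a minimal `applicationinsights.json` placed on the classpath might look like the following sketch; the connection string is a placeholder, and any of the other settings described on the linked configuration page can be added alongside it.

```json
{
  "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000"
}
```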
> [!TIP]
azure-monitor Opencensus Python Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-dependency.md
Title: Dependency Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs description: Monitor dependency calls for your Python apps via OpenCensus Python. Previously updated : 10/15/2019 Last updated : 8/19/2022 ms.devlang: python + # Track dependencies with OpenCensus Python
OPENCENSUS = {
} ```
+You can find a Django sample application that uses dependencies in the Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
+ ## Dependencies with "mysql" integration Track your MYSQL dependencies with the OpenCensus `mysql` integration. This integration supports the [mysql-connector](https://pypi.org/project/mysql-connector-python/) library.
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
Title: Incoming Request Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs description: Monitor request calls for your Python apps via OpenCensus Python. Previously updated : 10/15/2019 Last updated : 8/19/2022 ms.devlang: python
First, instrument your Python application with latest [OpenCensus Python SDK](./
} ```
+You can find a Django sample application in the Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
## Tracking Flask applications 1. Download and install `opencensus-ext-flask` from [PyPI](https://pypi.org/project/opencensus-ext-flask/) and instrument your application with the `flask` middleware. Incoming requests sent to your `flask` application will be tracked.
First, instrument your Python application with latest [OpenCensus Python SDK](./
> [!NOTE] > To run Flask under uWSGI in a Docker environment, you must first add `lazy-apps = true` to the uWSGI configuration file (uwsgi.ini). For more information, see the [issue description](https://github.com/census-instrumentation/opencensus-python/issues/660).
+You can find a Flask sample application that tracks requests in the Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/flask_sample).
+ ## Tracking Pyramid applications 1. Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-pyramid/) and instrument your application with the `pyramid` tween. Incoming requests sent to your `pyramid` application will be tracked.
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs description: Provides instructions to wire up OpenCensus Python with Azure Monitor Previously updated : 10/12/2021 Last updated : 8/19/2022 ms.devlang: python
You may have noted that OpenCensus is converging into [OpenTelemetry](https://op
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.-- Python installation. This article uses [Python 3.7.0](https://www.python.org/downloads/release/python-370/), although other versions will likely work with minor changes. The Opencensus Python SDK only supports Python v2.7 and v3.4+.-- Create an Application Insights [resource](./create-new-resource.md). You'll be assigned your own instrumentation key (ikey) for your resource. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
Install the OpenCensus Azure Monitor exporters:
python -m pip install opencensus-ext-azure ```
-> [!NOTE]
-> The `python -m pip install opencensus-ext-azure` command assumes that you have a `PATH` environment variable set for your Python installation. If you haven't configured this variable, you need to give the full directory path to where your Python executable is located. The result is a command like this: `C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\python.exe -m pip install opencensus-ext-azure`.
-
-The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They're trace, metrics, and logs. For more information on these telemetry types, see [the data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
+The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They are `trace`, `metrics`, and `logs`. For more information on these telemetry types, see [the data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
## Telemetry type mappings
Here are the exporters that OpenCensus provides mapped to the types of telemetry
main() ```
-1. The exporter sends log data to Azure Monitor. You can find the data under `traces`.
+1. The exporter sends log data to Azure Monitor. You can find the data under `traces`.
> [!NOTE] > In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you'll see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md).
For more detailed information about how to use queries and logs, see [Logs in Az
* [Customization](https://github.com/census-instrumentation/opencensus-python/blob/master/README.rst#customization) * [Azure Monitor Exporters on GitHub](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure) * [OpenCensus Integrations](https://github.com/census-instrumentation/opencensus-python#extensions)
-* [Azure Monitor Sample Applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python)
+* [Azure Monitor Sample Applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor)
## Next steps
azure-monitor Status Monitor V2 Detailed Instructions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-detailed-instructions.md
Install-Module : The 'Install-Module' command was found in the module 'PowerShel
loaded. For more information, run 'Import-Module PowerShellGet'. Import-Module : File C:\Program Files\WindowsPowerShell\Modules\PackageManagement\1.3.1\PackageManagement.psm1 cannot
-be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at
-https:/go.microsoft.com/fwlink/?LinkID=135170.
+be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https://go.microsoft.com/fwlink/?LinkID=135170.
``` - ## Prerequisites for PowerShell Audit your instance of PowerShell by running the `$PSVersionTable` command.
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
Predictive autoscale adheres to the scaling boundaries you've set for your virtu
*Forecast only* allows you to view your predicted CPU forecast without triggering the scaling action based on the prediction. You can then compare the forecast with your actual workload patterns to build confidence in the prediction models before you enable the predictive autoscale feature.
-## Public preview support, availability, and limitations
+## Public preview support and limitations
>[!NOTE] > This release is a public preview. We're testing and gathering feedback for future releases. As such, we do not provide production-level support for this feature. Support is best effort. Send feature suggestions or feedback on predictive autoscale to predautoscalesupport@microsoft.com.
-During public preview, predictive autoscale is only available in the following regions:
--- West Central US-- West US2-- UK South-- UK West-- Southeast Asia-- East Asia-- Australia East-- Australia South east-- Canada Central-- Canada East- The following limitations apply during public preview. Predictive autoscale: - Only works for workloads exhibiting cyclical CPU usage patterns.
For more information on Azure Resource Manager templates, see [Resource Manager
This section answers common questions.
+### Why is CPU percentage over 100 percent on predictive charts?
+The predictive chart shows the cumulative load for all machines in the scale set. If you have 5 VMs in a scale set, the maximum cumulative load for all VMs will be 500%, that is, five times the 100% maximum CPU load of each VM.
+ ### What happens over time when you turn on predictive autoscale for a virtual machine scale set? Predictive autoscale uses the history of a running virtual machine scale set. If your scale set has been running for less than 7 days, you'll receive a message that the model is being trained. For more information, see the [no predictive data message](#errors-and-warnings). Predictions improve as time goes by and achieve maximum accuracy 15 days after the virtual machine scale set is created.
The modeling works best with workloads that exhibit periodicity. We recommend th
Standard autoscaling is a necessary fallback if the predictive model doesn't work well for your scenario. Standard autoscale will cover unexpected load spikes, which aren't part of your typical CPU load pattern. It also provides a fallback if an error occurs in retrieving the predictive data.
+### Which rule will take effect if both predictive and standard autoscale rules are set?
+Standard autoscale rules are used if there's an unexpected spike in the CPU load, or if an error occurs when retrieving predictive data.
+
+We use the threshold set in the standard autoscale rules to understand when you'd like to scale out and by how many instances. If you want your VM scale set to scale out when the CPU usage exceeds 70%, and actual or predicted data shows that CPU usage is or will be over 70%, then a scale-out will occur.
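To make that interaction concrete, here's a small conceptual sketch (not product code) of how the standard threshold applies to both observed and forecast CPU when predictive autoscale is enabled:

```python
def should_scale_out(actual_cpu_percent: float,
                     predicted_cpu_percent: float,
                     threshold_percent: float = 70.0) -> bool:
    """Conceptual illustration only: a scale-out is triggered when either the
    actual CPU usage or the forecast CPU usage crosses the standard threshold."""
    return max(actual_cpu_percent, predicted_cpu_percent) > threshold_percent

# The forecast says 75% even though the VMs currently sit at 40%, so a
# predictive scale-out happens ahead of the expected load.
print(should_scale_out(actual_cpu_percent=40.0, predicted_cpu_percent=75.0))  # True
```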
+ ## Errors and warnings This section addresses common errors and warnings.
Learn more about autoscale in the following articles:
- [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md) - [Best practices for Azure Monitor autoscale](./autoscale-best-practices.md) - [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)-- [Autoscale REST API](/rest/api/monitor/autoscalesettings)
+- [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md
# Monitor your Kubernetes cluster performance with Container insights
-With Container insights, you can use the performance charts and health status to monitor the workload of Kubernetes clusters hosted on Azure Kubernetes Service (AKS), Azure Stack, or other environment from two perspectives. You can monitor directly from the cluster, or you can view all clusters in a subscription from Azure Monitor. Viewing Azure Container Instances is also possible when monitoring a specific AKS cluster.
+With Container insights, you can use the performance charts and health status to monitor the workload of Kubernetes clusters hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment from two perspectives. You can monitor directly from the cluster. You can also view all clusters in a subscription from Azure Monitor. Viewing Azure Container Instances is also possible when you're monitoring a specific AKS cluster.
-This article helps you understand the two perspectives, and how Azure Monitor helps you quickly assess, investigate, and resolve detected issues.
+This article helps you understand the two perspectives and how Azure Monitor helps you quickly assess, investigate, and resolve detected issues.
For information about how to enable Container insights, see [Onboard Container insights](container-insights-onboard.md).
-Azure Monitor provides a multi-cluster view that shows the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019 deployed across resource groups in your subscriptions. It shows clusters discovered across all environments that aren't monitored by the solution. You can immediately understand cluster health, and from here, you can drill down to the node and controller performance page or navigate to see performance charts for the cluster. For AKS clusters that were discovered and identified as unmonitored, you can enable monitoring for them at any time.
+Azure Monitor provides a multi-cluster view that shows the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019 deployed across resource groups in your subscriptions. It shows clusters discovered across all environments that aren't monitored by the solution.
-The main differences in monitoring a Windows Server cluster with Container insights compared to a Linux cluster are described in [Feature of Container insights](container-insights-overview.md#features-of-container-insights) in the overview article.
+With this view, you can immediately understand cluster health. From here, you can drill down to the node and controller performance page or navigate to see performance charts for the cluster. For AKS clusters that were discovered and identified as unmonitored, you can enable monitoring for them at any time.
+The main differences in monitoring a Windows Server cluster with Container insights compared to a Linux cluster are described in [Features of Container insights](container-insights-overview.md#features-of-container-insights) in the overview article.
## Multi-cluster view from Azure Monitor To view the health status of all Kubernetes clusters deployed, select **Monitor** from the left pane in the Azure portal. Under the **Insights** section, select **Containers**.
-![Azure Monitor multi-cluster dashboard example](./media/container-insights-analyze/azmon-containers-multiview.png)
+![Screenshot that shows an Azure Monitor multi-cluster dashboard example.](./media/container-insights-analyze/azmon-containers-multiview.png)
You can scope the results presented in the grid to show clusters that are:
-* **Azure** - AKS and AKS-Engine clusters hosted in Azure Kubernetes Service
-* **Azure Stack (Preview)** - AKS-Engine clusters hosted on Azure Stack
-* **Non-Azure (Preview)** - Kubernetes clusters hosted on-premises
-* **All** - View all the Kubernetes clusters hosted in Azure, Azure Stack, and on-premises environments that are onboarded to Container insights
+* **Azure**: AKS and AKS Engine clusters hosted in Azure Kubernetes Service.
+* **Azure Stack (Preview)**: AKS Engine clusters hosted on Azure Stack.
+* **Non-Azure (Preview)**: Kubernetes clusters hosted on-premises.
+* **All**: View all the Kubernetes clusters hosted in Azure, Azure Stack, and on-premises environments that are onboarded to Container insights.
-To view clusters from a specific environment, select it from the **Environments** pill on the top-left corner of the page.
+To view clusters from a specific environment, select it from **Environment** in the upper-left corner.
-![Environment pill selector example](./media/container-insights-analyze/clusters-multiview-environment-pill.png)
+![Screenshot that shows an Environment selector example.](./media/container-insights-analyze/clusters-multiview-environment-pill.png)
On the **Monitored clusters** tab, you learn the following: -- How many clusters are in a critical or unhealthy state, versus how many are healthy or not reporting (referred to as an Unknown state).-- Whether all of the [Azure Kubernetes Engine (AKS-engine)](https://github.com/Azure/aks-engine) deployments are healthy.
+- How many clusters are in a critical or unhealthy state versus how many are healthy or not reporting (referred to as an Unknown state).
+- Whether all of the [Azure Kubernetes Engine (AKS Engine)](https://github.com/Azure/aks-engine) deployments are healthy.
- How many nodes and user and system pods are deployed per cluster. - How much disk space is available and if there's a capacity issue.
The health statuses included are:
* **Misconfigured**: Container insights wasn't configured correctly in the specified workspace. * **No data**: Data hasn't reported to the workspace for the last 30 minutes.
-Health state calculates overall cluster status as the *worst of* the three states with one exception. If any of the three states is Unknown, the overall cluster state shows **Unknown**.
+Health state calculates the overall cluster status as the *worst of* the three states with one exception. If any of the three states is Unknown, the overall cluster state shows **Unknown**.
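As a rough illustration of that *worst of* rollup, here's a conceptual sketch only; the three input states and the severity ordering are assumptions, not the exact product logic.

```python
# Conceptual sketch: overall cluster state from the three rollup states
# (user pods, system pods, nodes). The severity ordering is an assumption.
SEVERITY = {"Healthy": 0, "Warning": 1, "Critical": 2}

def cluster_health(user_pods: str, system_pods: str, nodes: str) -> str:
    states = [user_pods, system_pods, nodes]
    if any(state == "Unknown" for state in states):
        return "Unknown"
    # Otherwise report the worst (most severe) of the three states.
    return max(states, key=lambda state: SEVERITY.get(state, 0))

print(cluster_health("Healthy", "Warning", "Healthy"))   # Warning
print(cluster_health("Healthy", "Critical", "Unknown"))  # Unknown
```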
The following table provides a breakdown of the calculation that controls the health states for a monitored cluster on the multi-cluster view.
Access to Container insights is available directly from an AKS cluster by select
- Containers >[!NOTE]
->The experience described in the remainder of this article are also applicable for viewing performance and health status of your Kubernetes clusters hosted on Azure Stack or other environment when selected from the multi-cluster view.
+>The experiences described in the remainder of this article are also applicable for viewing performance and health status of your Kubernetes clusters hosted on Azure Stack or another environment when selected from the multi-cluster view.
The default page opens and displays four line performance charts that show key performance metrics of your cluster.
-![Example performance charts on the Cluster tab](./media/container-insights-analyze/containers-cluster-perfview.png)
+![Screenshot that shows example performance charts on the Cluster tab.](./media/container-insights-analyze/containers-cluster-perfview.png)
The performance charts display four performance metrics: - **Node CPU utilization&nbsp;%**: An aggregated perspective of CPU utilization for the entire cluster. To filter the results for the time range, select **Avg**, **Min**, **50th**, **90th**, **95th**, or **Max** in the percentiles selector above the chart. The filters can be used either individually or combined. - **Node memory utilization&nbsp;%**: An aggregated perspective of memory utilization for the entire cluster. To filter the results for the time range, select **Avg**, **Min**, **50th**, **90th**, **95th**, or **Max** in the percentiles selector above the chart. The filters can be used either individually or combined.-- **Node count**: A node count and status from Kubernetes. Statuses of the cluster nodes represented are Total, Ready, and Not Ready. They can be filtered individually or combined in the selector above the chart.-- **Active pod count**: A pod count and status from Kubernetes. Statuses of the pods represented are Total, Pending, Running, Unknown, Succeeded, or Failed. They can be filtered individually or combined in the selector above the chart.
+- **Node count**: A node count and status from Kubernetes. Statuses of the cluster nodes represented are **Total**, **Ready**, and **Not Ready**. They can be filtered individually or combined in the selector above the chart.
+- **Active pod count**: A pod count and status from Kubernetes. Statuses of the pods represented are **Total**, **Pending**, **Running**, **Unknown**, **Succeeded**, or **Failed**. They can be filtered individually or combined in the selector above the chart.
Use the Left and Right arrow keys to cycle through each data point on the chart. Use the Up and Down arrow keys to cycle through the percentile lines. Select the pin icon in the upper-right corner of any one of the charts to pin the selected chart to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to Container insights and loads the correct scope and view.
-Container insights also supports Azure Monitor [metrics explorer](../essentials/metrics-getting-started.md), where you can create your own plot charts, correlate and investigate trends, and pin to dashboards. From metrics explorer, you also can use the criteria that you set to visualize your metrics as the basis of a [metric-based alert rule](../alerts/alerts-metric.md).
+Container insights also supports Azure Monitor [Metrics Explorer](../essentials/metrics-getting-started.md), where you can create your own plot charts, correlate and investigate trends, and pin to dashboards. From Metrics Explorer, you also can use the criteria that you set to visualize your metrics as the basis of a [metric-based alert rule](../alerts/alerts-metric.md).
-## View container metrics in metrics explorer
+## View container metrics in Metrics Explorer
-In metrics explorer, you can view aggregated node and pod utilization metrics from Container insights. The following table summarizes the details to help you understand how to use the metric charts to visualize container metrics.
+In Metrics Explorer, you can view aggregated node and pod utilization metrics from Container insights. The following table summarizes the details to help you understand how to use the metric charts to visualize container metrics.
|Namespace | Metric | Description | |-|--|-| | insights.container/nodes | |
-| | cpuUsageMillicores | Aggregated measurement of CPU utilization across the cluster. It is a CPU core split into 1000 units (milli = 1000). Used to determine the usage of cores in a container where many applications might be using one core.|
+| | cpuUsageMillicores | Aggregated measurement of CPU utilization across the cluster. It's a CPU core split into 1,000 units (milli = 1000). Used to determine the usage of cores in a container where many applications might be using one core.|
| | cpuUsagePercentage | Aggregated average CPU utilization measured in percentage across the cluster.| | | memoryRssBytes | Container RSS memory used in bytes.| | | memoryRssPercentage | Container RSS memory used in percent.|
You can [split](../essentials/metrics-charts.md#apply-splitting) a metric to vie
When you switch to the **Nodes**, **Controllers**, and **Containers** tabs, a property pane automatically displays on the right side of the page. It shows the properties of the item selected, which includes the labels you defined to organize Kubernetes objects. When a Linux node is selected, the **Local Disk Capacity** section also shows the available disk space and the percentage used for each disk presented to the node. Select the **>>** link in the pane to view or hide the pane.
-As you expand the objects in the hierarchy, the properties pane updates based on the object selected. From the pane, you also can view Kubernetes container logs (stdout/stderror), events, and pod metrics by selecting the **View live data (preview)** link at the top of the pane. For more information about the configuration required to grant and control access to view this data, see [Setup the Live Data (preview)](container-insights-livedata-setup.md). While you review cluster resources, you can see this data from the container in real-time. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real time](container-insights-livedata-overview.md). To view Kubernetes log data stored in your workspace based on pre-defined log searches, select **View container logs** from the **View in analytics** drop-down list. For additional information about this topic, see [How to query logs from Container insights](container-insights-log-query.md).
+As you expand the objects in the hierarchy, the properties pane updates based on the object selected. From the pane, you also can view Kubernetes container logs (stdout/stderror), events, and pod metrics by selecting the **View live data (preview)** link at the top of the pane. For more information about the configuration required to grant and control access to view this data, see [Set up the Live Data (preview)](container-insights-livedata-setup.md).
-Use the **+ Add Filter** option at the top of the page to filter the results for the view by **Service**, **Node**, **Namespace**, or **Node Pool**. After you select the filter scope, select one of the values shown in the **Select value(s)** field. After the filter is configured, it's applied globally while viewing any perspective of the AKS cluster. The formula only supports the equal sign. You can add additional filters on top of the first one to further narrow your results. For example, if you specify a filter by **Node**, you can only select **Service** or **Namespace** for the second filter.
+While you review cluster resources, you can see this data from the container in real time. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real time](container-insights-livedata-overview.md).
+
+To view Kubernetes log data stored in your workspace based on predefined log searches, select **View container logs** from the **View in analytics** dropdown list. For more information, see [How to query logs from Container insights](container-insights-log-query.md).
+
+Use the **+ Add Filter** option at the top of the page to filter the results for the view by **Service**, **Node**, **Namespace**, or **Node Pool**. After you select the filter scope, select one of the values shown in the **Select value(s)** field. After the filter is configured, it's applied globally while viewing any perspective of the AKS cluster. The formula only supports the equal sign. You can add more filters on top of the first one to further narrow your results. For example, if you specify a filter by **Node**, you can only select **Service** or **Namespace** for the second filter.
Specifying a filter in one tab continues to be applied when you select another. It's deleted after you select the **x** symbol next to the specified filter. Switch to the **Nodes** tab and the row hierarchy follows the Kubernetes object model, which starts with a node in your cluster. Expand the node to view one or more pods running on the node. If more than one container is grouped to a pod, they're displayed as the last row in the hierarchy. You also can view how many non-pod-related workloads are running on the host if the host has processor or memory pressure.
-![Example of the Kubernetes Node hierarchy in the performance view](./media/container-insights-analyze/containers-nodes-view.png)
+![Screenshot that shows an example of the Kubernetes Node hierarchy in the performance view.](./media/container-insights-analyze/containers-nodes-view.png)
-Windows Server containers that run the Windows Server 2019 OS are shown after all of the Linux-based nodes in the list. When you expand a Windows Server node, you can view one or more pods and containers that run on the node. After a node is selected, the properties pane shows version information.
+Windows Server containers that run the Windows Server 2019 OS are shown after all the Linux-based nodes in the list. When you expand a Windows Server node, you can view one or more pods and containers that run on the node. After a node is selected, the properties pane shows version information.
-![Example Node hierarchy with Windows Server nodes listed](./media/container-insights-analyze/nodes-view-windows.png)
+![Screenshot that shows an example Node hierarchy with Windows Server nodes listed.](./media/container-insights-analyze/nodes-view-windows.png)
Azure Container Instances virtual nodes that run the Linux OS are shown after the last AKS cluster node in the list. When you expand a Container Instances virtual node, you can view one or more Container Instances pods and containers that run on the node. Metrics aren't collected and reported for nodes, only for pods.
-![Example Node hierarchy with Container Instances listed](./media/container-insights-analyze/nodes-view-aci.png)
+![Screenshot that shows an example Node hierarchy with Container Instances listed.](./media/container-insights-analyze/nodes-view-aci.png)
From an expanded node, you can drill down from the pod or container that runs on the node to the controller to view performance data filtered for that controller. Select the value under the **Controller** column for the specific node.
-![Screenshot shows the drill-down from node to controller in the performance view](./media/container-insights-analyze/drill-down-node-controller.png)
-
-Select controllers or containers at the top of the page to review the status and resource utilization for those objects. To review memory utilization, in the **Metric** drop-down list, select **Memory RSS** or **Memory working set**. **Memory RSS** is supported only for Kubernetes version 1.8 and later. Otherwise, you view values for **Min&nbsp;%** as *NaN&nbsp;%*, which is a numeric data type value that represents an undefined or unrepresentable value.
+![Screenshot that shows the drill-down from node to controller in the performance view.](./media/container-insights-analyze/drill-down-node-controller.png)
-![Container nodes performance view](./media/container-insights-analyze/containers-node-metric-dropdown.png)
+Select controllers or containers at the top of the page to review the status and resource utilization for those objects. To review memory utilization, in the **Metric** dropdown list, select **Memory RSS** or **Memory working set**. **Memory RSS** is supported only for Kubernetes version 1.8 and later. Otherwise, you view values for **Min&nbsp;%** as *NaN&nbsp;%*, which is a numeric data type value that represents an undefined or unrepresentable value.
-**Memory working set** shows both the resident memory and virtual memory (cache) included and is a total of what the application is using. **Memory RSS** shows only main memory (which is nothing but the resident memory in other words). This metric shows the actual capacity of available memory. What is the difference between resident memory and virtual memory?
+![Screenshot that shows a Container nodes performance view.](./media/container-insights-analyze/containers-node-metric-dropdown.png)
-- Resident memory or main memory, is the actual amount of machine memory available to the nodes of the cluster.
+**Memory working set** shows both the resident memory and virtual memory (cache) and is a total of what the application is using. **Memory RSS** shows only main memory (that is, the resident memory). This metric shows the actual capacity of available memory. What's the difference between resident memory and virtual memory?
-- Virtual memory is reserved hard disk space (cache) used by the operating system to swap data from memory to disk when under memory pressure, and then fetch it back to memory when needed.
+- **Resident memory**, or main memory, is the actual amount of machine memory available to the nodes of the cluster.
+- **Virtual memory** is reserved hard disk space (cache) used by the operating system to swap data from memory to disk when under memory pressure, and then fetch it back to memory when needed.
By default, performance data is based on the last six hours, but you can change the window by using the **TimeRange** option at the upper left. You also can filter the results within the time range by selecting **Min**, **Avg**, **50th**, **90th**, **95th**, and **Max** in the percentile selector.
-![Percentile selection for data filtering](./media/container-insights-analyze/containers-metric-percentile-filter.png)
+![Screenshot that shows a percentile selection for data filtering.](./media/container-insights-analyze/containers-metric-percentile-filter.png)
When you hover over the bar graph under the **Trend** column, each bar shows either CPU or memory usage, depending on which metric is selected, within a sample period of 15 minutes. After you select the trend chart through a keyboard, use the Alt+Page up key or Alt+Page down key to cycle through each bar individually. You get the same details that you would if you hovered over the bar.
-![Trend bar chart hover-over example](./media/container-insights-analyze/containers-metric-trend-bar-01.png)
+![Screenshot that shows a Trend bar chart hover-over example.](./media/container-insights-analyze/containers-metric-trend-bar-01.png)
In the next example, for the first node in the list, *aks-nodepool1-*, the value for **Containers** is 9. This value is a rollup of the total number of containers deployed.
-![Rollup of containers-per-node example](./media/container-insights-analyze/containers-nodes-containerstotal.png)
+![Screenshot that shows a rollup of containers-per-node example.](./media/container-insights-analyze/containers-nodes-containerstotal.png)
This information can help you quickly identify whether you have a proper balance of containers between nodes in your cluster.
The information that's presented when you view the **Nodes** tab is described in
| Controller | Only for containers and pods. It shows which controller it resides in. Not all pods are in a controller, so some might display **N/A**. | | Trend Min&nbsp;%, Avg&nbsp;%, 50th&nbsp;%, 90th&nbsp;%, 95th&nbsp;%, Max&nbsp;% | Bar graph trend represents the average percentile metric percentage of the controller. |
-You may notice a workload after expanding a node named **Other process**. It represents non-containerized processes that run on your node, and includes:
-
-* Self-managed or managed Kubernetes non-containerized processes
-
-* Container run-time processes
+You might notice a workload after expanding a node named **Other process**. It represents non-containerized processes that run on your node, and includes:
-* Kubelet
+* Self-managed or managed Kubernetes non-containerized processes.
+* Container run-time processes.
+* Kubelet.
+* System processes running on your node.
+* Other non-Kubernetes workloads running on node hardware or a VM.
-* System processes running on your node
-
-* Other non-Kubernetes workloads running on node hardware or VM
-
-It is calculated by: *Total usage from CAdvisor* - *Usage from containerized process*.
+It's calculated by *Total usage from CAdvisor* - *Usage from containerized process*.
In the selector, select **Controllers**.
-![Select Controllers view](./media/container-insights-analyze/containers-controllers-tab.png)
+![Screenshot that shows selecting Controllers.](./media/container-insights-analyze/containers-controllers-tab.png)
Here you can view the performance health of your controllers and Container Instances virtual node controllers or virtual node pods not connected to a controller.
-![\<Name> controllers performance view](./media/container-insights-analyze/containers-controllers-view.png)
+![Screenshot that shows a \<Name> controllers performance view.](./media/container-insights-analyze/containers-controllers-view.png)
The row hierarchy starts with a controller. When you expand a controller, you view one or more pods. Expand a pod, and the last row displays the container grouped to the pod. From an expanded controller, you can drill down to the node it's running on to view performance data filtered for that node. Container Instances pods not connected to a controller are listed last in the list.
-![Example Controllers hierarchy with Container Instances pods listed](./media/container-insights-analyze/controllers-view-aci.png)
+![Screenshot that shows an example Controllers hierarchy with Container Instances pods listed.](./media/container-insights-analyze/controllers-view-aci.png)
Select the value under the **Node** column for the specific controller.
-![Example drill-down from controller to node in the performance view](./media/container-insights-analyze/drill-down-controller-node.png)
+![Screenshot that shows an example drill-down from controller to node in the performance view.](./media/container-insights-analyze/drill-down-controller-node.png)
The information that's displayed when you view controllers is described in the following table. | Column | Description | |--|-| | Name | The name of the controller.|
-| Status | The rollup status of the containers after it's finished running with status such as *OK*, *Terminated*, *Failed*, *Stopped*, or *Paused*. If the container is running but the status either wasn't properly displayed or wasn't picked up by the agent and hasn't responded for more than 30 minutes, the status is *Unknown*. Additional details of the status icon are provided in the following table.|
+| Status | The rollup status of the containers after they've finished running, with a status such as **OK**, **Terminated**, **Failed**, **Stopped**, or **Paused**. If the container is running but the status either wasn't properly displayed or wasn't picked up by the agent and hasn't responded for more than 30 minutes, the status is **Unknown**. More details of the status icon are provided in the following table.|
| Min&nbsp;%, Avg&nbsp;%, 50th&nbsp;%, 90th&nbsp;%, 95th&nbsp;%, Max&nbsp;%| Rollup average of the average percentage of each entity for the selected metric and percentile. | | Min, Avg, 50th, 90th, 95th, Max | Rollup of the average CPU millicore or memory performance of the container for the selected percentile. The average value is measured from the CPU/Memory limit set for a pod. | | Containers | Total number of containers for the controller or pod. |
The icons in the status field indicate the online status of the containers.
| Icon | Status | |--|-|
-| ![Ready running status icon](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
-| ![Waiting or Paused status icon](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
-| ![Last reported running status icon](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded for more than 30 minutes|
-| ![Successful status icon](./media/container-insights-analyze/containers-green-icon.png) | Successfully stopped or failed to stop|
+| ![Ready running status icon.](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
+| ![Waiting or Paused status icon.](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
+| ![Last reported running status icon.](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded for more than 30 minutes|
+| ![Successful status icon.](./media/container-insights-analyze/containers-green-icon.png) | Successfully stopped or failed to stop|
-The status icon displays a count based on what the pod provides. It shows the worst two states, and when you hover over the status, it displays a rollup status from all pods in the container. If there isn't a ready state, the status value displays **(0)**.
+The status icon displays a count based on what the pod provides. It shows the worst two states. When you hover over the status, it displays a rollup status from all pods in the container. If there isn't a ready state, the status value displays **(0)**.
In the selector, select **Containers**.
-![Select Containers view](./media/container-insights-analyze/containers-containers-tab.png)
+![Screenshot that shows selecting Containers.](./media/container-insights-analyze/containers-containers-tab.png)
-Here you can view the performance health of your Azure Kubernetes and Azure Container Instances containers.
+Here you can view the performance health of your AKS and Container Instances containers.
-![\<Name> containers performance view](./media/container-insights-analyze/containers-containers-view.png)
+![Screenshot that shows a \<Name> containers performance view.](./media/container-insights-analyze/containers-containers-view.png)
From a container, you can drill down to a pod or node to view performance data filtered for that object. Select the value under the **Pod** or **Node** column for the specific container.
-![Example drill-down from node to containers in the performance view](./media/container-insights-analyze/drill-down-controller-node.png)
+![Screenshot that shows an example drill-down from node to containers in the performance view.](./media/container-insights-analyze/drill-down-controller-node.png)
The information that's displayed when you view containers is described in the following table. | Column | Description | |--|-| | Name | The name of the controller.|
-| Status | Status of the containers, if any. Additional details of the status icon are provided in the next table.|
+| Status | Status of the containers, if any. More details of the status icon are provided in the next table.|
| Min&nbsp;%, Avg&nbsp;%, 50th&nbsp;%, 90th&nbsp;%, 95th&nbsp;%, Max&nbsp;% | The rollup of the average percentage of each entity for the selected metric and percentile. | | Min, Avg, 50th, 90th, 95th, Max | The rollup of the average CPU millicore or memory performance of the container for the selected percentile. The average value is measured from the CPU/Memory limit set for a pod. | | Pod | Container where the pod resides.|
The icons in the status field indicate the online statuses of pods, as described
| Icon | Status | |--|-|
-| ![Ready running status icon](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
-| ![Waiting or Paused status icon](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
-| ![Last reported running status icon](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded in more than 30 minutes|
-| ![Terminated status icon](./media/container-insights-analyze/containers-terminated-icon.png) | Successfully stopped or failed to stop|
-| ![Failed status icon](./media/container-insights-analyze/containers-failed-icon.png) | Failed state |
+| ![Ready running status icon.](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
+| ![Waiting or Paused status icon.](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
+| ![Last reported running status icon.](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded in more than 30 minutes|
+| ![Terminated status icon.](./media/container-insights-analyze/containers-terminated-icon.png) | Successfully stopped or failed to stop|
+| ![Failed status icon.](./media/container-insights-analyze/containers-failed-icon.png) | Failed state |
## Monitor and visualize network configurations
-Azure Network Policy Manager includes informative Prometheus metrics that allow you to monitor and better understand your network configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. For details, see [Monitor and Visualize Network Configurations with Azure NPM](../../virtual-network/kubernetes-network-policies.md#monitor-and-visualize-network-configurations-with-azure-npm).
+Azure Network Policy Manager includes informative Prometheus metrics that you can use to monitor and better understand your network configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. For more information, see [Monitor and visualize network configurations with Azure NPM](../../virtual-network/kubernetes-network-policies.md#monitor-and-visualize-network-configurations-with-azure-npm).
## Workbooks
-Workbooks combine text, log queries, metrics, and parameters into rich interactive reports that allow you to analyze cluster performance. See [Workbooks in Container insights](container-insights-reports.md) for a description of the workbooks available for Container insights.
-
+Workbooks combine text, log queries, metrics, and parameters into rich interactive reports that you can use to analyze cluster performance. For a description of the workbooks available for Container insights, see [Workbooks in Container insights](container-insights-reports.md).
## Next steps -- Review [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.--- View [log query examples](container-insights-log-query.md) to see predefined queries and examples to evaluate or customize to alert, visualize, or analyze your clusters.--- View [monitor cluster health](./container-insights-overview.md) to learn about viewing the health status your Kubernetes cluster.
+- See [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.
+- See [Log query examples](container-insights-log-query.md) to see predefined queries and examples to evaluate or customize to alert, visualize, or analyze your clusters.
+- See [Monitor cluster health](./container-insights-overview.md) to learn about viewing the health status of your Kubernetes cluster.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
The following are examples of what changes you can apply to your cluster by modi
ttlSecondsAfterFinished: 100 ```
-After applying one or more of these changes to your ConfigMaps, see [Applying updated ConfigMap](container-insights-prometheus-integration.md#applying-updated-configmap) to apply it to your cluster.
+After applying one or more of these changes to your ConfigMaps, see [Apply updated ConfigMap](container-insights-prometheus-integration.md#apply-updated-configmap) to apply it to your cluster.
### Prometheus metrics scraping
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Title: How to query logs from Container insights
-description: Container insights collects metrics and log data and this article describes the records and includes sample queries.
+ Title: Query logs from Container insights
+description: Container insights collects metrics and log data, and this article describes the records and includes sample queries.
Last updated 07/19/2021
-# How to query logs from Container insights
+# Query logs from Container insights
-Container insights collects performance metrics, inventory data, and health state information from container hosts and containers. The data is collected every three minutes and forwarded to the Log Analytics workspace in Azure Monitor where it's available for [log queries](../logs/log-query-overview.md) using [Log Analytics](../logs/log-analytics-overview.md) in Azure Monitor. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting. Azure Monitor Logs can help you look for trends, diagnose bottlenecks, forecast, or correlate data that can help you determine whether the current cluster configuration is performing optimally.
+Container insights collects performance metrics, inventory data, and health state information from container hosts and containers. The data is collected every three minutes and forwarded to the Log Analytics workspace in Azure Monitor where it's available for [log queries](../logs/log-query-overview.md) using [Log Analytics](../logs/log-analytics-overview.md) in Azure Monitor.
-See [Using queries in Azure Monitor Log Analytics](../logs/queries.md) for information on using these queries and [Log Analytics tutorial](../logs/log-analytics-tutorial.md) for a complete tutorial on using Log Analytics to run queries and work with their results.
+You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting. Azure Monitor Logs can help you look for trends, diagnose bottlenecks, forecast, or correlate data that can help you determine whether the current cluster configuration is performing optimally.
+
+For information on using these queries, see [Using queries in Azure Monitor Log Analytics](../logs/queries.md). For a complete tutorial on using Log Analytics to run queries and work with their results, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md).
## Open Log Analytics
-There are multiple options for starting Log Analytics, each starting with a different [scope](../logs/scope.md). For access to all data in the workspace, select **Logs** from the **Monitor** menu. To limit the data to a single Kubernetes cluster, select **Logs** from that cluster's menu.
+There are multiple options for starting Log Analytics. Each option starts with a different [scope](../logs/scope.md). For access to all data in the workspace, on the **Monitoring** menu, select **Logs**. To limit the data to a single Kubernetes cluster, select **Logs** from that cluster's menu.
+ ## Existing log queries
-You don't necessarily need to understand how to write a log query to use Log Analytics. There are multiple prebuilt queries that you can select and either run without modification or use as a start to a custom query. Click **Queries** at the top of the Log Analytics screen and view queries with a **Resource type** of **Kubernetes Services**.
+You don't necessarily need to understand how to write a log query to use Log Analytics. You can select from multiple prebuilt queries. You can either run the queries without modification or use them as a start to a custom query. Select **Queries** at the top of the Log Analytics screen, and view queries with a **Resource type** of **Kubernetes Services**.
+ ## Container tables
-See [Azure Monitor table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services) for a list of tables and their detailed descriptions used by Container insights. All of these tables are available for log queries.
+For a list of tables and their detailed descriptions used by Container insights, see the [Azure Monitor table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services). All these tables are available for log queries.
## Example log queries
-It's often useful to build queries that start with an example or two and then modify them to fit your requirements. To help build more advanced queries, you can experiment with the following sample queries:
+
+It's often useful to build queries that start with an example or two and then modify them to fit your requirements. To help build more advanced queries, you can experiment with the following sample queries.
### List all of a container's lifecycle information
Perf
| summarize AvgUsedRssMemoryBytes = avg(CounterValue) by bin(TimeGenerated, 30m), InstanceName ```
-### Requests Per Minute with Custom Metrics
+### Requests per minute with custom metrics
```kusto InsightsMetrics
InsightsMetrics
| project RequestsPerMinute = Val - prev(Val), TimeGenerated | render barchart ```+ ### Pods by name and namespace ```kusto
on ContainerID
``` ### Pod scale-out (HPA)
-Returns the number of scaled out replicas in each deployment. Calculates the scale-out percentage with the maximum number of replicas configured in HPA.
+This query returns the number of scaled-out replicas in each deployment. It calculates the scale-out percentage with the maximum number of replicas configured in HPA.
```kusto let _minthreshold = 70; // minimum threshold goes here if you want to setup as an alert
KubePodInventory
on deployment_hpa ```
-### Nodepool scale-outs
-Returns the number of active nodes in each node pool. Calculates the number of available active nodes and the max node configuration in the auto-scaler settings to determine the scale-out percentage. See commented lines in query to use it for a **number of results** alert rule.
+### Nodepool scale-outs
+
+This query returns the number of active nodes in each node pool. It calculates the number of available active nodes and the max node configuration in the autoscaler settings to determine the scale-out percentage. See commented lines in the query to use it for a **number of results** alert rule.
```kusto let nodepoolMaxnodeCount = 10; // the maximum number of nodes in your auto scale setting goes here.
KubeNodeInventory
| extend nodepoolType = todynamic(Labels) //Parse the labels to get the list of node pool types | extend nodepoolName = todynamic(nodepoolType[0].agentpool) // parse the label to get the nodepool name or set the specific nodepool name (like nodepoolName = 'agentpool') | summarize nodeCount = count(Computer) by ClusterName, tostring(nodepoolName), TimeGenerated
-//(Uncomment the below two lines to set this as an log search alert)
+//(Uncomment the below two lines to set this as a log search alert)
//| extend scaledpercent = iff(((nodeCount * 100 / nodepoolMaxnodeCount) >= _minthreshold and (nodeCount * 100 / nodepoolMaxnodeCount) < _maxthreshold), "warn", "normal") //| where scaledpercent == 'warn' | summarize arg_max(TimeGenerated, *) by nodeCount, ClusterName, tostring(nodepoolName)
KubeNodeInventory
``` ### System containers (replicaset) availability
-Returns the system containers (replicasets) and report the unavailable percentage. See commented lines in query to use it for a **number of results** alert rule.
+
+This query returns the system containers (replicasets) and reports the unavailable percentage. See commented lines in the query to use it for a **number of results** alert rule.
```kusto let startDateTime = 5m; // the minimum time interval goes here
KubePodInventory
``` ### System containers (daemonsets) availability
-Returns the system containers (daemonsets) and report the unavailable percentage. See commented lines in query to use it for a **number of results** alert rule.
+
+This query returns the system containers (daemonsets) and reports the unavailable percentage. See commented lines in the query to use it for a **number of results** alert rule.
```kusto let startDateTime = 5m; // the minimum time interval goes here
KubePodInventory
``` ## Resource logs
-Resource logs for AKS are stored in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table You can distinguish different logs with the **Category** column. See [AKS reference resource logs](../../aks/monitor-aks-reference.md) for a description of each category. The following examples require a diagnostic extension to send resource logs for an AKS cluster to a Log Analytics workspace. See [Configure monitoring](../../aks/monitor-aks.md#configure-monitoring) for details.
+
+Resource logs for AKS are stored in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. You can distinguish different logs with the **Category** column. For a description of each category, see [AKS reference resource logs](../../aks/monitor-aks-reference.md). The following examples require a diagnostic extension to send resource logs for an AKS cluster to a Log Analytics workspace. For more information, see [Configure monitoring](../../aks/monitor-aks.md#configure-monitoring).
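If you want to run the same kind of query outside the portal, one option is the `azure-monitor-query` Python client, sketched here with placeholder values for the workspace ID and the query text:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Placeholder: replace with your Log Analytics workspace ID (GUID).
workspace_id = "00000000-0000-0000-0000-000000000000"

# Count AKS resource log records per category over the last hour.
query = "AzureDiagnostics | summarize count() by Category"

# Assumes the query succeeds; see the SDK docs for partial-result handling.
response = client.query_workspace(workspace_id, query, timespan=timedelta(hours=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```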
### API server logs
InsightsMetrics
```
-To view Prometheus metrics scraped by Azure Monitor filtered by Namespace, specify "prometheus". Here's a sample query to view Prometheus metrics from the `default` kubernetes namespace.
+To view Prometheus metrics scraped by Azure Monitor and filtered by namespace, specify "prometheus". Here's a sample query to view Prometheus metrics from the `default` Kubernetes namespace.
``` InsightsMetrics
InsightsMetrics
| where Name contains "some_prometheus_metric" ```
-### Query config or scraping errors
+### Query configuration or scraping errors
To investigate any configuration or scraping errors, the following example query returns informational events from the `KubeMonAgentEvents` table.
KubeMonAgentEvents | where Level != "Info"
The output shows results similar to the following example:
-![Log query results of informational events from agent](./media/container-insights-log-query/log-query-example-kubeagent-events.png)
+![Screenshot that shows log query results of informational events from an agent.](./media/container-insights-log-query/log-query-example-kubeagent-events.png)
## Next steps
-Container insights does not include a predefined set of alerts. Review the [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.
+Container insights doesn't include a predefined set of alerts. To learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures, see [Create performance alerts with Container insights](./container-insights-log-alerts.md).
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights | Microsoft Docs
-description: This article describes Container insights that monitors AKS Container Insights solution and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure.
+description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure.
Last updated 09/08/2020
Container insights is a feature designed to monitor the performance of container workloads deployed to: -- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md)-- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine)-- [Azure Container Instances](../../container-instances/container-instances-overview.md)-- Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises-- [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md)
+- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).
+- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine).
+- [Azure Container Instances](../../container-instances/container-instances-overview.md).
+- Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises.
+- [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
-Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Moby and any CRI compatible runtime such as CRI-O and ContainerD. Docker is no longer supported as a container runtime as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].
+Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Moby and any CRI-compatible runtime such as CRI-O and ContainerD. Docker is no longer supported as a container runtime as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].
>[!NOTE]
-> Container insights support for Windows Server 2022 operating system in public preview.
+> Container insights support for Windows Server 2022 operating system is in public preview.
Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications.
-Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md), and log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
+Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md). Log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
## Features of Container insights
-Container insights delivers a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads.
+Container insights delivers a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads. You can:
- Identify resource bottlenecks by identifying AKS containers running on the node and their average processor and memory utilization. - Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances. - View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod. - Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod. - Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.-- Configure alerts to proactively notify you or record it when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.-- Integrate with [Prometheus](https://prometheus.io/docs/introduction/overview/) to view application and workload metrics it collects from nodes and Kubernetes using [queries](container-insights-log-query.md) to create custom alerts, dashboards, and perform detailed analysis.
+- Configure alerts to proactively notify you or record when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
+- Integrate with [Prometheus](https://prometheus.io/docs/introduction/overview/) to view application and workload metrics it collects from nodes and Kubernetes by using [queries](container-insights-log-query.md) to create custom alerts and dashboards and perform detailed analysis.
- Monitor container workloads [deployed to AKS Engine](https://github.com/Azure/aks-engine) on-premises and [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview). - Monitor container workloads [deployed to Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). --
-Check out the following video providing an intermediate level deep dive to help you learn about monitoring your AKS cluster with Container insights. Note that the video refers to *Azure Monitor for Containers* which is the previous name for *Container insights*.
+The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
+## Access Container insights
-## How to access Container insights
-Access Container insights in the Azure portal from Azure Monitor or directly from the selected AKS cluster. The Azure Monitor menu gives you the global perspective of all the containers deployed amd which are monitored, allowing you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
-
-![Overview of methods to access Container insights](./media/container-insights-overview/azmon-containers-experience.png)
+You can access Container insights in the Azure portal from Azure Monitor or directly from the selected AKS cluster. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
+![Screenshot that shows an overview of methods to access Container insights.](./media/container-insights-overview/azmon-containers-experience.png)
## Differences between Windows and Linux clusters
-The main differences in monitoring a Windows Server cluster compared to a Linux cluster include the following:
-- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows node and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
+The main differences in monitoring a Windows Server cluster compared to a Linux cluster include:
+
+- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
- Disk storage capacity information isn't available for Windows nodes. - Only pod environments are monitored, not Docker environments. - With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers. ## Next steps
-To begin monitoring your Kubernetes cluster, review [How to enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
+To begin monitoring your Kubernetes cluster, review [Enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
<!-- LINKS - external --> [aks-release-notes]: https://github.com/Azure/AKS/releases
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
Title: Configure Container insights Prometheus Integration | Microsoft Docs
+ Title: Configure Container insights Prometheus integration | Microsoft Docs
description: This article describes how you can configure the Container insights agent to scrape metrics from Prometheus with your Kubernetes cluster. Last updated 04/22/2020
# Configure scraping of Prometheus metrics with Container insights
-[Prometheus](https://prometheus.io/) is a popular open source metric monitoring solution and is a part of the [Cloud Native Compute Foundation](https://www.cncf.io/). Container insights provides a seamless onboarding experience to collect Prometheus metrics. Typically, to use Prometheus, you need to set up and manage a Prometheus server with a store. By integrating with Azure Monitor, a Prometheus server is not required. You just need to expose the Prometheus metrics endpoint through your exporters or pods (application), and the containerized agent for Container insights can scrape the metrics for you.
+[Prometheus](https://prometheus.io/) is a popular open-source metric monitoring solution and is a part of the [Cloud Native Computing Foundation](https://www.cncf.io/). Container insights provides a seamless onboarding experience to collect Prometheus metrics.
-![Container monitoring architecture for Prometheus](./media/container-insights-prometheus-integration/monitoring-kubernetes-architecture.png)
+Typically, to use Prometheus, you need to set up and manage a Prometheus server with a store. If you integrate with Azure Monitor, a Prometheus server isn't required. You only need to expose the Prometheus metrics endpoint through your exporters or pods (application). Then the containerized agent for Container insights can scrape the metrics for you.
+
+![Diagram that shows container monitoring architecture for Prometheus.](./media/container-insights-prometheus-integration/monitoring-kubernetes-architecture.png)
>[!NOTE]
->The minimum agent version supported for scraping Prometheus metrics is ciprod07092019 or later, and the agent version supported for writing configuration and agent errors in the `KubeMonAgentEvents` table is ciprod10112019. For Azure Red Hat OpenShift and Red Hat OpenShift v4, agent version ciprod04162020 or higher.
+>The minimum agent version supported for scraping Prometheus metrics is ciprod07092019. The agent version supported for writing configuration and agent errors in the `KubeMonAgentEvents` table is ciprod10112019. For Azure Red Hat OpenShift and Red Hat OpenShift v4, the agent version is ciprod04162020 or later.
>
->For more information about the agent versions and what's included in each release, see [agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
->To verify your agent version, click on **Insights** Tab of the resource, from the **Nodes** tab select a node, and in the properties pane note value of the **Agent Image Tag** property.
+>For more information about the agent versions and what's included in each release, see [Agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
+>To verify your agent version, select the **Insights** tab of the resource. From the **Nodes** tab, select a node. In the properties pane, note the value of the **Agent Image Tag** property.
Scraping of Prometheus metrics is supported with Kubernetes clusters hosted on: -- Azure Kubernetes Service (AKS)-- Azure Stack or on-premises-- Azure Arc enabled Kubernetes
- - Azure Red Hat OpenShift and Red Hat OpenShift version 4.x through cluster connect to Azure Arc
+- Azure Kubernetes Service (AKS).
+- Azure Stack or on-premises.
+- Azure Arc-enabled Kubernetes.
+- Azure Red Hat OpenShift and Red Hat OpenShift version 4.x through cluster connect to Azure Arc.
### Prometheus scraping settings Active scraping of metrics from Prometheus is performed from one of two perspectives:
-* Cluster-wide - HTTP URL and discover targets from listed endpoints of a service. For example, k8s services such as kube-dns and kube-state-metrics, and pod annotations specific to an application. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus data_collection_settings.cluster]*.
-* Node-wide - HTTP URL and discover targets from listed endpoints of a service. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus_data_collection_settings.node]*.
+* **Cluster-wide**: Specify an HTTP URL and discover targets from listed endpoints of a service, for example, Kubernetes services such as kube-dns and kube-state-metrics, and pod annotations specific to an application. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus data_collection_settings.cluster]*.
+* **Node-wide**: Specify an HTTP URL and discover targets from listed endpoints of a service. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus_data_collection_settings.node]*.
| Endpoint | Scope | Example | |-|-||
-| Pod annotation | Cluster-wide | annotations: <br>`prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
+| Pod annotation | Cluster-wide | Annotations: <br>`prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`https://metrics-server.kube-system.svc.cluster.local/metrics` |
-| url/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
+| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
-When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address and then the resolved service is scraped.
+When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
|Scope | Key | Data type | Value | Description | ||--|--|-|-| | Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
-| | `urls` | String | Comma-separated array | HTTP endpoint (Either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of node IP address. Must be all uppercase.) |
-| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`.|
-| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
-| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod. `monitor_kubernetes_pods` must be set to `true`. |
-| | `prometheus.io/scheme` | String | http or https | Defaults to scrapping over HTTP. If necessary, set to `https`. |
-| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path on which to fetch metrics from. If the metrics path is not `/metrics`, define it with this annotation. |
-| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If port is not set, it will default to 9102. |
+| | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example, `kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics","http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics"]`.|
+| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
+| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
+| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
+| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
+| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. |
| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
-| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (Either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of node IP address. Must be all uppercase.) |
-| Node-wide or Cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, h. |
-| Node-wide or Cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
+| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection interval for *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* by using time units such as s, m, and h. |
+| Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
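As a quick illustration of how several of these settings fit together, the following sketch shows a cluster-wide section of the ConfigMap. It's an example only, modeled on the settings in the preceding table; the metric names and URL are placeholders that you would replace with your own values.

```
prometheus-data-collection-settings: |-
  [prometheus_data_collection_settings.cluster]
      # Collection interval; accepts time units such as s, m, and h
      interval = "1m"
      # Allowlist of metrics to collect (set before any fielddrop entries)
      fieldpass = ["metric_to_pass1", "metric_to_pass2"]
      # Metrics to exclude from collection
      fielddrop = ["metric_to_drop"]
      # HTTP endpoints to scrape
      urls = ["http://myurl:9101/metrics"]
```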
-ConfigMaps is a global list and there can be only one ConfigMap applied to the agent. You cannot have another ConfigMaps overruling the collections.
+ConfigMaps is a global list and there can be only one ConfigMap applied to the agent. You can't have another ConfigMap overruling the collections.
## Configure and deploy ConfigMaps
Perform the following steps to configure your ConfigMap configuration file for t
* Azure Stack or on-premises * Azure Red Hat OpenShift version 4.x and Red Hat OpenShift version 4.x
-1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap yaml file and save it as container-azm-ms-agentconfig.yaml.
+1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as container-azm-ms-agentconfig.yaml.
>[!NOTE]
- >This step is not required when working with Azure Red Hat OpenShift since the ConfigMap template already exists on the cluster.
+ >This step isn't required when you're working with Azure Red Hat OpenShift because the ConfigMap template already exists on the cluster.
-2. Edit the ConfigMap yaml file with your customizations to scrape Prometheus metrics.
+1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
- >[!NOTE]
- >If you are editing the ConfigMap yaml file for Azure Red Hat OpenShift, first run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
+ If you're editing the ConfigMap YAML file for Azure Red Hat OpenShift, first run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
>[!NOTE]
- >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
+ >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
>``` >metadata: > annotations: > openshift.io/reconcile-protect: "true" >```
- - To collect of Kubernetes services cluster-wide, configure the ConfigMap file using the following example.
+ - To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
``` prometheus-data-collection-settings: |-
Perform the following steps to configure your ConfigMap configuration file for t
kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"] ```
- - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file using the following example.
+ - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
``` prometheus-data-collection-settings: |-
Perform the following steps to configure your ConfigMap configuration file for t
urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from ```
- - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following in the ConfigMap:
+ - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
``` prometheus-data-collection-settings: |-
Perform the following steps to configure your ConfigMap configuration file for t
``` >[!NOTE]
- >$NODE_IP is a specific Container insights parameter and can be used instead of node IP address. It must be all uppercase.
+ >$NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
- - To configure scraping of Prometheus metrics by specifying a pod annotation, perform the following steps:
+ - To configure scraping of Prometheus metrics by specifying a pod annotation:
- 1. In the ConfigMap, specify the following:
+ 1. In the ConfigMap, specify the following configuration:
``` prometheus-data-collection-settings: |-
Perform the following steps to configure your ConfigMap configuration file for t
monitor_kubernetes_pods = true ```
- 2. Specify the following configuration for pod annotations:
+ 1. Specify the following configuration for pod annotations:
``` - prometheus.io/scrape:"true" #Enable scraping for this pod
Perform the following steps to configure your ConfigMap configuration file for t
- prometheus.io/port:"8000" #If port is not 9102 use this annotation ```
- If you want to restrict monitoring to specific namespaces for pods that have annotations, for example only include pods dedicated for production workloads, set the `monitor_kubernetes_pod` to `true` in ConfigMap, and add the namespace filter `monitor_kubernetes_pods_namespaces` specifying the namespaces to scrape from. For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`
+   If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated for production workloads, set `monitor_kubernetes_pods` to `true` in ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
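   For instance, a minimal sketch of the relevant cluster-wide ConfigMap section with the namespace filter applied might look like the following; the namespace names are placeholders.

   ```
   prometheus-data-collection-settings: |-
     [prometheus_data_collection_settings.cluster]
         interval = "1m"
         # Scrape pods that carry the prometheus.io annotations
         monitor_kubernetes_pods = true
         # Only scrape annotated pods in these namespaces
         monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]
   ```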
-3. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+1. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect. You must restart all omsagent pods manually. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before taking effect. You must restart all omsagent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
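If you need to restart the omsagent pods yourself, one simple approach is to delete them and let Kubernetes re-create them with the updated configuration. The following commands are a sketch only; they assume the default kube-system namespace, and the pod name shown is a placeholder that you replace with a name from your own cluster.

``` bash
# List the omsagent pods to get their names
kubectl get pods -n kube-system | grep omsagent

# Delete a pod; its DaemonSet or ReplicaSet re-creates it and picks up the new ConfigMap
kubectl delete pod omsagent-fdf58 -n kube-system
```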
-## Configure and deploy ConfigMaps - Azure Red Hat OpenShift v3
+## Configure and deploy ConfigMaps for Azure Red Hat OpenShift v3
This section includes the requirements and steps to successfully configure your ConfigMap configuration file for Azure Red Hat OpenShift v3.x cluster. >[!NOTE]
->For Azure Red Hat OpenShift v3.x, a template ConfigMap file is created in the *openshift-azure-logging* namespace. It is not configured to actively scrape metrics or data collection from the agent.
+>For Azure Red Hat OpenShift v3.x, a template ConfigMap file is created in the *openshift-azure-logging* namespace. It isn't configured to actively scrape metrics or data collection from the agent.
### Prerequisites
-Before you start, confirm you are a member of the Customer Cluster Admin role of your Azure Red Hat OpenShift cluster to configure the containerized agent and Prometheus scraping settings. To verify you are a member of the *osa-customer-admins* group, run the following command:
+Before you start, confirm you're a member of the Customer Cluster Admin role of your Azure Red Hat OpenShift cluster to configure the containerized agent and Prometheus scraping settings. To verify you're a member of the *osa-customer-admins* group, run the following command:
``` bash oc get groups ```
-The output will resemble the following:
+The output will resemble the following example:
``` bash NAME USERS osa-customer-admins <your-user-account>@<your-tenant-name>.onmicrosoft.com ```
-If you are member of *osa-customer-admins* group, you should be able to list the `container-azm-ms-agentconfig` ConfigMap using the following command:
+If you're a member of *osa-customer-admins* group, you should be able to list the `container-azm-ms-agentconfig` ConfigMap by using the following command:
``` bash oc get configmaps container-azm-ms-agentconfig -n openshift-azure-logging ```
-The output will resemble the following:
+The output will resemble the following example:
``` bash NAME DATA AGE
container-azm-ms-agentconfig 4 56m
### Enable monitoring
-Perform the following steps to configure your ConfigMap configuration file for your Azure Red Hat OpenShift v3.x cluster.
+To configure your ConfigMap configuration file for your Azure Red Hat OpenShift v3.x cluster:
-1. Edit the ConfigMap yaml file with your customizations to scrape Prometheus metrics. The ConfigMap template already exists on the Red Hat OpenShift v3 cluster. Run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
+1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics. The ConfigMap template already exists on the Red Hat OpenShift v3 cluster. Run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
>[!NOTE]
- >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
+ >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
>``` >metadata: > annotations: > openshift.io/reconcile-protect: "true" >```
- - To collect of Kubernetes services cluster-wide, configure the ConfigMap file using the following example.
+ - To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
``` prometheus-data-collection-settings: |-
Perform the following steps to configure your ConfigMap configuration file for y
kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"] ```
- - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file using the following example.
+ - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
``` prometheus-data-collection-settings: |-
Perform the following steps to configure your ConfigMap configuration file for y
urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from ```
- - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following in the ConfigMap:
+ - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
``` prometheus-data-collection-settings: |-
Perform the following steps to configure your ConfigMap configuration file for y
``` >[!NOTE]
- >$NODE_IP is a specific Container insights parameter and can be used instead of node IP address. It must be all uppercase.
+ >$NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
- - To configure scraping of Prometheus metrics by specifying a pod annotation, perform the following steps:
+ - To configure scraping of Prometheus metrics by specifying a pod annotation:
- 1. In the ConfigMap, specify the following:
+ 1. In the ConfigMap, specify the following configuration:
``` prometheus-data-collection-settings: |-
Perform the following steps to configure your ConfigMap configuration file for y
monitor_kubernetes_pods = true ```
- 2. Specify the following configuration for pod annotations:
+ 1. Specify the following configuration for pod annotations:
``` - prometheus.io/scrape:"true" #Enable scraping for this pod
Perform the following steps to configure your ConfigMap configuration file for y
- prometheus.io/port:"8000" #If port is not 9102 use this annotation ```
- If you want to restrict monitoring to specific namespaces for pods that have annotations, for example only include pods dedicated for production workloads, set the `monitor_kubernetes_pod` to `true` in ConfigMap, and add the namespace filter `monitor_kubernetes_pods_namespaces` specifying the namespaces to scrape from. For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`
+   If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated for production workloads, set `monitor_kubernetes_pods` to `true` in ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
-2. Save your changes in the editor.
+1. Save your changes in the editor.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before taking effect. Then all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods. Not all pods restart at the same time. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
-You can view the updated ConfigMap by running the command, `oc describe configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
+You can view the updated ConfigMap by running the command `oc describe configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
-## Applying updated ConfigMap
+## Apply updated ConfigMap
-If you have already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used, and then apply using the same commands as before.
+If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used. Then apply it by using the same commands as before.
For the following Kubernetes environments:
For the following Kubernetes environments:
- Azure Stack or on-premises - Azure Red Hat OpenShift and Red Hat OpenShift version 4.x
-run the command `kubectl apply -f <config3. map_yaml_file.yaml>`.
+run the command `kubectl apply -f <configmap_yaml_file.yaml>`.
-For an example, run the command, `Example: kubectl apply -f container-azm-ms-agentconfig.yaml` to open the file in your default editor to modify and then save it.
+For example, edit the ConfigMap file in your default editor, modify and save it, and then run the command `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a popup message is displayed that's similar to the following and includes the result: 'configmap "container-azm-ms-agentconfig' created to indicate the configmap resource created.
+The configuration change can take a few minutes to finish before taking effect. Then all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods. Not all pods restart at the same time. When the restarts are finished, a message appears that's similar to the following and includes the result "configmap 'container-azm-ms-agentconfig' created" to indicate the configmap resource was created.
## Verify configuration
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n=kube-system`.
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n=kube-system`.
>[!NOTE]
->This command is not applicable to Azure Red Hat OpenShift v3.x cluster.
->
+>This command isn't applicable to Azure Red Hat OpenShift v3.x cluster.
+>
-If there are configuration errors from the omsagent pods, the output will show errors similar to the following:
+If there are configuration errors from the omsagent pods, the output will show errors similar to the following example:
``` ***************Start Config Processing********************
config::unsupported/missing config schema version - 'v21' , using defaults
Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes and scraping of Prometheus metrics: -- From an agent pod logs using the same `kubectl logs` command
+- From an agent pod's logs by using the same `kubectl logs` command.
>[!NOTE]
- >This command is not applicable to Azure Red Hat OpenShift cluster.
+ >This command isn't applicable to Azure Red Hat OpenShift cluster.
> -- From Live Data (preview). Live Data (preview) logs show errors similar to the following:
+- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:
``` 2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host ``` - From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.- - For Azure Red Hat OpenShift v3.x and v4.x, check the omsagent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
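For example, a minimal Log Analytics query to surface recent scrape or configuration errors from the **KubeMonAgentEvents** table might look like the following sketch. It assumes the table's default schema, where entries that report no errors use the *Info* level.

```
KubeMonAgentEvents
| where TimeGenerated > ago(1h)
| where Level != "Info"
| project TimeGenerated, Level, Message, Tags
```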
-Errors prevent omsagent from parsing the file, causing it to restart and use the default configuration. After you correct the error(s) in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the yaml file and apply the updated ConfigMaps by running the command: `kubectl apply -f <configmap_yaml_file.yaml`.
+Errors prevent omsagent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
-For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command: `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
+For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
## Query Prometheus metrics data
-To view prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#query-prometheus-metrics-data) and [Query config or scraping errors](container-insights-log-query.md#query-config-or-scraping-errors).
+To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#query-prometheus-metrics-data) and [Query configuration or scraping errors](container-insights-log-query.md#query-configuration-or-scraping-errors).
## View Prometheus metrics in Grafana
-Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We have provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker) to get you started and reference to help you learn how to query additional data from your monitored clusters to visualize in custom Grafana dashboards.
+Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
## Review Prometheus data usage
-To identify the ingestion volume of each metrics size in GB per day to understand if it is high, the following query is provided.
+To identify the ingestion volume of each metric's size in GB per day and understand whether it's high, the following query is provided.
``` InsightsMetrics
InsightsMetrics
| render barchart ```
-The output will show results similar to the following:
+The output will show results similar to the following example.
-![Screenshot shows the log query results of data ingestion volume](./media/container-insights-prometheus-integration/log-query-example-usage-03.png)
+![Screenshot that shows the log query results of data ingestion volume.](./media/container-insights-prometheus-integration/log-query-example-usage-03.png)
To estimate the size of each metric in GB for a month and understand whether the volume of data ingested in the workspace is high, the following query is provided.
InsightsMetrics
| render barchart ```
-The output will show results similar to the following:
+The output will show results similar to the following example.
-![Log query results of data ingestion volume](./media/container-insights-prometheus-integration/log-query-example-usage-02.png)
+![Screenshot that shows log query results of data ingestion volume.](./media/container-insights-prometheus-integration/log-query-example-usage-02.png)
-Further information on how to analyze usage is available in [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
+For more information on how to analyze usage, see [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
## Next steps
-Learn more about configuring the agent collection settings for stdout, stderr, and environmental variables from container workloads [here](container-insights-agent-config.md).
+To learn more about configuring the agent collection settings for stdout, stderr, and environmental variables from container workloads, see [Configure agent data collection for Container insights](container-insights-agent-config.md).
azure-monitor Resource Manager Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md
param storageAccountId string
@description('The resource Id for the event hub authorization rule.') param eventHubAuthorizationRuleId string
-@description('The name of teh event hub.')
+@description('The name of the event hub.')
param eventHubName string resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
"eventHubName": { "type": "string", "metadata": {
- "description": "The name of teh event hub."
+ "description": "The name of the event hub."
} } },
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
Each workspace contains multiple tables that are organized into separate columns
[![Diagram that shows the Azure Monitor Logs structure.](media/data-platform-logs/logs-structure.png)](media/data-platform-logs/logs-structure.png#lightbox)
+> [!WARNING]
+> Table names are used for billing purposes, so they should not contain sensitive information.
+ ## Cost There's no direct cost for creating or maintaining a workspace. You're charged for the data sent to it, which is also known as data ingestion. You're charged for how long that data is stored, which is otherwise known as data retention. These costs might vary based on the data plan of each table, as described in [Log data plans (preview)](#log-data-plans-preview).
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
PARAMETERS
This cmdlet supports the common parameters: Verbose, Debug, ErrorAction, ErrorVariable, WarningAction, WarningVariable, OutBuffer, PipelineVariable, and OutVariable. For more information, see
- about_CommonParameters (https:/go.microsoft.com/fwlink/?LinkID=113216).
+ about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).
-- EXAMPLE 1 -- .\Install-VMInsights.ps1 -WorkspaceRegion eastus -WorkspaceId <WorkspaceId> -WorkspaceKey <WorkspaceKey> -SubscriptionId <SubscriptionId>
azure-netapp-files Azacsnap Cmd Ref Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-configure.md
na Previously updated : 04/21/2021 Last updated : 08/19/2022
Database section
# [SAP HANA](#tab/sap-hana)
-When adding a *SAP HANA database* to the configuration, the following values are required:
+When you add an *SAP HANA database* to the configuration, the following values are required:
- **HANA Server's Address** = The SAP HANA server hostname or IP address. - **HANA SID** = The SAP HANA System ID.
When adding a *SAP HANA database* to the configuration, the following values are
[Azure Backup](../backup/index.yml) service provides an alternate backup tool for SAP HANA, where database and log backups are streamed into the Azure Backup Service. Some customers would like to combine the streaming backint-based backups with regular snapshot-based backups. However, backint-based
-backups block other methods of backup, such as using a files-based backup or a storage snapshot-based backup (for example, AzAcSnap). Guidance is provided on
-the Azure Backup site on how to [Run SAP HANA native client backup to local disk on a database with Azure Backup enabled](../backup/sap-hana-db-manage.md).
+backups block other methods of backup, such as using a files-based backup or a storage snapshot-based backup (for example, AzAcSnap). Guidance is provided on the Azure Backup site on how to [Run SAP HANA Studio backup on a database with Azure Backup enabled](../backup/backup-azure-sap-hana-database.md#run-sap-hana-studio-backup-on-a-database-with-azure-backup-enabled).
The process described in the Azure Backup documentation has been implemented with AzAcSnap to automatically do the following steps:
the configuration file directly.
# [Oracle](#tab/oracle)
-When adding an *Oracle database* to the configuration, the following values are required:
+When you add an *Oracle database* to the configuration, the following values are required:
- **Oracle DB Server's Address** = The database server hostname or IP address. - **SID** = The database System ID.
When adding an *Oracle database* to the configuration, the following values are
# [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)
-When adding *HLI Storage* to a database section, the following values are required:
+When you add *HLI Storage* to a database section, the following values are required:
- **Storage User Name** = This value is the user name used to establish the SSH connection to the Storage. - **Storage IP Address** = The address of the Storage system.
When adding *HLI Storage* to a database section, the following values are requir
# [Azure NetApp Files (with VM)](#tab/azure-netapp-files)
-When adding *ANF Storage* to a database section, the following values are required:
+When you add *ANF Storage* to a database section, the following values are required:
-- **Service Principal Authentication filename** = this is the `authfile.json` file generated in the Cloud Shell when configuring
+- **Service Principal Authentication filename** = the `authfile.json` file generated in the Cloud Shell when configuring
communication with Azure NetApp Files storage.-- **Full ANF Storage Volume Resource ID** = the full Resource ID of the Volume being snapshot. This can be retrieved from:
+- **Full ANF Storage Volume Resource ID** = the full Resource ID of the volume being snapshotted. This string can be retrieved from:
Azure portal -> ANF -> Volume -> Settings/Properties -> Resource ID
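For reference, a full Azure NetApp Files volume Resource ID follows this general pattern (shown here with placeholder names):

```
/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<netapp-account>/capacityPools/<capacity-pool>/volumes/<volume-name>
```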
For **Azure Large Instance** system, this information is provided by Microsoft S
is made available in an Excel file that is provided during handover. Open a service request if you need to be provided this information again.
-The following is an example only and is the content of the file as generated by the configuration session above, update all the values accordingly.
+The following output is an example configuration file only and is the content of the file as generated by the configuration session above. Update all the values accordingly.
```bash cat azacsnap.json
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
This page lists major changes made to AzAcSnap to provide new functionality or r
> [!IMPORTANT] > AzAcSnap 6 brings a new release model for AzAcSnap and includes fully supported GA features and Preview features in a single release.
-Since AzAcSnap v5.0 was released as GA in April-2021, there have been 8 releases of AzAcSnap across two branches. Our goal with the new release model is to align with how Azure components are released. This allows moving features from Preview to GA (without having to move an entire branch), and introduce new Preview features (without having to create a new branch). From AzAcSnap 6 we will have a single branch with fully supported GA features and Preview features (which are subject to Microsoft's Preview Ts&Cs). ItΓÇÖs important to note customers cannot accidentally use Preview features, and must enable them with the `--preview` command line option. This means the next release will be AzAcSnap 7, which could include; patches (if necessary) for GA features, current Preview features moving to GA, or new Preview features.
+Since AzAcSnap v5.0 was released as GA in April 2021, there have been eight releases of AzAcSnap across two branches. Our goal with the new release model is to align with how Azure components are released. This model allows moving features from Preview to GA (without having to move an entire branch) and introducing new Preview features (without having to create a new branch). From AzAcSnap 6 we will have a single branch with fully supported GA features and Preview features (which are subject to Microsoft's Preview Ts&Cs). It's important to note that customers can't accidentally use Preview features; they must enable them with the `--preview` command line option. This means the next release will be AzAcSnap 7, which could include patches (if necessary) for GA features, current Preview features moving to GA, or new Preview features.
AzAcSnap 6 is being released with the following fixes and improvements:
azure-netapp-files Azure Netapp Files Mount Unmount Volumes For Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md
Title: Mount Azure NetApp Files volumes for virtual machines | Microsoft Docs
-description: Learn how to mount an Azure NetApp Files volume for Windows or Linux virtual machines.
+ Title: Mount NFS volumes for virtual machines | Microsoft Docs
+description: Learn how to mount an NFS volume for Windows or Linux virtual machines.
Previously updated : 06/13/2022 Last updated : 08/18/2022
-# Mount a volume for Windows or Linux VMs
+# Mount NFS volumes for Linux or Windows VMs
-You can mount an Azure NetApp Files file for Windows or Linux virtual machines (VMs). The mount instructions for Linux virtual machines are available on Azure NetApp Files.
+You can mount an NFS volume for Windows or Linux virtual machines (VMs).
## Requirements
You can mount an Azure NetApp Files file for Windows or Linux virtual machines (
* 4045 TCP/UDP = `nlockmgr` (NFSv3 only) * 4046 TCP/UDP = `status` (NFSv3 only)
-## Steps
+## Mount NFS volumes on Linux clients
-1. Click the **Volumes** blade, and then select the volume for which you want to mount.
-2. Click **Mount instructions** from the selected volume, and then follow the instructions to mount the volume.
+1. Review the [Linux NFS mount options best practices](performance-linux-mount-options.md).
+2. Select the **Volumes** pane and then the NFS volume that you want to mount.
+3. To mount the NFS volume using a Linux client, select **Mount instructions** from the selected volume. Follow the displayed instructions to mount the volume.
+ :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png" alt-text="Screenshot of Mount instructions." lightbox="../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png":::
+ * Ensure that you use the `vers` option in the `mount` command to specify the NFS protocol version that corresponds to the volume you want to mount.
+ For example, if the NFS version is NFSv4.1:
+ `sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp,sec=sys $MOUNTTARGETIPADDRESS:/$VOLUMENAME $MOUNTPOINT`
+ * If you use NFSv4.1 and your configuration requires using VMs with the same host names (for example, in a DR test), refer to [Configure two VMs with the same hostname to access NFSv4.1 volumes](configure-nfs-clients.md#configure-two-vms-with-the-same-hostname-to-access-nfsv41-volumes).
+4. If you want the volume mounted automatically when an Azure VM is started or rebooted, add an entry to the `/etc/fstab` file on the host.
+ For example: `$ANFIP:/$FILEPATH /$MOUNTPOINT nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=3,tcp,_netdev 0 0`
+ * `$ANFIP` is the IP address of the Azure NetApp Files volume found in the volume properties menu
+ * `$FILEPATH` is the export path of the Azure NetApp Files volume
+ * `$MOUNTPOINT` is the directory created on the Linux host used to mount the NFS export
+5. If you want to mount an NFS Kerberos volume, refer to [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) for additional details.
+6. You can also access SMB volumes from Unix and Linux clients via NFS by setting the protocol access for the volume to "dual-protocol". This setting allows access to the volume via NFS (NFSv3 or NFSv4.1) and SMB. See [Create a dual-protocol volume](create-volumes-dual-protocol.md) for details. Take note of the security style mappings table. Mounting a dual-protocol volume from Unix and Linux clients relies on the same procedure as regular NFS volumes.
- ![Mount instructions NFS](../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png)
+## Mount NFS volumes on Windows clients
- ![Mount instructions SMB](../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png)
- * If you are mounting an NFS volume, ensure that you use the `vers` option in the `mount` command to specify the NFS protocol version that corresponds to the volume you want to mount.
- * If you are using NFSv4.1, use the following command to mount your file system: `sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp,sec=sys $MOUNTTARGETIPADDRESS:/$VOLUMENAME $MOUNTPOINT`
- > [!NOTE]
- > If you use NFSv4.1 and your use case involves leveraging VMs with the same hostnames (for example, in a DR test), see [Configure two VMs with the same hostname to access NFSv4.1 volumes](configure-nfs-clients.md#configure-two-vms-with-the-same-hostname-to-access-nfsv41-volumes).
+Mounting NFSv4.1 volumes on Windows clients is supported. For more information, see [Network File System overview](/windows-server/storage/nfs/nfs-overview).
-3. If you want to have an NFS volume automatically mounted when an Azure VM is started or rebooted, add an entry to the `/etc/fstab` file on the host.
+If you want to mount NFSv3 volumes on a Windows client using NFS:
- For example: `$ANFIP:/$FILEPATH /$MOUNTPOINT nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=3,tcp,_netdev 0 0`
-
- * `$ANFIP` is the IP address of the Azure NetApp Files volume found in the volume properties blade.
- * `$FILEPATH` is the export path of the Azure NetApp Files volume.
- * `$MOUNTPOINT` is the directory created on the Linux host used to mount the NFS export.
-
-4. If you want to mount the volume to Windows using NFS:
-
- > [!NOTE]
- > One alternative to mounting an NFS volume on Windows is to [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md), allowing the native access of SMB for Windows and NFS for Linux. However, if that is not possible, you can mount the NFS volume on Windows using the steps below.
-
- * Set the permissions to allow the volume to be mounted on Windows
- * Follow the steps to [Configure Unix permissions and change ownership mode for NFS and dual-protocol volumes](configure-unix-permissions-change-ownership-mode.md#unix-permissions) and set the permissions to '777' or '775'.
- * Install NFS client on Windows
- * Open PowerShell
- * type: `Install-WindowsFeature -Name NFS-Client`
- * Mount the volume via the NFS client on Windows
- * Obtain the 'mount path' of the volume
- * Open a Command prompt
- * type: `mount -o anon -o mtype=hard \\$ANFIP\$FILEPATH $DRIVELETTER:\`
- * `$ANFIP` is the IP address of the Azure NetApp Files volume found in the volume properties blade.
- * `$FILEPATH` is the export path of the Azure NetApp Files volume.
- * `$DRIVELETTER` is the drive letter where you would like the volume mounted within Windows.
-
-5. If you want to mount an NFS Kerberos volume, see [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) for additional details.
+1. [Mount the volume onto a Unix or Linux VM first](#mount-nfs-volumes-on-linux-clients).
+1. Run a `chmod 777` or `chmod 775` command against the volume.
+1. Mount the volume via the NFS client on Windows using the mount option `mtype=hard` to reduce connection issues.
+ See [Windows command line utility for mounting NFS volumes](/windows-server/administration/windows-commands/mount) for more detail.
+ For example: `Mount -o rsize=256 -o wsize=256 -o mtype=hard \\10.x.x.x\testvol X:* `
+1. You can also access NFS volumes from Windows clients via SMB by setting the protocol access for the volume to "dual-protocol". This setting allows access to the volume via SMB and NFS (NFSv3 or NFSv4.1) and will result in better performance than using the NFS client on Windows with an NFS volume. See [Create a dual-protocol volume](create-volumes-dual-protocol.md) for details, and take note of the security style mappings table. Mounting a dual-protocol volume from Windows clients relies on the same procedure as regular SMB volumes.
## Next steps
+* [Mount SMB volumes for Windows or Linux virtual machines](mount-volumes-vms-smb.md)
+* [Linux NFS mount options best practices](performance-linux-mount-options.md)
* [Configure NFSv4.1 default domain for Azure NetApp Files](azure-netapp-files-configure-nfsv41-domain.md) * [NFS FAQs](faq-nfs.md) * [Network File System overview](/windows-server/storage/nfs/nfs-overview)
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 08/11/2022 Last updated : 08/19/2022 # Solution architectures using Azure NetApp Files
This section provides references for solutions for Linux OSS applications and da
* [General mainframe refactor to Azure - Azure Example Scenarios](/azure/architecture/example-scenario/mainframe/general-mainframe-refactor) * [Refactor mainframe applications with Advanced - Azure Example Scenarios](/azure/architecture/example-scenario/mainframe/refactor-mainframe-applications-advanced)
+* [Refactor mainframe applications with Astadia - Azure Example Scenarios](/azure/architecture/example-scenario/mainframe/refactor-mainframe-applications-astadia)
+* [Refactor mainframe computer systems that run Adabas & Natural - Azure Example Scenarios](/azure/architecture/example-scenario/mainframe/refactor-adabas-aks)
+* [Refactor IBM z/OS mainframe coupling facility (CF) to Azure - Azure Example Scenarios](/azure/architecture/reference-architectures/zos/refactor-zos-coupling-facility)
+* [Refactor mainframe applications to Azure with Raincode compilers - Azure Example Scenarios](/azure/architecture/reference-architectures/app-modernization/raincode-reference-architecture)
+ ### Oracle
This section provides references for Virtual Desktop infrastructure solutions.
* [Azure Virtual Desktop at enterprise scale](/azure/architecture/example-scenario/wvd/windows-virtual-desktop) * [Microsoft FSLogix for the enterprise - Azure NetApp Files best practices](/azure/architecture/example-scenario/wvd/windows-virtual-desktop-fslogix#azure-netapp-files-best-practices) * [Setting up Azure NetApp Files for MSIX App Attach](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/setting-up-azure-netapp-files-for-msix-app-attach-step-by-step/m-p/1990021)
+* [Multiple forests with AD DS and Azure AD - Azure Example Scenarios](/azure/architecture/example-scenario/wvd/multi-forest)
+* [Multiregion Business Continuity and Disaster Recovery (BCDR) for Azure Virtual Desktop - Azure Example Scenarios](/azure/architecture/example-scenario/wvd/azure-virtual-desktop-multi-region-bcdr)
+* [Deploy Esri ArcGIS Pro in Azure Virtual Desktop - Azure Example Scenarios](/azure/architecture/example-scenario/data/esri-arcgis-azure-virtual-desktop)
+ ### Citrix
This section provides solutions for Azure platform services.
### Azure Red Hat OpenShift
* [Using Trident to Automate Azure NetApp Files from OpenShift](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/using-trident-to-automate-azure-netapp-files-from-openshift/ba-p/2367351)
+* [Deploy IBM Maximo Application Suite on Azure - Azure Example Scenarios](/azure/architecture/example-scenario/apps/deploy-ibm-maximo-application-suite)
### Azure Batch
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 07/26/2021 Last updated : 08/19/2022
Azure NetApp Files backup is supported for the following regions:
* Japan East
* North Europe
* South Central US
-* UK South
* West Europe
* West US
* West US 2
azure-netapp-files Configure Kerberos Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-kerberos-encryption.md
The following requirements apply to NFSv4.1 client encryption:
* Ensure that User Principal Names for user accounts do *not* end with a `$` symbol (for example, user$@REALM.COM). <!-- Not using 'contoso.com' in this example; per Mark, A customers REALM namespace may be different from their AD domain name space. --> For [Group managed service accounts](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts) (gMSA), you need to remove the trailing `$` from the User Principal Name before the account can be used with the Azure NetApp Files Kerberos feature. - ## Create an NFS Kerberos Volume 1. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create the NFSv4.1 volume.
azure-netapp-files Faq Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-performance.md
Previously updated : 02/07/2022 Last updated : 08/18/2022 # Performance FAQs for Azure NetApp Files
Azure NetApp Files provides volume performance metrics. You can also use Azure M
See [Performance impact of Kerberos on NFSv4.1 volumes](performance-impact-kerberos.md) for information about security options for NFSv4.1, the performance vectors tested, and the expected performance impact.
+## What's the performance impact of using `nconnect` with Kerberos?
++ ## Does Azure NetApp Files support SMB Direct? No, Azure NetApp Files does not support SMB Direct.
azure-netapp-files Mount Volumes Vms Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/mount-volumes-vms-smb.md
+
+ Title: Mount SMB volumes for Windows VMs | Microsoft Docs
+description: Learn how to mount SMB volumes for Windows virtual machines.
+++++ Last updated : 08/18/2022+
+# Mount SMB volumes for Windows VMs
+
+You can mount an SMB volume for Windows virtual machines (VMs).
+
+## Mount SMB volumes on a Windows client
+
+1. Select the **Volumes** menu and then the SMB volume that you want to mount.
+1. To mount the SMB volume using a Windows client, select **Mount instructions** from the selected volume. Follow the displayed instructions to mount the volume.
+ :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png" alt-text="Screenshot of Mount instructions." lightbox="../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png":::
+
+## Next steps
+
+* [Mount NFS volumes for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md)
+* [SMB FAQs](faq-smb.md)
+* [Network File System overview](/windows-server/storage/nfs/nfs-overview)
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-mount-options.md
na Previously updated : 05/05/2022 Last updated : 08/19/2022 # Linux NFS mount options best practices for Azure NetApp Files
When you use `nconnect`, keep the following rules in mind:
For details, see [Linux concurrency best practices for Azure NetApp Files](performance-linux-concurrency-session-slots.md).
+### `Nconnect` considerations
++ ## `Rsize` and `Wsize` Examples in this section provide information about how to approach performance tuning. You might need to make adjustments to suit your specific application needs.
sudo vi /etc/fstab
10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 ```
-Also for example, SAS Viya recommends a 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased read-ahead for the NFS mounts. See [NFS read-ahead best practices for Azure NetApp Files](performance-linux-nfs-read-ahead.md) for details.
+For example, SAS Viya recommends 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased read-ahead for the NFS mounts. See [NFS read-ahead best practices for Azure NetApp Files](performance-linux-nfs-read-ahead.md) for details.
The following considerations apply to the use of `rsize` and `wsize`:
The attributes `acregmin`, `acregmax`, `acdirmin`, and `acdirmax` control the co
For example, consider the default `acregmin` and `acregmax` values, 3 and 30 seconds, respectively. For instance, the attributes are repeatedly evaluated for the files in a directory. After 3 seconds, the NFS service is queried for freshness. If the attributes are deemed valid, the client doubles the trusted time to 6 seconds, then 12 seconds, then 24 seconds, and finally, because the maximum is set to 30, to 30 seconds. From that point on, until the cached attributes are deemed out of date (at which point the cycle starts over), the trusted period is 30 seconds, the value specified by `acregmax`.
-There are other cases that can benefit from a similar set of mount options, even when there is no complete ownership by the clients, for example, if the clients use the data as read only and data update is managed through another path. For applications that use grids of clients like EDA, web hosting and movie rendering and have relatively static data sets (EDA tools or libraries, web content, texture data), the typical behavior is that the data set is largely cached on the clients. There are very few reads and no writes. There will be many `getattr`/access calls coming back to storage. These data sets are typically updated through another client mounting the file systems and periodically pushing content updates.
+There are other cases that can benefit from a similar set of mount options, even when there's no complete ownership by the clients, for example, if the clients use the data as read only and data update is managed through another path. For applications that use grids of clients like EDA, web hosting and movie rendering and have relatively static data sets (EDA tools or libraries, web content, texture data), the typical behavior is that the data set is largely cached on the clients. There are few reads and no writes. There will be many `getattr`/access calls coming back to storage. These data sets are typically updated through another client mounting the file systems and periodically pushing content updates.
-In these cases, there is a known lag in picking up new content and the application still works with potentially out-of-date data. In these cases, `nocto` and `actimeo` can be used to control the period where out-of-data date can be managed. For example, in EDA tools and libraries, `actimeo=600` works well because this data is typically updated infrequently. For small web hosting where clients need to see their data updates timely as they are editing their sites, `actimeo=10` might be acceptable. For large-scale web sites where there is content pushed to multiple file systems, `actimeo=60` might be acceptable.
+In these cases, there's a known lag in picking up new content, and the application still works with potentially out-of-date data. In such cases, `nocto` and `actimeo` can be used to control how stale the data is allowed to be. For example, in EDA tools and libraries, `actimeo=600` works well because this data is typically updated infrequently. For small web hosting where clients need to see their data updates in a timely manner as they're editing their sites, `actimeo=10` might be acceptable. For large-scale web sites where there's content pushed to multiple file systems, `actimeo=60` might be acceptable.
Using these mount options significantly reduces the workload to storage in these cases. (For example, a recent EDA experience reduced IOPs to the tool volume from >150 K to ~6 K.) Applications can run significantly faster because they can trust the data in memory. (Memory access time is nanoseconds vs. hundreds of microseconds for `getattr`/access on a fast network.)
Close-to-open consistency (the `cto` mount option) ensures that no matter the st
* When a directory is crawled (`ls`, `ls -l` for example), a certain set of RPC calls is issued. The NFS server shares its view of the filesystem. As long as `cto` is used by all NFS clients accessing a given NFS export, all clients will see the same list of files and directories therein. The freshness of the attributes of the files in the directory is controlled by the [attribute cache timers](#how-attribute-cache-timers-work). In other words, as long as `cto` is used, files appear to remote clients as soon as the file is created and the file lands on the storage.
* When a file is opened, the content of the file is guaranteed fresh from the perspective of the NFS server.
- If there is a race condition where the content has not finished flushing from Machine 1 when a file is opened on Machine 2, Machine 2 will only receive the data present on the server at the time of the open. In this case, Machine 2 will not retrieve more data from the file until the `acreg` timer is reached, and Machine 2 checks its cache coherency from the server again. This scenario can be observed using a tail `-f` from Machine 2 when the file is still being written to from Machine 1.
+ If there's a race condition where the content has not finished flushing from Machine 1 when a file is opened on Machine 2, Machine 2 will only receive the data present on the server at the time of the open. In this case, Machine 2 will not retrieve more data from the file until the `acreg` timer is reached, and Machine 2 checks its cache coherency from the server again. This scenario can be observed using a tail `-f` from Machine 2 when the file is still being written to from Machine 1.
### No close-to-open consistency
azure-resource-manager Publish Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-notifications.md
Title: Managed apps with notifications
-description: Configure managed applications with webhook endpoints to receive notifications about creates, updates, deletes, and errors on the managed application instances.
+ Title: Azure managed applications with notifications
+description: Configure an Azure managed application with webhook endpoints to receive notifications about creates, updates, deletes, and errors on the managed application instances.
Previously updated : 11/01/2019 Last updated : 08/18/2022 + # Azure managed applications with notifications
-Azure managed application notifications allow publishers to automate actions based on lifecycle events of the managed application instances. Publishers can specify custom notification webhook endpoints to receive event notifications about new and existing managed application instances. Publishers can set up custom workflows at the time of application provisioning, updates, and deletion.
+Azure managed application notifications allow publishers to automate actions based on lifecycle events of the managed application instances. Publishers can specify a custom notification webhook endpoint to receive event notifications about new and existing managed application instances. Publishers can set up custom workflows at the time of application provisioning, updates, and deletion.
## Getting started
-To start receiving managed applications, spin up a public HTTPS endpoint and specify it when you publish the service catalog application definition or Azure Marketplace offer.
+
+To start receiving managed application notifications, create a public HTTPS endpoint. Specify the endpoint when you publish the service catalog application definition or Microsoft Azure Marketplace offer.
Here are the recommended steps to get started quickly:
-1. Spin up a public HTTPS endpoint that logs the incoming POST requests and returns `200 OK`.
-2. Add the endpoint to the service catalog application definition or Azure Marketplace offer as explained later in this article.
-3. Create a managed application instance that references the application definition or Azure Marketplace offer.
-4. Validate that the notifications are being received.
-5. Enable authorization as explained in the **Endpoint authentication** section of this article.
-6. Follow the instructions in the **Notification schema** section of this article to parse the notification requests and implement your business logic based on the notification.
+
+1. Create a public HTTPS endpoint that logs the incoming POST requests and returns `200 OK`. A minimal sketch of such an endpoint follows this list.
+1. Add the endpoint to the service catalog application definition or Azure Marketplace offer as explained later in this article.
+1. Create a managed application instance that references the application definition or Azure Marketplace offer.
+1. Validate that the notifications are being received.
+1. Enable authorization as explained in the [Endpoint authentication](#endpoint-authentication) section of this article.
+1. Follow the instructions in the [Notification schema](#notification-schema) section of this article to parse the notification requests and implement your business logic based on the notification.
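For step 1, the following minimal sketch shows one way to stand up such an endpoint with Python's standard library. It's illustrative only, not an official sample; the port is a placeholder, and in practice you'd expose the endpoint over public HTTPS. Azure sends the notifications as POST requests to the `/resource` path appended to the URI you register, as described later in this article.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and log the incoming notification body.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8")
        print(f"Notification received on {self.path}")
        if body:
            try:
                print(json.dumps(json.loads(body), indent=2))
            except ValueError:
                print(body)

        # Acknowledge the notification so the service doesn't retry.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Port 8080 is a placeholder; front this with public HTTPS in practice.
    HTTPServer(("0.0.0.0", 8080), NotificationHandler).serve_forever()
```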
## Add service catalog application definition notifications
-#### Azure portal
+
+The following examples show how to add a notification endpoint URI using the portal or REST API.
+
+### Azure portal
+ To get started, see [Publish a service catalog application through Azure portal](./publish-portal.md).
-![Service catalog application definition notifications in the Azure portal](./media/publish-notifications/service-catalog-notifications.png)
-#### REST API
+### REST API
> [!NOTE]
-> Currently, you can supply only one endpoint in the `notificationEndpoints` in the application definition properties.
+> You can only supply one endpoint in the `notificationEndpoints` property of the managed application definition.
``` JSON
- {
- "properties": {
- "isEnabled": true,
- "lockLevel": "ReadOnly",
- "displayName": "Sample Application Definition",
- "description": "Notification-enabled application definition.",
- "notificationPolicy": {
- "notificationEndpoints": [
- {
- "uri": "https://isv.azurewebsites.net:1214?sig=unique_token"
- }
- ]
- },
- "authorizations": [
- {
- "principalId": "d6b7fbd3-4d99-43fe-8a7a-f13aef11dc18",
- "roleDefinitionId": "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
- },
- ...
-
+{
+ "properties": {
+ "isEnabled": true,
+ "lockLevel": "ReadOnly",
+ "displayName": "Sample Application Definition",
+ "description": "Notification-enabled application definition.",
+ "notificationPolicy": {
+ "notificationEndpoints": [
+ {
+ "uri": "https://isv.azurewebsites.net:1214?sig=unique_token"
+ }
+ ]
+ },
+ "authorizations": [
+ {
+ "principalId": "d6b7fbd3-4d99-43fe-8a7a-f13aef11dc18",
+ "roleDefinitionId": "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
+ },
+ ...
```+ ## Add Azure Marketplace managed application notifications+ For more information, see [Create an Azure application offer](../../marketplace/azure-app-offer-setup.md).
-![Azure Marketplace managed application notifications in the Azure portal](./media/publish-notifications/marketplace-notifications.png)
+ ## Event triggers
-The following table describes all the possible combinations of EventType and ProvisioningState and their triggers:
+
+The following table describes all the possible combinations of `eventType` and `provisioningState` and their triggers:
EventType | ProvisioningState | Trigger for notification ||
PATCH | Succeeded | After a successful PATCH on the managed application instance
DELETE | Deleting | As soon as the user initiates a DELETE of a managed app instance.
DELETE | Deleted | After the full and successful deletion of the managed application.
DELETE | Failed | After any error during the deprovisioning process that blocks the deletion.
+
## Notification schema
-When you spin up your webhook endpoint to handle notifications, you'll need to parse the payload to get important properties to then act upon the notification. Service catalog and Azure Marketplace managed application notifications provide many of the same properties. Two small differences are outlined in the table that follows the samples.
-#### Service catalog application notification schema
-Here's a sample service catalog notification after the successful provisioning of a managed application instance:
+When you create your webhook endpoint to handle notifications, you'll need to parse the payload to get the important properties and then act on the notification. Service catalog and Azure Marketplace managed application notifications provide many of the same properties, but there are some differences. The `applicationDefinitionId` property only applies to service catalog. The `billingDetails` and `plan` properties only apply to Azure Marketplace.
+
+Azure appends `/resource` to the notification endpoint URI you provided in the managed application definition. The webhook endpoint must be able to handle notifications on the `/resource` URI. For example, if you provided a notification endpoint URI like `https://fabrikam.com` then the webhook endpoint URI is `https://fabrikam.com/resource`.
+
+### Service catalog application notification schema
+
+The following sample shows a service catalog notification after the successful provisioning of a managed application instance.
+ ``` HTTP POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_parameter_value} HTTP/1.1 {
- "eventType": "PUT",
- "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
- "eventTime": "2019-08-14T19:20:08.1707163Z",
- "provisioningState": "Succeeded",
- "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>"
+ "eventType": "PUT",
+ "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "eventTime": "2019-08-14T19:20:08.1707163Z",
+ "provisioningState": "Succeeded",
+ "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>"
}- ``` If the provisioning fails, a notification with the error details will be sent to the specified endpoint.
If the provisioning fails, a notification with the error details will be sent to
POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_parameter_value} HTTP/1.1 {
- "eventType": "PUT",
- "applicationId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
- "eventTime": "2019-08-14T19:20:08.1707163Z",
- "provisioningState": "Failed",
- "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>",
- "error": {
- "code": "ErrorCode",
- "message": "error message",
- "details": [
- {
- "code": "DetailedErrorCode",
- "message": "error message"
- }
- ]
- }
+ "eventType": "PUT",
+ "applicationId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "eventTime": "2019-08-14T19:20:08.1707163Z",
+ "provisioningState": "Failed",
+ "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>",
+ "error": {
+ "code": "ErrorCode",
+ "message": "error message",
+ "details": [
+ {
+ "code": "DetailedErrorCode",
+ "message": "error message"
+ }
+ ]
+ }
}- ```
-#### Azure Marketplace application notification schema
+### Azure Marketplace application notification schema
+
+The following sample shows an Azure Marketplace notification after the successful provisioning of a managed application instance.
-Here's a sample service catalog notification after the successful provisioning of a managed application instance:
``` HTTP POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_parameter_value} HTTP/1.1 {
- "eventType": "PUT",
- "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
- "eventTime": "2019-08-14T19:20:08.1707163Z",
- "provisioningState": "Succeeded",
- "billingDetails": {
- "resourceUsageId":"<resourceUsageId>"
- },
- "plan": {
- "publisher": "publisherId",
- "product": "offer",
- "name": "skuName",
- "version": "1.0.1"
- }
+ "eventType": "PUT",
+ "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "eventTime": "2019-08-14T19:20:08.1707163Z",
+ "provisioningState": "Succeeded",
+ "billingDetails": {
+ "resourceUsageId": "<resourceUsageId>"
+ },
+ "plan": {
+ "publisher": "publisherId",
+ "product": "offer",
+ "name": "skuName",
+ "version": "1.0.1"
+ }
}- ``` If the provisioning fails, a notification with the error details will be sent to the specified endpoint.
If the provisioning fails, a notification with the error details will be sent to
POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_parameter_value} HTTP/1.1 {
- "eventType": "PUT",
- "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
- "eventTime": "2019-08-14T19:20:08.1707163Z",
- "provisioningState": "Failed",
- "billingDetails": {
- "resourceUsageId":"<resourceUsageId>"
- },
- "plan": {
- "publisher": "publisherId",
- "product": "offer",
- "name": "skuName",
- "version": "1.0.1"
- },
- "error": {
- "code": "ErrorCode",
- "message": "error message",
- "details": [
- {
- "code": "DetailedErrorCode",
- "message": "error message"
- }
- ]
- }
+ "eventType": "PUT",
+ "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "eventTime": "2019-08-14T19:20:08.1707163Z",
+ "provisioningState": "Failed",
+ "billingDetails": {
+ "resourceUsageId": "<resourceUsageId>"
+ },
+ "plan": {
+ "publisher": "publisherId",
+ "product": "offer",
+ "name": "skuName",
+ "version": "1.0.1"
+ },
+ "error": {
+ "code": "ErrorCode",
+ "message": "error message",
+ "details": [
+ {
+ "code": "DetailedErrorCode",
+ "message": "error message"
+ }
+ ]
+ }
}- ```
-Parameter | Description
+Property | Description
|
-eventType | The type of event that triggered the notification. (For example, PUT, PATCH, DELETE.)
-applicationId | The fully qualified resource identifier of the managed application for which the notification was triggered.
-eventTime | The timestamp of the event that triggered the notification. (Date and time in UTC ISO 8601 format.)
-provisioningState | The provisioning state of the managed application instance. (For example, Succeeded, Failed, Deleting, Deleted.)
-error | *Specified only when the provisioningState is Failed*. Contains the error code, message, and details of the issue that caused the failure.
-applicationDefinitionId | *Specified only for service catalog managed applications*. Represents the fully qualified resource identifier of the application definition for which the managed application instance was provisioned.
-plan | *Specified only for Azure Marketplace managed applications*. Represents the publisher, offer, SKU, and version of the managed application instance.
-billingDetails | *Specified only for Azure Marketplace managed applications.* The billing details of the managed application instance. Contains the resourceUsageId that you can use to query Azure Marketplace for usage details.
+`eventType` | The type of event that triggered the notification. (For example, PUT, PATCH, DELETE.)
+`applicationId` | The fully qualified resource identifier of the managed application for which the notification was triggered.
+`eventTime` | The timestamp of the event that triggered the notification. (Date and time in UTC ISO 8601 format.)
+`provisioningState` | The provisioning state of the managed application instance. For example, Succeeded, Failed, Deleting, Deleted.
+`applicationDefinitionId` | _Specified only for service catalog managed applications_. Represents the fully qualified resource identifier of the application definition for which the managed application instance was provisioned.
+`billingDetails` | _Specified only for Azure Marketplace managed applications_. The billing details of the managed application instance. Contains the `resourceUsageId` that you can use to query Azure Marketplace for usage details.
+`plan` | _Specified only for Azure Marketplace managed applications_. Represents the publisher, offer, SKU, and version of the managed application instance.
+`error` | _Specified only when the provisioningState is Failed_. Contains the error code, message, and details of the issue that caused the failure.
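To illustrate how a webhook might consume these properties, here's a minimal parsing sketch. The function name and print statements are hypothetical; the sketch only assumes the payload shapes shown in the samples above.

```python
import json

def handle_notification(raw_body: str) -> None:
    """Parse a managed application notification and act on it (illustrative only)."""
    notification = json.loads(raw_body)

    event_type = notification["eventType"]       # PUT, PATCH, or DELETE
    state = notification["provisioningState"]    # Succeeded, Failed, Deleting, Deleted
    application_id = notification["applicationId"]

    if state == "Failed":
        error = notification.get("error", {})
        print(f"{event_type} failed for {application_id}: "
              f"{error.get('code')} - {error.get('message')}")
        return

    # applicationDefinitionId is present only for service catalog notifications;
    # billingDetails and plan are present only for Azure Marketplace notifications.
    if "applicationDefinitionId" in notification:
        print(f"Service catalog app {application_id}: {event_type} {state}")
    else:
        plan = notification.get("plan", {})
        print(f"Marketplace app {application_id} ({plan.get('product')}): {event_type} {state}")
```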
## Endpoint authentication+ To secure the webhook endpoint and ensure the authenticity of the notification:
-1. Provide a query parameter on top of the webhook URI, like this: https\://your-endpoint.com?sig=Guid. With each notification, check that the query parameter `sig` has the expected value `Guid`.
-2. Issue a GET on the managed application instance by using applicationId. Validate that the provisioningState matches the provisioningState of the notification to ensure consistency.
+
+1. Provide a query parameter on top of the webhook URI, like this: `https://your-endpoint.com?sig=Guid`. With each notification, check that the query parameter `sig` has the expected value `Guid`.
+1. Issue a GET on the managed application instance by using `applicationId`. Validate that the `provisioningState` matches the `provisioningState` of the notification to ensure consistency.
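A sketch of both checks is shown below. It assumes the `requests` package, a `sig` value matching the one embedded in the registered webhook URI, and that you already have an Azure Resource Manager access token; the api-version shown is an assumption and may need to match what your environment supports.

```python
from urllib.parse import parse_qs, urlparse

import requests

EXPECTED_SIG = "unique_token"  # the value embedded in the registered webhook URI

def is_authentic(request_url: str) -> bool:
    """Check that the notification carries the sig value you registered."""
    query = parse_qs(urlparse(request_url).query)
    return query.get("sig", [None])[0] == EXPECTED_SIG

def state_matches(application_id: str, notified_state: str, arm_token: str) -> bool:
    """Cross-check the notification against the live resource with an ARM GET.

    The api-version is an assumption; use one supported by Microsoft.Solutions
    in your environment.
    """
    url = f"https://management.azure.com{application_id}?api-version=2019-07-01"
    response = requests.get(url, headers={"Authorization": f"Bearer {arm_token}"})
    response.raise_for_status()
    return response.json()["properties"]["provisioningState"] == notified_state
```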
## Notification retries
-The Managed Application Notification service expects a `200 OK` response from the webhook endpoint to the notification. The notification service will retry if the webhook endpoint returns an HTTP error code greater than or equal to 500, if it returns an error code of 429, or if the endpoint is temporarily unreachable. If the webhook endpoint doesn't become available within 10 hours, the notification message will be dropped and the retries will stop.
+The managed application notification service expects a `200 OK` response from the webhook endpoint to the notification. The notification service will retry if the webhook endpoint returns an HTTP error code greater than or equal to 500, returns an error code of 429, or is temporarily unreachable. If the webhook endpoint doesn't become available within 10 hours, the notification message will be dropped, and the retries will stop.
azure-resource-manager Template Tutorial Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-quickstart-template.md
Title: Tutorial - Use quickstart templates
-description: Learn how to use Azure Quickstart templates to complete your template development.
+description: Learn how to use Azure Quickstart Templates to complete your template development.
Previously updated : 03/27/2020 Last updated : 08/17/2022
-# Tutorial: Use Azure Quickstart templates
+# Tutorial: Use Azure Quickstart Templates
-[Azure Quickstart templates](https://azure.microsoft.com/resources/templates/) is a repository of community contributed templates. You can use the sample templates in your template development. In this tutorial, you find a website resource definition, and add it to your own template. It takes about **12 minutes** to complete.
+[Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/) is a repository of community-contributed templates. You can use the sample templates in your template development. In this tutorial, you find a website resource definition and add it to your own template. This tutorial takes about **12 minutes** to complete.
## Prerequisites We recommend that you complete the [tutorial about exported templates](template-tutorial-export-template.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure Command-Line Interface (CLI). For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
-At the end of the previous tutorial, your template had the following JSON:
+At the end of the previous tutorial, your template had the following JSON file:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/export-template/azuredeploy.json":::
This template works for deploying storage accounts and app service plans, but yo
## Find template
-1. Open [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/)
-1. In **Search**, enter _deploy linux web app_.
+1. Open [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/).
1. Select the tile with the title **Deploy a basic Linux web app**. If you have trouble finding it, here's the [direct link](https://azure.microsoft.com/resources/templates/webapp-basic-linux/).
1. Select **Browse on GitHub**.
1. Select _azuredeploy.json_.
-1. Review the template. In particular, look for the `Microsoft.Web/sites` resource.
+1. Review the template. Look for the `Microsoft.Web/sites` resource.
![Resource Manager template quickstart web site](./media/template-tutorial-quickstart-template/resource-manager-template-quickstart-template-web-site.png)
Merge the quickstart template with the existing template:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/quickstart-template/azuredeploy.json" range="1-108" highlight="32-45,49,85-100":::
-The web app name needs to be unique across Azure. To prevent having duplicate names, the `webAppPortalName` variable has been updated from `"webAppPortalName": "[concat(parameters('webAppName'), '-webapp')]"` to `"webAppPortalName": "[concat(parameters('webAppName'), uniqueString(resourceGroup().id))]"`.
+The web app name needs to be unique across Azure. To prevent having duplicate names, the `webAppPortalName` variable is updated from `"webAppPortalName": "[concat(parameters('webAppName'), '-webapp')]"` to `"webAppPortalName": "[concat(parameters('webAppName'), uniqueString(resourceGroup().id))]"`.
Add a comma at the end of the `Microsoft.Web/serverfarms` definition to separate the resource definition from the `Microsoft.Web/sites` definition. There are a couple of important features to note in this new resource.
-You'll notice it has an element named `dependsOn` that's set to the app service plan. This setting is required because the app service plan must exist before the web app is created. The `dependsOn` element tells Resource Manager how to order the resources for deployment.
+It has an element named `dependsOn` that's set to the app service plan. This setting is required because the app service plan needs to exist before the web app is created. The `dependsOn` element tells Resource Manager how to order the resources for deployment.
The `serverFarmId` property uses the [resourceId](template-functions-resource.md#resourceid) function. This function gets the unique identifier for a resource. In this case, it gets the unique identifier for the app service plan. The web app is associated with one specific app service plan.
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli az deployment group create \
az deployment group create \
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources you're creating. Use the `debug` switch to get more information for debugging.
## Clean up resources If you're moving on to the next tutorial, you don't need to delete the resource group.
-If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group.
+If you're stopping now, you might want to delete the resource group.
-1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name.
+1. From the Azure portal, select **Resource groups** from the left menu.
+2. Type the resource group name in the **Filter for any field...** text field.
+3. Check the box next to **myResourceGroup** and select **myResourceGroup** or your resource group name.
4. Select **Delete resource group** from the top menu. ## Next steps
azure-video-analyzer Connect Cameras To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/connect-cameras-to-cloud.md
You can deploy the Video Analyzer edge module to an IoT Edge device on the same
* When cameras/devices need to be shielded from exposure to the internet * When cameras/devices do not have the functionality to connect to IoT Hub independently
-* When power, space, or other considerations permit only a lightweight edge device to be deployed on-premise
+* When power, space, or other considerations permit only a lightweight edge device to be deployed on-premises
The Video Analyzer edge module does not act as a transparent gateway for messaging and telemetry from the camera to IoT Hub, but only as a transparent gateway for video.
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
# Azure Video Indexer account types
-This article gives an overview of Azure Video Indexer accounts and provides links to other articles for more details.
+This article gives an overview of Azure Video Indexer account types and provides links to other articles for more details.
-## Differences between classic, ARM, trial accounts
+## Overview
-Classic and ARM (Azure Resource Manager) are both paid accounts with similar data plane capabilities and pricing. The main difference is that classic accounts control plane is managed by Azure Video Indexer and ARM accounts control plane is managed by Azure Resource Manager.
-Going forward, ARM account support more Azure native features and integrations such as: Azure Monitor, Private endpoints, Service tag and CMK (Customer managed key).
+The first time you visit the [www.videoindexer.ai/](https://www.videoindexer.ai/) website, a trial account is automatically created. A trial Azure Video Indexer account has limitations on the number of indexing minutes, support, and SLA.
-A trial account is automatically created the first time you visit the [www.videoindexer.ai/](https://www.videoindexer.ai/) website. A trial Azure Video Indexer account has limitation on number of videos, support, and SLA. A trial Azure Video Indexer account has limitation on number of videos, support, and SLA.
+With a trial account, Azure Video Indexer provides up to 600 minutes of free indexing to users, and up to 2,400 minutes of free indexing to users who subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal).
-### To generate an access token
+> [!NOTE]
+> The trial account is not available on the Azure Government cloud.
+
+You can later create a paid account where you're not limited by the quota. Two types of paid accounts are available to you: Azure Resource Manager (ARM) (currently in preview) and classic (generally available). The main difference between the two is the account management platform. While classic accounts are built on API Management, ARM-based account management is built on Azure, which enables you to apply access control to all services with role-based access control (Azure RBAC) natively.
+
+Make sure to review [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
+
+## Connecting to Azure subscription
+
+With a trial account, you don't have to set up an Azure subscription. When creating a paid account, you need to connect Azure Video Indexer [to your Azure subscription and an Azure Media Services account](connect-to-azure.md).
+
+## To get access to your account
| | ARM-based | Classic | Trial |
|---|---|---|---|
| Get access token | [ARM REST API](https://aka.ms/avam-arm-api) | [Get access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token) | Same as classic |
| Share account | [Azure RBAC (role based access control)](../role-based-access-control/overview.md) | [Invite users](invite-users.md) | Same as classic |
-### Indexing
-
-* Free trial account: up to 10 hours of free indexing, and up to 40 hours of free indexing for API registered users.
-* Paid unlimited account: for larger scale indexing, create a new Video Indexer account connected to a paid Microsoft Azure subscription.
-
-For more details, see [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-### Create accounts
+## Create accounts
* ARM accounts: [Get started with Azure Video Indexer in Azure portal](create-account-portal.md). **The recommended paid account type is the ARM-based account**.
* Upgrade a trial account to an ARM-based account and [**import** your content for free](connect-to-azure.md#import-your-content-from-the-trial-account).
* Classic accounts: [Create classic accounts using API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account).
-* Connect a classic account to ARM: [Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md).
+* Connect a classic account to ARM: [Connect an existing classic paid Azure Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md).
## Limited access features
azure-video-indexer Animated Characters Recognition How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition-how-to.md
Title: Animated character detection with Azure Video Indexer how to
-description: This how to demonstrates how to use animated character detection with Azure Video Indexer.
+description: This topic demonstrates how to use animated character detection with Azure Video Indexer.
Last updated 12/07/2020
-# Use the animated character detection (preview) with portal and API
+# Use the animated character detection with portal and API
Azure Video Indexer supports detection, grouping, and recognition of characters in animated content. This functionality is available through the Azure portal and through the API. Review [this overview](animated-characters-recognition.md) article.
Follow these steps to connect your Custom Vision account to Azure Video Indexer,
1. Select the question mark on the top-right corner of the page and choose **API Reference**.
1. Make sure you're subscribed to API Management by clicking the **Products** tab. If you have an API connected, you can continue to the next step; otherwise, subscribe.
1. On the developer portal, select the **Complete API Reference** and browse to **Operations**.
-1. Select **Connect Custom Vision Account (PREVIEW)** and select **Try it**.
+1. Select **Connect Custom Vision Account** and select **Try it**.
1. Fill in the required fields and the access token and select **Send**. For more information about how to get the Video Indexer access token go to the [developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token), and see the [relevant documentation](video-indexer-use-apis.md#obtain-access-token-using-the-authorization-api).
Before tagging and training the model, all animated characters will be named ΓÇ£
1. Review each character group: * If the group contains unrelated images, it's recommended to delete these in the Custom Vision website.
- * If there are images that belong to a different character, change the tag on these specific images by select the image, adding the right tag and deleting the wrong tag.
+ * If there are images that belong to a different character, change the tag on these specific images by selecting the image, adding the right tag and deleting the wrong tag.
* If the group isn't correct, meaning it contains mainly non-character images or images from multiple characters, you can delete in Custom Vision website or in Azure Video Indexer insights. * The grouping algorithm will sometimes split your characters to different groups. It's therefore recommended to give all the groups that belong to the same character the same name (in Azure Video Indexer Insights), which will immediately cause all these groups to appear as on in Custom Vision website. 1. Once the group is refined, make sure the initial name you tagged it with reflects the character in the group.
Once trained, any video that will be indexed or reindexed with that model will r
1. Create an animated characters model. Use the [create animation model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Animation-Model) API.
-1. Index or re-index a video.
+1. Index or reindex a video.
Use the [re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API. 1. Customize the animated characters models.
azure-video-indexer Animated Characters Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition.md
Last updated 11/19/2019 -
-# Animated character detection (preview)
+# Animated character detection
Azure Video Indexer supports detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). This functionality is available both through the portal and through the API.
Before you start training your model, the characters are detected namelessly. As
The following diagram demonstrates the flow of the animated character detection process.
-![Flow diagram](./media/animated-characters-recognition/flow.png)
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/animated-characters-recognition/flow.png" alt-text="Image of a flow diagram." lightbox="./media/animated-characters-recognition/flow.png":::
## Accounts
Depending on a type of your Azure Video Indexer account, different feature sets
|||| |Custom Vision account|Managed behind the scenes by Azure Video Indexer. |Your Custom Vision account is connected to Azure Video Indexer.| |Number of animation models|One|Up to 100 models per account (Custom Vision limitation).|
-|Training the model|Azure Video Indexer trains the model for new characters additional examples of existing characters.|The account owner trains the model when they are ready to make changes.|
+|Training the model|Azure Video Indexer trains the model for new characters additional examples of existing characters.|The account owner trains the model when they're ready to make changes.|
|Advanced options in Custom Vision|No access to the Custom Vision portal.|You can adjust the models yourself in the Custom Vision portal.| ## Use the animated character detection with portal and API
For details, see [Use the animated character detection with portal and API](anim
## Limitations
-* Currently, the "animation identification" capability is not supported in East-Asia region.
+* Currently, the "animation identification" capability isn't supported in East-Asia region.
* Characters that appear to be small or far in the video may not be identified properly if the video's quality is poor. * The recommendation is to use a model per set of animated characters (for example per an animated series).
azure-video-indexer Compliance Privacy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compliance-privacy-security.md
+
+ Title: Azure Video Indexer compliance, privacy and security
+description: This article discusses Azure Video Indexer compliance, privacy and security.
+ Last updated : 08/18/2022+++
+# Compliance, Privacy and Security
+
+As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
+
+Before uploading any video/image to Azure Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
+
+To learn about compliance, privacy and security in Azure Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
+
+## Next steps
+
+[Azure Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
Last updated 06/10/2022
-# Tutorial: create an account with Azure portal
+# Tutorial: create an ARM-based account with Azure portal
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-This tutorial walks you through the steps of creating an Azure Video Indexer account and its accompanying resources by using the Azure portal. The created account is an Azure Resource Manager (ARM) based account. For information about different Azure Video Indexer account types, see the [Overview of account types](accounts-overview.md) topic.
+This tutorial walks you through the steps of creating an Azure Video Indexer account and its accompanying resources by using the Azure portal. The created account is an Azure Resource Manager (ARM) based account (currently in preview). For information about different Azure Video Indexer account types, see the [Overview of account types](accounts-overview.md) topic.
## Prerequisites
azure-video-indexer Observed People Featured Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-featured-clothing.md
+
+ Title: People's featured clothing
+description: This article gives an overview of featured clothing images appearing in a video.
+ Last updated : 11/15/2021+++
+# People's featured clothing (preview)
+
+Azure Video Indexer enables you to get data on the featured clothing of an observed person. The people's featured clothing feature helps enable the following scenarios:
+
+- Ads placement - using the featured clothing insight information, you can enable more targeted ads placement.
+- Video summarization - you can create a summary of the most interesting outfits appearing in the video.
+
+## Viewing featured clothing
+
+The featured clothing insight is available when indexing your file by choosing the Advanced option -> Advanced video or Advanced video + audio preset (under Video + audio indexing). Standard indexing will not include this insight.
++
+The featured clothing images are ranked based on some of the following factors: key moments of the video, general emotions from text or audio. The `id` property indicates the ranking index. For example, `"id": 1` signifies the most important featured clothing.
+
+> [!NOTE]
+> The featured clothing currently can be viewed only from the artifact file.
+
+1. In the upper-right corner, select **Download** -> **Artifact (ZIP)** to download the artifact ZIP file.
+1. Open `featuredclothing.zip`.
+
+The .zip file contains two objects:
+
+- `featuredclothing.map.json` - the file contains instances of each featured clothing, with the following properties:
+
+ - `id` - ranking index (`"id": 1` is the most important clothing).
+ - `confidence` - the score of the featured clothing.
+ - `frameIndex` - the best frame of the clothing.
+ - `timestamp` - corresponding to the frameIndex.
+ - `opBoundingBox` - bounding box of the person.
+ - `faceBoundingBox` - bounding box of the person's face, if detected.
+ - `fileName` - where the best frame of the clothing is saved.
+
+ An example of the featured clothing with `"id": 1`.
+
+ ```
+ "instances": [
+ {
+ "confidence": 0.98,
+ "faceBoundingBox": {
+ "x": 0.50158,
+ "y": 0.10508,
+ "width": 0.13589,
+ "height": 0.45372
+ },
+ "fileName": "frame_12147.jpg",
+ "frameIndex": 12147,
+ "id": 1,
+ "opBoundingBox": {
+ "x": 0.34141,
+ "y": 0.16667,
+ "width": 0.28125,
+ "height": 0.82083
+ },
+ "timestamp": "00:08:26.6311250"
+ },
+ ```
+- `featuredclothing.frames.map` - this folder contains images of the best frames that the featured clothing appeared in, corresponding to the `fileName` property in each instance in `featuredclothing.map.json`.
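As a quick way to inspect the downloaded artifact, the following sketch reads `featuredclothing.map.json` from the .zip file and prints the instances in ranking order. The file path is a placeholder, and the sketch assumes the top-level object exposes the `instances` array shown in the sample above.

```python
import json
import zipfile

# Path is a placeholder for wherever you saved the downloaded artifact.
with zipfile.ZipFile("featuredclothing.zip") as archive:
    with archive.open("featuredclothing.map.json") as f:
        data = json.load(f)

# Assumes the top-level object exposes the "instances" array shown in the sample.
instances = sorted(data.get("instances", []), key=lambda item: item["id"])

for item in instances:
    print(f'id={item["id"]} confidence={item["confidence"]} '
          f'frame={item["fileName"]} at {item["timestamp"]}')
```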
+
+## Limitations and assumptions
+
+It's important to note the limitations of featured clothing to avoid or mitigate the effects of false detections of images with low quality or low relevancy.
+
+- Pre-condition for the featured clothing is that the person wearing the clothes can be found in the observed people insight.
+- If the face of a person wearing the featured clothing wasn't detected, the results won't include the faces bounding box.
+- If a person in a video wears more than one outfit, the algorithm selects its best outfit as a single featured clothing image.
+- When posed, the tracks are optimized to handle observed people who most often appear on the front.
+- Wrong detections may occur when people are overlapping.
+- Frames containing blurred people are more prone to low quality results.
+
+For more information, see the [limitations of observed people](observed-people-tracing.md#limitations-and-assumptions).
+
+## Next steps
+
+- [Trace observed people in a video](observed-people-tracing.md)
+- [People's detected clothing](detected-clothing.md)
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
In order to upload a video from a URL, change your code to send nu
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null); ```
-## June 2022 release updates
+## July 2022 release updates
+
+### Featured clothing insight (preview)
+
+You can now view the featured clothing of an observed person, when indexing a video using Azure Video Indexer advanced video settings. With the new featured clothing insight information, you can enable more targeted ads placement.
+
+For details on how featured clothing images are ranked and how to view this insight, see [observed people featured clothing](observed-people-featured-clothing.md).
+
+## June 2022
### Create Video Indexer blade improvements in Azure portal
Azure Video Indexer introduces source languages support for STT (speech-to-text)
### Matched person detection capability
-When indexing a video through our advanced video settings, you can view the new matched person detection capability. If there are people observed in your media file, you can now view the specific person who matched each of them through the media player.
+When indexing a video with Azure Video Indexer advanced video settings, you can view the new matched person detection capability. If there are people observed in your media file, you can now view the specific person who matched each of them through the media player.
## November 2021
For more information go to [create an Azure Video Indexer account](https://techc
### People's clothing detection
-When indexing a video through the advanced video settings, you can view the new **PeopleΓÇÖs clothing detection** capability. If there are people detected in your media file, you can now view the clothing type they are wearing through the media player.
+When indexing a video with Azure Video Indexer advanced video settings, you can view the new people's clothing detection capability. If there are people detected in your media file, you can now view the clothing type they are wearing through the media player.
### Face bounding box (preview)
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
When indexing with an API and the response status is OK, you get a detailed JSON
[!INCLUDE [insights](./includes/insights.md)]
-This article examines the Azure Video Indexer output (JSON content). For information about what features and insights are available to you, see [Azure Video Indexer insights](video-indexer-overview.md#video-insights).
+This article examines the Azure Video Indexer output (JSON content). For information about what features and insights are available to you, see [Azure Video Indexer insights](video-indexer-overview.md#video-models).
> [!NOTE] > All the access tokens in Azure Video Indexer expire in one hour.
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Title: What is Azure Video Indexer? description: This article gives an overview of the Azure Video Indexer service. Previously updated : 06/09/2022 Last updated : 08/18/2022
To start extracting insights with Azure Video Indexer, see the [how can I get st
## Compliance, Privacy and Security
-As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
-
-Before uploading any video/image to Azure Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
-
-To learn about compliance, privacy and security in Azure Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
+> [!Important]
+> Before you continue with Azure Video Indexer, read [Compliance, privacy and security](compliance-privacy-security.md).
## What can I do with Azure Video Indexer?
Azure Video Indexer's insights can be applied to many scenarios, among them are:
* Content moderation: Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content.
* Recommendations: Video insights can be used to improve user engagement by highlighting the relevant video moments to users. By tagging each video with additional metadata, you can recommend to users the most relevant videos and highlight the parts of the video that will match their needs.
-## Features
+## Video/audio AI features
+
+The following list shows the insights you can retrieve from your videos using Azure Video Indexer video and audio AI features (models).
-The following list shows the insights you can retrieve from your videos using Azure Video Indexer video and audio models:
+Unless specified otherwise, a model is generally available.
-### Video insights
+### Video models
* **Face detection**: Detects and groups faces appearing in the video.
-* **Celebrity identification**: Azure Video Indexer automatically identifies over 1 million celebrities, like world leaders, actors, actresses, athletes, researchers, business, and tech leaders across the globe. The data about these celebrities can also be found on various websites (IMDB, Wikipedia, and so on).
-* **Account-based face identification**: Azure Video Indexer trains a model for a specific account. It then recognizes faces in the video based on the trained model. For more information, see [Customize a Person model from the Azure Video Indexer website](customize-person-model-with-website.md) and [Customize a Person model with the Azure Video Indexer API](customize-person-model-with-api.md).
-* **Thumbnail extraction for faces** ("best face"): Automatically identifies the best captured face in each group of faces (based on quality, size, and frontal position) and extracts it as an image asset.
-* **Visual text recognition** (OCR): Extracts text that's visually displayed in the video.
+* **Celebrity identification**: Identifies over 1 million celebrities, like world leaders, actors, artists, athletes, researchers, business, and tech leaders across the globe. The data about these celebrities can also be found on various websites (IMDB, Wikipedia, and so on).
+* **Account-based face identification**: Trains a model for a specific account. It then recognizes faces in the video based on the trained model. For more information, see [Customize a Person model from the Azure Video Indexer website](customize-person-model-with-website.md) and [Customize a Person model with the Azure Video Indexer API](customize-person-model-with-api.md).
+* **Thumbnail extraction for faces**: Identifies the best captured face in each group of faces (based on quality, size, and frontal position) and extracts it as an image asset.
+* **Optical character recognition (OCR)**: Extracts text from images like pictures, street signs and products in media files to create insights.
* **Visual content moderation**: Detects adult and/or racy visuals.
* **Labels identification**: Identifies visual objects and actions displayed.
* **Scene segmentation**: Determines when a scene changes in video based on visual cues. A scene depicts a single event and it's composed of a series of consecutive shots, which are semantically related.
The following list shows the insights you can retrieve from your videos using Az
* **Black frame detection**: Identifies black frames presented in the video.
* **Keyframe extraction**: Detects stable keyframes in a video.
* **Rolling credits**: Identifies the beginning and end of the rolling credits at the end of TV shows and movies.
-* **Animated characters detection** (preview): Detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). For more information, see [Animated character detection](animated-characters-recognition.md).
-* **Editorial shot type detection**: Tagging shots based on their type (like wide shot, medium shot, close up, extreme close up, two shot, multiple people, outdoor and indoor, and so on). For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).
-* **Observed people tracking** (preview): detects observed people in videos and provides information such as the location of the person in the video frame (using bounding boxes) and the exact timestamp (start, end) and confidence when a person appears. For more information, see [Trace observed people in a video](observed-people-tracing.md).
- * **People's detected clothing**: detects the clothing types of people appearing in the video and provides information such as long or short sleeves, long or short pants and skirt or dress. The detected clothing is associated with the people wearing it and the exact timestamp (start,end) along with a confidence level for the detection are provided.
-* **Matched person**: matches between people that were observed in the video with the corresponding faces detected. The matching between the observed people and the faces contain a confidence level.
-
-### Audio insights
-
-* **Audio transcription**: Converts speech to text over 50 languages and allows extensions. Supported languages include English US, English United Kingdom, English Australia, Spanish, Spanish(Mexico), French, French(Canada), German, Italian, Mandarin Chinese, Chinese (Cantonese, Traditional), Chinese (Simplified), Japanese, Russian, Portuguese, Hindi, Czech, Dutch, Polish, Danish, Norwegian, Finish, Swedish, Thai, Turkish, Korean, Arabic(Egypt), Arabic(Syrian Arab Republic), Arabic(Israel), Arabic(Iraq), Arabic(Jordan), Arabic(Kuwait), Arabic(Lebanon), Arabic(Oman), Arabic(Qatar), Arabic(Saudi Arabia), Arabic(United Arab Emirates), Arabic(Palestinian Authority) and Arabic Modern Standard (Bahrain) .
-* **Automatic language detection**: Automatically identifies the dominant spoken language. Supported languages include English, Spanish, French, German, Italian, Mandarin Chinese, Japanese, Russian, and Portuguese. If the language can't be identified with confidence, Azure Video Indexer assumes the spoken language is English. For more information, see [Language identification model](language-identification-model.md).
-* **Multi-language speech identification and transcription**: Automatically identifies the spoken language in different segments from audio. It sends each segment of the media file to be transcribed and then combines the transcription back to one unified transcription. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
+* **Animated characters detection**: Detects, groups, and recognizes characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). For more information, see [Animated character detection](animated-characters-recognition.md).
+* **Editorial shot type detection**: Tags shots based on their type (like wide shot, medium shot, close up, extreme close up, two shot, multiple people, outdoor and indoor, and so on). For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).
+* **Observed people tracking** (preview): Detects observed people in videos and provides information such as the location of the person in the video frame (using bounding boxes) and the exact timestamp (start, end) and confidence when a person appears. For more information, see [Trace observed people in a video](observed-people-tracing.md).
+ * **People's detected clothing** (preview): Detects the clothing types of people appearing in the video and provides information such as long or short sleeves, long or short pants and skirt or dress. The detected clothing is associated with the people wearing it and the exact timestamp (start, end) along with a confidence level for the detection are provided. For more information, see [detected clothing](detected-clothing.md).
+ * **Featured clothing** (preview): Captures featured clothing images appearing in a video. You can improve your targeted ads by using the featured clothing insight. For information on how the featured clothing images are ranked and how to get the insights, see [featured clothing](observed-people-featured-clothing.md).
+* **Matched person** (preview): Matches people that were observed in the video with the corresponding faces detected. The matching between the observed people and the faces contains a confidence level.
+
+### Audio models
+
+* **Audio transcription**: Converts speech to text in more than 50 languages and allows extensions. For a comprehensive list of language support by service, see [language support](language-support.md).
+* **Automatic language detection**: Identifies the dominant spoken language. For a comprehensive list of language support by service, see [language support](language-support.md). If the language can't be identified with confidence, Azure Video Indexer assumes the spoken language is English. For more information, see [Language identification model](language-identification-model.md).
+* **Multi-language speech identification and transcription**: Identifies the spoken language in different segments from audio. It sends each segment of the media file to be transcribed and then combines the transcription back to one unified transcription. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
* **Closed captioning**: Creates closed captioning in three formats: VTT, TTML, SRT.
* **Two channel processing**: Auto detects separate transcript and merges to single timeline.
* **Noise reduction**: Clears up telephony audio or noisy recordings (based on Skype filters).
The following list shows the insights you can retrieve from your videos using Az
> [!NOTE]
> The full set of events is available only when you choose **Advanced Audio Analysis** when uploading a file, in upload preset. By default, only silence is detected.
-### Audio and video insights (multi-channels)
+### Audio and video models (multi-channels)
When indexing by one channel, partial results for those models will be available.

* **Keywords extraction**: Extracts keywords from speech and visual text.
* **Named entities extraction**: Extracts brands, locations, and people from speech and visual text via natural language processing (NLP).
-* **Topic inference**: Extracts topics based on various keywords (i.e. keywords 'Stock Exchange', 'Wall Street' will produce the topic 'Economics'). The model uses three different ontologies ([IPTC](https://iptc.org/standards/media-topics/), [Wikipedia](https://www.wikipedia.org/) and the Video Indexer hierarchical topic ontology). The model uses transcription (spoken words), OCR content (visual text), and celebrities recognized in the video using the Video Indexer facial recognition model.
+* **Topic inference**: Extracts topics based on various keywords (that is, keywords 'Stock Exchange', 'Wall Street' will produce the topic 'Economics'). The model uses three different ontologies ([IPTC](https://iptc.org/standards/media-topics/), [Wikipedia](https://www.wikipedia.org/) and the Video Indexer hierarchical topic ontology). The model uses transcription (spoken words), OCR content (visual text), and celebrities recognized in the video using the Video Indexer facial recognition model.
* **Artifacts**: Extracts a rich set of "next level of details" artifacts for each of the models.
* **Sentiment analysis**: Identifies positive, negative, and neutral sentiments from speech and visual text.
When indexing by one channel, partial result for those models will be available.
Before creating a new account, review [Account types](accounts-overview.md).
+### Supported browsers
+
+The following list shows the supported browsers that you can use for the Azure Video Indexer website and for your apps that embed the widgets. The list also shows the minimum supported browser version:
+
+- Edge, version: 16
+- Firefox, version: 54
+- Chrome, version: 58
+- Safari, version: 11
+- Opera, version: 44
+- Opera Mobile, version: 59
+- Android Browser, version: 81
+- Samsung Browser, version: 7
+- Chrome for Android, version: 87
+- Firefox for Android, version: 83
+ ### Start using Azure Video Indexer You can access Azure Video Indexer capabilities in three ways:
You can access Azure Video Indexer capabilities in three ways:
For more information, see [Embed visual widgets in your application](video-indexer-embed-widgets.md). If you're using the website, the insights are added as metadata and are visible in the portal. If you're using APIs, the insights are available as a JSON file.
-## Supported browsers
-
-The following list shows the supported browsers that you can use for the Azure Video Indexer website and for your apps that embed the widgets. The list also shows the minimum supported browser version:
-- Edge, version: 16
-- Firefox, version: 54
-- Chrome, version: 58
-- Safari, version: 11
-- Opera, version: 44
-- Opera Mobile, version: 59
-- Android Browser, version: 81
-- Samsung Browser, version: 7
-- Chrome for Android, version: 87
-- Firefox for Android, version: 83
-
## Next steps

You're ready to get started with Azure Video Indexer. For more information, see the following articles:
azure-vmware Concepts Design Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-design-public-internet-access.md
Your requirements for security controls, visibility, capacity, and operations dr
## Internet Service hosted in Azure
-There are multiple ways to generate a default route in Azure and send it towards your Azure VMware Solution private cloud or on-premise. The options are as follows:
+There are multiple ways to generate a default route in Azure and send it towards your Azure VMware Solution private cloud or on-premises. The options are as follows:
- An Azure firewall in a Virtual WAN Hub.
- A third-party Network Virtual Appliance in a Virtual WAN Hub Spoke Virtual Network.
azure-vmware Concepts Network Design Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md
Title: Concepts - Network design considerations
description: Learn about network design considerations for Azure VMware Solution Previously updated : 03/04/2022 Last updated : 08/19/2022 # Azure VMware Solution network design considerations
To reach vCenter Server and NSX Manager, more specific routes from on-prem need
Now that you've covered Azure VMware Solution network design considerations, you might consider learning more.

- [Network interconnectivity concepts - Azure VMware Solution](concepts-networking.md)
+- [Plan the Azure VMware Solution deployment](plan-private-cloud-deployment.md)
+- [Networking planning checklist for Azure VMware Solution](tutorial-network-checklist.md)
## Recommended content
azure-vmware Connect Multiple Private Clouds Same Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/connect-multiple-private-clouds-same-region.md
The Azure VMware Solution Interconnect feature is available in all regions.
>[!NOTE]
>The **AVS interconnect** feature doesn't check for overlapping IP space the way native Azure vNet peering does before creating the peering. Therefore, it's your responsibility to ensure that there isn't overlap between the private clouds.
>
->In Azure VMware Solution environments, it's possible to configure non-routed, overlapping IP deployments on NSX segments that aren't routed to Azure. These don't cause issues with the AVS Interconnect feature, as it only routes between the NSX T0 on each private cloud.
+>In Azure VMware Solution environments, it's possible to configure non-routed, overlapping IP deployments on NSX segments that aren't routed to Azure. These don't cause issues with the AVS Interconnect feature, as it only routes between the NSX-T Data Center T0 gateway on each private cloud.
## Add connection between private clouds
azure-vmware Deploy Disaster Recovery Using Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-vmware-hcx.md
This guide covers the following replication scenarios:
1. After selecting **Test**, the recovery operation begins.
-1. When finished, you can check the new VM in the Azure VMware Solution private cloud vCenter.
+1. When finished, you can check the new VM in the Azure VMware Solution private cloud vCenter Server.
:::image type="content" source="./media/disaster-recovery-virtual-machines/verify-test-recovery.png" alt-text="Screenshot showing the check recovery operation summary." border="true" lightbox="./media/disaster-recovery-virtual-machines/verify-test-recovery.png":::
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
Zerto is a disaster recovery solution designed to minimize downtime of VMs shoul
| Component | Description |
| -- | -- |
| **Zerto Virtual Manager (ZVM)** | Management application for Zerto implemented as a Windows service installed on a Windows VM. The private cloud administrator installs and manages the Windows VM. The ZVM enables Day 0 and Day 2 disaster recovery configuration. For example, configuring primary and disaster recovery sites, protecting VMs, recovering VMs, and so on. However, it doesn't handle the replication data of the protected customer VMs. |
-| **Virtual Replication appliance (vRA)** | Linux VM to handle data replication from the source to the replication target. One instance of vRA is installed per ESXi host, delivering a true scale architecture that grows and shrinks along with the private cloud's hosts. The VRA manages data replication to and from protected VMs to its local or remote target, storing the data in the journal. |
+| **Virtual Replication appliance (vRA)** | Linux VM to handle data replication from the source to the replication target. One instance of vRA is installed per ESXi host, delivering a true scale architecture that grows and shrinks along with the private cloud's hosts. The vRA manages data replication to and from protected VMs to its local or remote target, storing the data in the journal. |
| **Zerto ESXi host driver** | Installed on each VMware ESXi host configured for Zerto disaster recovery. The host driver intercepts a vSphere VM's IO and sends the replication data to the chosen vRA for that host. The vRA is then responsible for replicating the VM's data to one or more disaster recovery targets. |
| **Zerto Cloud Appliance (ZCA)** | Windows VM only used when Zerto is used to recover vSphere VMs as Azure Native IaaS VMs. The ZCA is composed of:<ul><li>**ZVM:** A Windows service that hosts the UI and integrates with the native APIs of Azure for management and orchestration.</li><li>**VRA:** A Windows service that replicates the data from or to Azure.</li></ul>The ZCA integrates natively with the platform it's deployed on, allowing you to use Azure Blob storage within a storage account on Microsoft Azure. As a result, it ensures the most cost-efficient deployment on each of these platforms. |
| **Virtual Protection Group (VPG)** | Logical group of VMs created on the ZVM. Zerto allows configuring disaster recovery, Backup, and Mobility policies on a VPG. This mechanism enables a consistent set of policies to be applied to a group of VMs. |
To learn more about Zerto platform architecture, see the [Zerto Platform Archite
You can use Zerto with Azure VMware Solution for the following three scenarios.
-### Scenario 1: On-premises VMware to Azure VMware Solution disaster recovery
+### Scenario 1: On-premises VMware vSphere to Azure VMware Solution disaster recovery
In this scenario, the primary site is an on-premises vSphere-based environment. The disaster recovery site is an Azure VMware Solution private cloud.
azure-vmware Disable Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disable-internet-access.md
Last updated 05/12/2022
# Disable internet access or enable a default route
-In this article, you'll learn how to disable Internet access or enable a default route for your Azure VMware Solution private cloud. There are multiple ways to set up a default route. You can use a Virtual WAN hub, Network Virtual Appliance in a Virtual Network, or use a default route from on-premise. If you don't set up a default route, there will be no Internet access to your Azure VMware Solution private cloud.
+In this article, you'll learn how to disable Internet access or enable a default route for your Azure VMware Solution private cloud. There are multiple ways to set up a default route. You can use a Virtual WAN hub, Network Virtual Appliance in a Virtual Network, or use a default route from on-premises. If you don't set up a default route, there will be no Internet access to your Azure VMware Solution private cloud.
With a default route setup, you can achieve the following tasks:

- Disable Internet access to your Azure VMware Solution private cloud.
azure-vmware Enable Vmware Cds With Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-vmware-cds-with-azure.md
+
+ Title: Enable VMware Cloud Director service with Azure VMware Solution (Public Preview)
+description: This article explains how to enable VMware Cloud Director service with Azure VMware Solution so that enterprise customers can use Azure VMware Solution private cloud resources for virtual datacenters.
+ Last updated : 08/09/2022++
+# Enable VMware Cloud Director service with Azure VMware Solution (Preview)
+
+[VMware Cloud Director service (CDS)](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/getting-started-with-vmware-cloud-director-service/GUID-149EF3CD-700A-4B9F-B58B-8EA5776A7A92.html) with Azure VMware Solution enables enterprise customers to use APIs or the Cloud Director service portal to self-service provision and manage virtual datacenters through multi-tenancy, with reduced time and complexity.
+
+In this article, you'll learn how to enable VMware Cloud Director service (CDS) with Azure VMware Solution so that enterprise customers can use Azure VMware Solution private clouds and their underlying resources for virtual datacenters.
+
+>[!IMPORTANT]
+> Cloud Director service (CDS) is now available to use with Azure VMware Solution under the Enterprise Agreement (EA) model only. It's not suitable for MSPs or hosters to resell Azure VMware Solution capacity to customers at this point. For more information, see [Azure Service terms](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/EAEAS#GeneralServiceTerms).
+
+## Reference architecture
+The following diagram shows the typical architecture for Cloud Director service with Azure VMware Solution and how they're connected. Communications to Azure VMware Solution endpoints from Cloud Director service are supported by an SSL reverse proxy.
++
+VMware Cloud Director supports multi-tenancy by using organizations. A single organization can have multiple organization virtual datacenters (VDCs). Each organization's VDC can have its own dedicated Tier-1 router (Edge Gateway), which is further connected with the provider's managed, shared Tier-0 router.
+
+## Connect tenants and their organization virtual datacenters to Azure vNet-based resources
+
+To provide access to vNet-based Azure resources, each tenant can have its own dedicated Azure vNet with an Azure VPN gateway. A site-to-site VPN is established between the customer's organization VDC and the Azure vNet. To achieve this connectivity, the provider provides a public IP address to the organization VDC. The organization VDC administrator can then configure IPsec VPN connectivity from the Cloud Director service portal.
++
+As shown in the diagram above, organization 01 has two organization virtual datacenters (VDCs): VDC1 and VDC2. Each organization VDC has its own Azure vNet connected with its respective organization VDC Edge gateway through IPsec VPN.
+Providers provide public IP addresses to the organization VDC Edge gateway for IPsec VPN configuration. An organization VDC Edge gateway firewall blocks all traffic by default; specific allow rules need to be added on the organization Edge gateway firewall.
+
+Organization VDCs can be part of a single organization and still provide isolation between them. For example, VM1 hosted in organization VDC1 cannot ping Azure VM JSVM2 for tenant2.
+
+### Prerequisites
+- The organization VDC is configured with an Edge gateway and has public IP addresses assigned to it by the provider to establish the IPsec VPN.
+- Tenants have created a routed organization VDC network in the tenant's virtual datacenter.
+- Test VM1 and VM2 are created in the Organization VDC1 and VDC2 respectively. Both VMs are connected to the routed orgVDC network in their respective VDCs.
+- Have a dedicated [Azure vNET](tutorial-configure-networking.md#create-a-vnet-manually) configured for each tenant. For this example, we created Tenant1-vNet and Tenant2-vNet for tenant1 and tenant2 respectively.
+- Create an [Azure Virtual network gateway](tutorial-configure-networking.md#create-a-virtual-network-gateway) for vNETs created earlier.
+- Deploy Azure VMs JSVM1 and JSVM2 for tenant1 and tenant2 for test purposes.
+
+> [!Note]
+> CDS supports a policy-based VPN. Azure VPN gateway configures route-based VPN by default; to configure a policy-based VPN, the policy-based traffic selector needs to be enabled.
+
+### Configure Azure vNet
+Create the following components in the tenant's dedicated Azure vNet to establish an IPsec tunnel connection with the tenant's organization VDC Edge gateway.
+- Azure Virtual network gateway
+- Local network gateway.
+- Add IPSEC connection on VPN gateway.
+- Edit connection configuration to enable policy-based VPN.
+
+### Create Azure virtual network gateway
+To create an Azure virtual network gateway, see the [create-a-virtual-network-gateway tutorial](tutorial-configure-networking.md#create-a-virtual-network-gateway).
+
+### Create local network gateway
+1. Log in to the Azure portal, select **Local network gateway** from the Marketplace, and then select **Create**.
+1. The local network gateway represents the remote site details. Therefore, provide the tenant1 organization VDC public IP address and organization VDC network details to create the local endpoint for tenant1.
+1. Under **Instance details**, select **Endpoint** as IP address.
+1. Add the IP address (the public IP address from the tenant's organization VDC Edge gateway).
+1. Under **Address space**, add the tenant's organization VDC network.
+1. Repeat steps 1-5 to create a local network gateway for tenant 2. (An equivalent Azure CLI sketch follows these steps.)
+
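If you prefer the command line, a minimal Azure CLI sketch of the same local network gateway setup might look like the following; the resource group, gateway name, public IP, and address prefix are placeholders, not values taken from this article.

```bash
# Placeholders throughout; use the OrgVDC Edge gateway public IP and the
# routed OrgVDC network CIDR for the tenant.
az network local-gateway create \
  --resource-group tenant1-rg \
  --name tenant1-orgvdc-lgw \
  --gateway-ip-address <orgvdc-edge-public-ip> \
  --local-address-prefixes <orgvdc-network-cidr>
```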
+### Create IPSEC connection on VPN gateway
+1. Select the tenant1 VPN gateway (created earlier) and then select **Connection** (in the left pane) to add a new IPsec connection with the tenant1 organization VDC Edge gateway.
+1. Enter the following details.
+
+ | **Setting** | **Value** |
+ |:- | :--|
+ | Connection Type | Site to Site |
+ | VPN Gateway | Tenant's VPN Gateway |
+ | Local Network Gateway | Tenant's Local Gateway |
+ | PSK | Shared Key (provide a password) |
+ | IKE Protocol | IKEV2 (ORG-VDC is using IKEv2) |
+
+1. Select **OK** to create the connection. (An equivalent Azure CLI sketch follows these steps.)
+
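A hedged Azure CLI equivalent of this step could look like the following sketch; the gateway and connection names are placeholders and assume the local network gateway from the previous section.

```bash
# Placeholders throughout; the shared key must match the value configured
# later on the OrgVDC Edge gateway.
az network vpn-connection create \
  --resource-group tenant1-rg \
  --name tenant1-orgvdc-connection \
  --vnet-gateway1 tenant1-vpn-gateway \
  --local-gateway2 tenant1-orgvdc-lgw \
  --shared-key "<pre-shared-key>"
```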
+### Configure IPsec Connection
+Cloud Director service supports a policy-based VPN. Azure VPN gateway configures route-based VPN by default; to configure a policy-based VPN, the policy-based traffic selector needs to be enabled, as shown in the following steps and the CLI sketch after them.
+
+1. Select the connection you created earlier, and then select **Configuration** to view the default settings.
+1. **IPSEC/IKE Policy**
+1. **Enable policy based traffic selector**
+1. Modify all other parameters to match what you have in OrgVDC.
+ >[!Note]
+ > Both the source and destination of the tunnel should have identical settings for IKE, SA, DPD, and so on.
+1. Select **Save**.
+
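As a sketch only, the same change can be made with the Azure CLI; a custom IPsec/IKE policy has to exist on the connection before policy-based traffic selectors can be enabled, the parameter values below are examples that must be adjusted to match the OrgVDC security profile, and the resource names are placeholders.

```bash
# Example policy values only; align IKE/IPsec settings with the OrgVDC profile.
az network vpn-connection ipsec-policy add \
  --resource-group tenant1-rg \
  --connection-name tenant1-orgvdc-connection \
  --ike-encryption AES256 --ike-integrity SHA256 --dh-group DHGroup14 \
  --ipsec-encryption AES256 --ipsec-integrity SHA256 --pfs-group None \
  --sa-lifetime 27000 --sa-max-size 102400000

# Enable policy-based traffic selectors on the connection.
az network vpn-connection update \
  --resource-group tenant1-rg \
  --name tenant1-orgvdc-connection \
  --use-policy-based-traffic-selectors true
```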
+### Configure VPN on organization VDC Edge router
+1. Log in to the organization's CDS tenant portal and select the tenant's Edge gateway.
+1. Select **IPSEC VPN** option under **Services** and then select **New**.
+1. Under general settings, provide a **Name** and select the desired security profile. Ensure that the security profile settings (IKE, Tunnel, and DPD configuration) are the same on both sides of the IPsec tunnel.
+1. Modify Azure VPN gateway to match the Security profile, if necessary. You can also do security profile customization from CDS tenant portal.
+
+ >[!Note]
+ > The VPN tunnel won't be established if these settings are mismatched.
+1. Under **Peer Authentication Mode**, provide the same pre-shared key that is used at the Azure VPN gateway.
+1. Under **Endpoint configuration**, add the organization's public IP and network details in the local endpoint, and the Azure vNet details in the remote endpoint configuration.
+1. Under **Ready to complete**, review applied configuration.
+1. Select **Finish** to apply configuration.
+
+### Apply firewall configuration
+Organization VDC Edge router firewall denies traffic by default. You'll need to apply specific rules to enable connectivity. Use the following steps to apply firewall rules.
+
+1. Add IP set in CDS portal
+ 1. Log in to the Edge router, then select **IP SETS** under the **Security** tab in the left pane.
+ 1. Select **New** to create IP sets.
+ 1. Enter **Name** and **IP address** of test VM deployed in orgVDC.
+ 1. Create another IP set for Azure vNET for this tenant.
+2. Apply firewall rules on ORG VDC Edge router.
+ 1. Under **Edge gateway**, select **Edge gateway** and then select **firewall** under **services**.
+ 1. Select **Edit rules**.
+ 1. Select **NEW ON TOP** and enter rule name.
+ 1. Add **source** and **destination** details. Use created IPSET in source and destination.
+ 1. Under **Action**, select **Allow**.
+ 1. Select **Save** to apply configuration.
+3. Verify tunnel status
+ 1. Under **Edge gateway**, select **Service**, then select **IPSEC VPN**.
+ 1. Select **View statistics**.
+ The status of the tunnel should show **UP**.
+4. Verify IPsec connection
+ 1. Log in to the Azure VM deployed in the tenant's vNet and ping the tenant's test VM IP address in the tenant's organization VDC.
+ For example, ping VM1 from JSVM1. Similarly, you should be able to ping VM2 from JSVM2.
+You can verify the isolation between the tenants' Azure vNets. Tenant1's VM1 won't be able to ping tenant2's Azure VM JSVM2 in tenant2's Azure vNet.
+
+## Connect Tenant workload to public Internet
+
+- Tenants can use a public IP address in a SNAT configuration to enable Internet access for VMs hosted in the organization VDC. To achieve this connectivity, the provider can provide a public IP address to the organization VDC.
+- Each organization VDC can be created with a dedicated T1 router (created by the provider) with reserved public and private IP addresses for NAT configuration. Tenants can use a public IP SNAT configuration to enable Internet access for VMs hosted in the organization VDC.
+- The organization VDC administrator can create a routed organization VDC network connected to their organization VDC Edge gateway to provide Internet access.
+- The organization VDC administrator can configure SNAT for a specific VM, or use a network CIDR, to provide public connectivity.
+- The organization VDC Edge gateway has a default DENY ALL firewall rule. Organization administrators will need to open appropriate ports to allow access through the firewall by adding a new firewall rule. Virtual machines configured on such an organization VDC network used in the SNAT configuration should then be able to access the Internet.
+
+### Prerequisites
+1. Public IP is assigned to the organization VDC Edge router.
+ To verify, log in to the organization's VDC. Under **Networking**> **Edges**, select **Edge Gateway**, then select **IP allocations** under **IP management**. You should see a range of assigned IP address there.
+2. Create a routed organization VDC network. (Connect the organization VDC network to the Edge gateway that has the public IP address assigned.)
+
+### Apply SNAT configuration
+1. Log in to Organization VDC. Navigate to your Edge gateway and then select **NAT** under **Services**.
+2. Select **New** to add new SNAT rule.
+3. Provide **Name** and select **Interface type** as SNAT.
+4. Under **External IP**, enter public IP address from public IP pool assigned to your orgVDC Edge router.
+5. Under **Internal IP**, enter the IP address for your test VM.
+ This IP address is one of the organization VDC network IP addresses assigned to the VM.
+6. **State** should be enabled.
+7. Under **Priority**, select a higher number.
+ For example, 4096.
+8. Select **Save** to save the configuration.
+
+### Apply firewall rule
+1. Log in to Organization VDC and navigate to **Edge Gateway**, then select **IP set** under security.
+2. Create an IPset. Provide IP address of your VM (you can use CIDR also). Select **Save**.
+3. Under **services**, select **Firewall**, then select **Edit rules**.
+4. Select **New ON TOP** and create a firewall rule to allow desired port and destination.
+1. Select the **IPset** you created earlier as the source. Under **Action**, select **Allow**.
+1. Select **Keep** to save the configuration.
+1. Log in to your test VM and ping your destination address to verify outbound connectivity.
+
+## Migrate workloads to Cloud Director Service on Azure VMware Solution
+
+VMware Cloud Director Availability can be used to migrate VMware Cloud Director workload into Cloud Director service on Azure VMware Solution. Enterprise customers can drive self-serve one-way warm migration from the on-premises Cloud Director Availability vSphere plugin, or they can run the Cloud Director Availability plugin from the provider-managed Cloud Director instance and move workloads into Azure VMware Solution.
+
+For more information about VMware Cloud Director Availability, see [VMware Cloud Director Availability | Disaster Recovery & Migration](https://www.vmware.com/products/cloud-director-availability.html)
+
+## FAQs
+**Question**: What are the supported Azure regions for the VMware Cloud Director service?
+
+**Answer**: This offering is supported in all Azure regions where Azure VMware Solution is available except for Brazil South and South Africa. Ensure that the region you wish to connect to Cloud Director service is within a 150-millisecond round-trip latency of Cloud Director service.
+
+## Next steps
+[VMware Cloud Director Service Documentation](https://docs.vmware.com/en/VMware-Cloud-Director-service/index.html)
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
CSPs must use [Microsoft Partner Center](https://partner.microsoft.com) to enabl
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from Partner Center.

>[!IMPORTANT]
->Azure VMware Solution service does not provide a multi-tenancy required. Hosting partners requiring it are not supported.
+>Azure VMware Solution service does not provide multi-tenancy support. Hosting partners requiring it are not supported.
1. Configure the CSP Azure plan:
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Most customers have an existing on-premises deployment of vRealize Operations to
:::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-1.png" alt-text="Diagram showing the on-premises vRealize Operations managing Azure VMware Solution deployment." border="false":::
-To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter Server and NSX-T Manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premise.
+To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter Server and NSX-T Manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premises.
> [!TIP]
> Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) for a step-by-step guide to installing vRealize Operations Manager.
baremetal-infrastructure About The Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/about-the-public-preview.md
In particular, this article highlights Public Preview features.
## Unlock the benefits of Azure

* Establish a consistent hybrid deployment strategy
-* Operate seamlessly with on-premise Nutanix Clusters in Azure
+* Operate seamlessly with on-premises Nutanix Clusters in Azure
* Build and scale without constraints
* Invent for today and be prepared for tomorrow with NC2 on Azure
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
client-request-id: 00000000-0000-0000-0000-000000000000
}
```
+## Create a pool without public IP addresses using ARM template
+
+You can use this [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/batch-pool-no-public-ip/) to create a pool without public IP addresses using an Azure Resource Manager (ARM) template.
+
+The following resources will be deployed by the template:
+
+- Azure Batch account with IP firewall configured to block public network access to Batch node management endpoint
+- Virtual network with network security group to block internet outbound access
+- Private endpoint to access Batch node management endpoint of the account
+- DNS integration for the private endpoint using private DNS zone linked to the virtual network
+- Batch pool deployed in the virtual network and without public IP addresses
+
+If you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.batch%2Fbatch-pool-no-public-ip%2Fazuredeploy.json)
+
+> [!NOTE]
+> If the private endpoint deployment fails due to an invalid groupId "nodeManagement", check that the region is in the supported list and that you've already opted in with [Simplified compute node communication](simplified-compute-node-communication.md). Choose the right region, opt in your Batch account, and then retry the deployment.
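If you'd rather deploy from the command line, a minimal Azure CLI sketch might look like the following; the resource group name and location are placeholders, the template URI is the one behind the **Deploy to Azure** button above, and any parameters the template requires would be passed with `--parameters`.

```bash
# Placeholder resource group; pick a region that supports simplified node communication.
az group create --name batch-no-public-ip-rg --location eastus

# Deploy the quickstart template; add --parameters as required by the template.
az deployment group create \
  --resource-group batch-no-public-ip-rg \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.batch/batch-pool-no-public-ip/azuredeploy.json
```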
+
## Outbound access to the internet

In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). Note that NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
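For illustration, a minimal Azure CLI sketch of attaching a NAT gateway to the pool's subnet might look like the following; every name here is a placeholder, and the subnet is assumed to be the one your Batch pool is deployed into.

```bash
# Placeholders throughout; adjust names, resource group, and region as needed.
az network public-ip create --resource-group my-rg --name nat-pip --sku Standard

az network nat gateway create \
  --resource-group my-rg --name batch-nat \
  --public-ip-addresses nat-pip

# Associate the NAT gateway with the subnet used by the Batch pool.
az network vnet subnet update \
  --resource-group my-rg --vnet-name my-vnet --name batch-subnet \
  --nat-gateway batch-nat
```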
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
After you've created infrastructure for your new SAP system using *Azure Center for SAP solutions (ACSS)*, you need to install the SAP software.
-In this how-to guide, you'll learn how to upload and install all the required components in your Azure account. You can either [run a pre-installation script to automate the upload process](#upload-components-with-script) or [manually upload the components](#upload-components-manually). Then, you can [run the software installation wizard](#install-software).
+In this how-to guide, you'll learn how to upload and install all the required components in your Azure account. You can either [run a pre-installation script to automate the upload process](#option-1-upload-software-components-with-script) or [manually upload the components](#option-2-upload-software-components-manually). Then, you can [run the software installation wizard](#install-software).
## Prerequisites
In this how-to guide, you'll learn how to upload and install all the required co
## Supported software
-ACSS supports the following SAP software version: **S/4HANA 1909 SPS 03, SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00**.
+ACSS supports the following SAP software versions: **S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00**.
The following table shows the operating system (OS) software version compatibility with each supported SAP software version:

| Publisher | Version | Generation SKU | Patch version name | Supported SAP Software Version |
| -- | -- | -- | -- | -- |
-| Red Hat | RHEL-SAP-HA (8.2 HA Pack) | 82sapha-gen2 | 8.2.2021091202 | S/4HANA 1909 SPS 03,SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00 |
-| Red Hat | RHEL-SAP-HA (8.4 HA Pack) | 84sapha-gen2 | 8.4.2021091202 | S/4HANA 1909 SPS 03,SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00 |
-| SUSE | sles-sap-15-sp3 | gen2 | 2022.01.26 | S/4HANA 1909 SPS 03,SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00 |
+| Red Hat | RHEL-SAP-HA (8.2 HA Pack) | 82sapha-gen2 | 8.2.2021091202 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 |
+| Red Hat | RHEL-SAP-HA (8.4 HA Pack) | 84sapha-gen2 | 8.4.2021091202 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 |
+| SUSE | sles-sap-15-sp3 | gen2 | 2022.01.26 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 |
| SUSE | sles-sap-12-sp4 | gen2 | 2022.02.01 | S/4HANA 1909 SPS 03 |

## Required components
The following components are necessary for the SAP installation:
- SAP software installation media (part of the `sapbits` container described later in this article)
  - All essential SAP packages (*SWPM*, *SAPCAR*, etc.)
- - SAP software (for example, *S/4HANA 1909 SPS 03, S/4 HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00*)
-- Supporting software packages for the installation process
+ - SAP software (for example, *S/4HANA 2021 ISS 00*)
+- Supporting software packages for the installation process (part of the `deployervmpackages` container described later in this article)
  - `pip3` version `pip-21.3.1.tar.gz`
  - `wheel` version 0.37.1
  - `jq` version 1.6
The following components are necessary for the SAP installation:
- The SAP URL to download the software (`url`)
- Template or INI files, which are stack XML files required to run the SAP packages.
-## Upload components with script
+## Option 1: Upload software components with script
You can use the following method to upload the SAP components to your Azure account using scripts. Then, you can [run the software installation wizard](#install-software) to install the SAP software.
-You also can [upload the components manually](#upload-components-manually) instead.
+You also can [upload the components manually](#option-2-upload-software-components-manually) instead.
### Set up storage account
-Before you can download the software, set up an Azure Storage account for the downloads.
+Before you can download the software, set up an Azure Storage account for storing the software.
1. [Create an Azure Storage account through the Azure portal](../storage/common/storage-account-create.md). Make sure to create the storage account in the same subscription as your SAP system infrastructure.
Before you can download the software, set up an Azure Storage account for the do
1. On the **New container** pane, for **Name**, enter `sapbits`.
1. Select **Create**.
-
+
+ 1. Grant the ACSS application *Azure SAP Workloads Management* the **Storage Blob Data Reader** and **Reader and Data Access** roles on this storage account. (A CLI sketch of these steps follows.)
+
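As a sketch under assumptions (all names are placeholders, and `<acss-app-object-id>` stands for the service principal object ID of the *Azure SAP Workloads Management* application in your tenant), the storage account setup above could also be done with the Azure CLI:

```bash
# Placeholders throughout; create the account in the same subscription as the SAP infrastructure.
az storage account create --resource-group my-sap-rg --name <storage-account> --sku Standard_LRS

# Requires data-plane RBAC for the signed-in user; alternatively pass an account key.
az storage container create --account-name <storage-account> --name sapbits --auth-mode login

STORAGE_ID=$(az storage account show --resource-group my-sap-rg --name <storage-account> --query id -o tsv)

# Grant the ACSS application the two roles called out in the step above.
az role assignment create --assignee <acss-app-object-id> --role "Storage Blob Data Reader" --scope "$STORAGE_ID"
az role assignment create --assignee <acss-app-object-id> --role "Reader and Data Access" --scope "$STORAGE_ID"
```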
+### Download supporting software
+After setting up your Azure Storage account, you need an Ubuntu VM to run scripts that download the software components.
1. Create an Ubuntu 20.04 VM in Azure.
1. Sign in to the VM.
Before you can download the software, set up an Azure Storage account for the do
1. When asked if you have a storage account, enter `Y`.
-1. When asked for the base path to the SAP storage account, enter the container path. To find the container path:
+1. When asked for the base path to the software storage account, enter the container path. To find the container path:
1. Find the storage account that you created in the Azure portal.
Before you can download the software, set up an Azure Storage account for the do
1. Copy the **Key** value.
-1. In the Azure portal, find the container named `sapbits` in the storage account that you created.
+1. Once the script completes successfully, in the Azure portal, find the container named `sapbits` in the storage account that you created.
1. Make sure the deployer VM packages are now visible in `sapbits`.
Before you can download the software, set up an Azure Storage account for the do
### Download SAP media
-After setting up your Azure Storage account, you can download the SAP installation media required to install the SAP software.
+You can download the SAP installation media required to install the SAP software, using a script as described in this section.
-1. Sign in to the Ubuntu VM that you created in the [previous section](#set-up-storage-account).
+1. Sign in to the Ubuntu VM that you created in the [previous section](#download-supporting-software).
-1. Install ansible 2.9.27 on the ubuntu VM
+1. Install Ansible 2.9.27 on the Ubuntu VM.
```bash
sudo pip3 install ansible==2.9.27
After setting up your Azure Storage account, you can download the SAP installati
- For `<username>`, use your SAP username.
- For `<password>`, use your SAP password.
- For `<bom_base_name>`, use the SAP version you want to install, that is, **_S41909SPS03_v0011ms_**, **_S42020SPS03_v0003ms_**, or **_S4HANA_2021_ISS_v0001ms_**.
- - For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the [previous section](#set-up-storage-account).
- - For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the [previous section](#set-up-storage-account).
+ - For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the [previous section](#download-supporting-software).
+ - For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the [previous section](#download-supporting-software).
The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`
After setting up your Azure Storage account, you can download the SAP installati
Now, you can [install the SAP software](#install-software) using the installation wizard.
-## Upload components manually
+## Option 2: Upload software components manually
-You can use the following method to download and upload the SAP components to your Azure account manually. Then, you can [run the software installation wizard](#install-software) to install the SAP software.
+You can use the following method to download and upload the SAP components to your Azure storage account manually. Then, you can [run the software installation wizard](#install-software) to install the SAP software.
-You also can [run scripts to automate this process](#upload-components-with-script) instead.
+You also can [run scripts to automate this process](#option-1-upload-software-components-with-script) instead.
-1. Create a new Azure storage account for the SAP components.
+1. Create a new Azure storage account for storing the software components.
1. Grant the ACSS application *Azure SAP Workloads Management* **Storage Blob Data Reader** and **Reader and Data Access** role access to this storage account. 1. Create a container within the storage account. You can choose any container name; for example, **sapbits**.
-1. Create two folders within the contained, named **deployervmpackages** and **sapfiles**.
+1. Create two folders within the container, named **deployervmpackages** and **sapfiles**.
> [!WARNING]
> Don't change the folder name structure for any steps in this process. Otherwise, the installation process can fail.
1. Download the supporting software packages listed in the [required components list](#required-components) to your local computer.
You also can [run scripts to automate this process](#upload-components-with-scri
1. **SUM20SP14_latest**
- - For S/4 HANA 2020 SPS 03, make following folders
+ - For S/4HANA 2020 SPS 03, make following folders
1. **HANA_2_00_063_v0001ms**
1. **S42020SPS03_v0003ms**
1. **SWPM20SP12_latest**
1. **SUM20SP14_latest**
- - For SAP S/4HANA 2021 ISS 00, make following folders
+ - For S/4HANA 2021 ISS 00, make following folders
1. **HANA_2_00_063_v0001ms**
1. **S4HANA_2021_ISS_v0001ms**
1. **SWPM20SP12_latest**
You also can [run scripts to automate this process](#upload-components-with-scri
1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
- - For S/4 HANA 2020 SPS 03,
+ - For S/4HANA 2020 SPS 03,
1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
- - For SAP S/4HANA 2021 ISS 00,
+ - For S/4HANA 2021 ISS 00,
1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
You also can [run scripts to automate this process](#upload-components-with-scri
1. [S41909SPS03_v0011ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scsha-inifile-param.j2) 1. [S41909SPS03_v0011ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-web-inifile-param.j2)
- - For S/4 HANA 2020 SPS 03,
+ - For S/4HANA 2020 SPS 03,
1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_055_v1_install.rsp.j2)
1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_install.rsp.j2)
1. [S42020SPS03_v0003ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-app-inifile-param.j2)
You also can [run scripts to automate this process](#upload-components-with-scri
1. [S42020SPS03_v0003ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scs-inifile-param.j2)
1. [S42020SPS03_v0003ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scsha-inifile-param.j2)
- - For SAP S/4HANA 2021 ISS 00,
+ - For S/4HANA 2021 ISS 00,
1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_055_v1_install.rsp.j2)
1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_install.rsp.j2)
1. [NW_ABAP_ASCS_S4HANA2021.CORE.HDB.AB](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_ASCS_S4HANA2021.CORE.HDB.ABAP_Distributed.params)
You also can [run scripts to automate this process](#upload-components-with-scri
1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
- - For S/4 HANA 2020 SPS 03,
+ - For S/4HANA 2020 SPS 03,
1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
- - For SAP S/4HANA 2021 ISS 00,
+ - For S/4HANA 2021 ISS 00,
1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
To install the SAP software on Azure, use the ACSS installation wizard.
1. For **Software version**, select **SAP S/4HANA 1909 SPS 03**, **SAP S/4HANA 2020 SPS 03**, or **SAP S/4HANA 2021 ISS 00**. Only the versions supported by the OS version that was previously used to deploy the infrastructure are available for selection.
- 1. For **BOM directory location**, select **Browse** and find the path to your BOM file. For example, `/sapfiles/boms/S41909SPS03_v0010ms.yaml`.
+ 1. For **BOM directory location**, select **Browse** and find the path to your BOM file. For example, `https://<your-storage-account>.blob.core.windows.net/sapbits/sapfiles/boms/S41909SPS03_v0010ms.yaml`.
- 1. For **SAP FQDN:**, provide a fully qualified domain name (FQDN) for your SAP system. For example, `sap.contoso.com`.
+ 1. For **SAP FQDN**, provide a fully qualified domain name (FQDN) for your SAP system. For example, `sap.contoso.com`.
- 1. For High Availability (HA) systems only, enter the client identifier for the SONITH Fencing Agent service principal for **Fencing client ID**.
+ 1. For High Availability (HA) systems only, enter the client identifier for the STONITH Fencing Agent service principal for **Fencing client ID**.
- 1. For High Availability (HA) systems only, enter the password for the SONITH Fencing Agent service principal for **Fencing client password**.
+ 1. For High Availability (HA) systems only, enter the password for the STONITH Fencing Agent service principal for **Fencing client password**.
1. For **SSH private key**, provide the SSH private key that you created or selected as part of your infrastructure deployment.
To install the SAP software on Azure, use the ACSS installation wizard.
1. Wait for the installation to complete. The process takes approximately three hours. You can see the progress, along with estimated times for each step, in the wizard.
-1. After the installation completes, sign in with your SAP system credentials.
+1. After the installation completes, sign in with your SAP system credentials. Refer to [this section](manage-virtual-instance.md) to find the SAP system and HANA DB credentials for the newly installed system.
## Limitations
The following are known limitations and issues.
You can install a maximum of 10 Application Servers, excluding the Primary Application Server.
-### SAP package versions
+### SAP package version changes
When SAP changes the version of packages for a component in the BOM, you might encounter problems with the automated installation shell script. It's recommended to download your SAP installation media as soon as possible to avoid issues.
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.cn/translator/text/batch
"inputs": [ { "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+ "sourceUrl": "https://<storage_acount>.blob.core.chinacloudapi.cn/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
}, "targets": [ {
- "targetUrl": "https://my.blob.core.windows.net/target-zh-Hans?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "targetUrl": "https://<storage_acount>.blob.core.chinacloudapi.cn/target-zh-Hans?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
"language": "zh-Hans" } ]
cognitive-services Previous Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/previous-updates.md
This article contains a list of previously recorded updates for Azure Cognitive
### Text Analytics for health updates
-* A new model version `2021-05-15` for the `/health` endpoint and on-premise container which provides
+* A new model version `2021-05-15` for the `/health` endpoint and on-premises container which provides
* 5 new entity types: `ALLERGEN`, `CONDITION_SCALE`, `COURSE`, `EXPRESSION` and `MUTATION_TYPE`, * 14 new relation types, * Assertion detection expanded for new entity types and
This article contains a list of previously recorded updates for Azure Cognitive
* This parameter lets you specify select PII entities, as well as those not supported by default for the input language. * Updated client libraries, which include asynchronous and text analytics for health operations.
-* A new model version `2021-03-01` for text analytics for health API and on-premise container which provides:
+* A new model version `2021-03-01` for text analytics for health API and on-premises container which provides:
* A rename of the `Gene` entity type to `GeneOrProtein`. * A new `Date` entity type. * Assertion detection which replaces negation detection.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
The service provides access to many different models. Models describe a family o
## Naming convention
-Azure OpenAI's models follow a standard naming convention: `{task}-{model name}-{version #}`. For example, our most powerful natural language model is called `text-davinci-001` and a codex series model would look like `code-cushman-001`.
+Azure OpenAI's models follow a standard naming convention: `{task}-{model name}-{version #}`. For example, our most powerful natural language model is called `text-davinci-001` and a Codex series model would look like `code-cushman-001`.
> Older versions of the GPT-3 models are available as `ada`, `babbage`, `curie`, `davinci` and do not follow these conventions. These models are primarily intended to be used for fine-tuning and search.
The Codex models are descendants of our base GPT-3 models that can understand an
They're most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
-Currently we only offer one codex model: `code-cushman-001`.
+Currently we only offer one Codex model: `code-cushman-001`.
## Embeddings Models
cognitive-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/fine-tuning.md
The Azure OpenAI Service lets you tailor our models to your personal datasets us
- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) - Access granted to service in the desired Azure subscription. This service is currently invite only. You can fill out a new use case request here: <https://aka.ms/oai/access>. Please open an issue on this repo to contact us if you have an issue-- The following python libraries: os, requests, json
+- The following Python libraries: os, requests, json
- An Azure OpenAI Service resource with a model deployed. If you don't have a resource/model the process is documented in our [resource deployment guide](../how-to/create-resource.md) ## Fine-tuning workflow
The fine-tuning workflow requires the following steps:
Your training data set consists of input and output examples that show how you would like the model to perform.
-The training dataset you use **must** be a JSON lines (JSONL) document where each line is a prompt-completion pair and a single example. The OpenAI python CLI provides a useful data preparation tool to easily convert your data into this file format.
+The training dataset you use **must** be a JSON lines (JSONL) document where each line is a prompt-completion pair and a single example. The OpenAI Python CLI provides a useful data preparation tool to easily convert your data into this file format.
Here's an example of the format:
Once you've prepared your dataset, you can upload your files to the service. We
For large data files, we recommend you import from Azure Blob. Large files can become unstable when uploaded through multipart forms because the requests are atomic and can't be retried or resumed.
-The following python code will create a sample dataset and show how to upload a file and print the returned ID. Make sure to save the IDs returned as you'll need them for the fine-tuning training job creation.
+The following Python code will create a sample dataset and show how to upload a file and print the returned ID. Make sure to save the IDs returned as you'll need them for the fine-tuning training job creation.
> [!IMPORTANT] > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
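The upload code itself isn't reproduced in this change summary, so here is a minimal sketch, assuming the OpenAI Python library configured for Azure; the resource name, key placeholder, file name, and API version are illustrative assumptions.

```python
import json
import openai

# Assumed Azure configuration values for illustration only; never hard-code real keys.
openai.api_type = "azure"
openai.api_base = "https://<YOUR-RESOURCE-NAME>.openai.azure.com/"
openai.api_version = "2022-06-01-preview"
openai.api_key = "<YOUR-AZURE-OPENAI-KEY>"

# Create a tiny sample dataset: one JSONL prompt-completion pair per line.
training_examples = [
    {"prompt": "When I go to the store, I want a", "completion": " banana."},
    {"prompt": "When I go to work, I want a", "completion": " coffee."},
]
with open("training.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")

# Upload the file and keep the returned ID for the fine-tuning job creation step.
upload_response = openai.File.create(file=open("training.jsonl", "rb"), purpose="fine-tune")
print("Training file ID:", upload_response["id"])
```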
while train_status not in ["succeeded", "failed"] or valid_status not in ["succe
After you've uploaded the training and (optional) validation files that you wish to use for your training job, you're ready to start the process. You can use the [Models API](../reference.md#models) to identify which models are fine-tunable.
-Once you have the model, you want to fine-tune you need to create a job. The following python code shows an example of how to create a new job:
+Once you have the model you want to fine-tune, you need to create a job. The following Python code shows an example of how to create a new job:
```python create_args = {
az cognitiveservices account deployment create
## Use a fine-tuned model
-Once your model has been deployed, you can use it like any other model. Reference the deployment name you specified in the previous step. You can use either the REST API or python SDK and can continue to use all the other Completions parameters like temperature, frequency_penalty, presence_penalty, etc., on these requests to fine-tuned models.
+Once your model has been deployed, you can use it like any other model. Reference the deployment name you specified in the previous step. You can use either the REST API or Python SDK and can continue to use all the other Completions parameters like temperature, frequency_penalty, presence_penalty, etc., on these requests to fine-tuned models.
```python print('Sending a test completion job')
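# The lines below are a hedged sketch rather than the article's exact code: with the
# OpenAI Python library configured for Azure (api_type = "azure"), a test completion
# against the fine-tuned deployment could look like this. The deployment name and
# prompt are placeholder assumptions.
deployment_name = "<YOUR-FINE-TUNED-DEPLOYMENT-NAME>"
completion = openai.Completion.create(
    engine=deployment_name,  # For Azure, "engine" refers to the deployment name.
    prompt="Once upon a time",
    temperature=0.7,
    max_tokens=20,
)
print(completion["choices"][0]["text"])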
That said, tweaking the hyperparameters used for fine-tuning can often lead to a
## Next Steps - Explore the full REST API Reference documentation to learn more about all the fine-tuning capabilities. You can find the [full REST documentation here](../reference.md).-- Explore more of the [python SDK operations here](https://github.com/openai/openai-python/blob/main/examples/azure/finetuning.ipynb).
+- Explore more of the [Python SDK operations here](https://github.com/openai/openai-python/blob/main/examples/azure/finetuning.ipynb).
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
In the following sections, you'll use the Azure CLI to assign roles, and obtain
- An Azure subscription - Access granted to service in the desired Azure subscription. - Azure CLI. [Installation Guide](/cli/azure/install-azure-cli)-- The following python libraries: os, requests, json
+- The following Python libraries: os, requests, json
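As a preview of where the pieces fit together, here is a minimal Python sketch, assuming you've already obtained a bearer token with `az account get-access-token` as described later in this article; the resource name, deployment name, API version, and request body are placeholder assumptions.

```python
import requests

# Placeholder values for illustration; the token comes from the Azure CLI
# (az account get-access-token --resource https://cognitiveservices.azure.com).
access_token = "<BEARER-TOKEN-FROM-AZURE-CLI>"
resource_endpoint = "https://<YOUR-RESOURCE-NAME>.openai.azure.com"
deployment_name = "<YOUR-DEPLOYMENT-NAME>"

url = (
    f"{resource_endpoint}/openai/deployments/{deployment_name}"
    "/completions?api-version=2022-06-01-preview"
)
response = requests.post(
    url,
    # An Azure AD bearer token replaces the api-key header used with key-based auth.
    headers={"Authorization": f"Bearer {access_token}", "Content-Type": "application/json"},
    json={"prompt": "Hello world", "max_tokens": 5},
)
print(response.json())
```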
## Sign into the Azure CLI
cognitive-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/work-with-code.md
When you show Codex the database schema, it's able to make an informed guess abo
### Specify the programming language
-Codex understands dozens of different programming languages. Many share similar conventions for comments, functions and other programming syntax. By specifying the language and what version in a comment, Codex is better able to provide a completion for what you want. That said, Codex is fairly flexible with style and syntax. Here's an example for R and python.
+Codex understands dozens of different programming languages. Many share similar conventions for comments, functions and other programming syntax. By specifying the language and what version in a comment, Codex is better able to provide a completion for what you want. That said, Codex is fairly flexible with style and syntax. Here's an example for R and Python.
```r # R language
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
The Azure OpenAI service provides two methods for authentication. You can use e
The service APIs are versioned using the ```api-version``` query parameter. All versions follow the YYYY-MM-DD date structure, with a -preview suffix for a preview service. For example: ```
-POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2021-11-01-preview
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-06-01-preview
```
-We currently have the following versions available: ```2022-03-01-preview``` and ```2021-11-01-preview```
+We currently have the following versions available: ```2022-06-01-preview```
## Completions With the Completions operation, the model will generate one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position.
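To make the URL pattern above concrete, the following is a hedged Python sketch of a Completions request authenticated with the `api-key` header; the resource name, deployment name, and body values are placeholders.

```python
import requests

# Placeholder values for illustration.
resource_name = "<YOUR_RESOURCE_NAME>"
deployment_name = "<YOUR_DEPLOYMENT_NAME>"
api_key = "<YOUR-AZURE-OPENAI-KEY>"

url = (
    f"https://{resource_name}.openai.azure.com/openai/deployments/"
    f"{deployment_name}/completions?api-version=2022-06-01-preview"
)
response = requests.post(
    url,
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={"prompt": "Once upon a time", "max_tokens": 5},
)
print(response.json()["choices"][0]["text"])
```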
GET https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_i
**Supported versions** -- `2022-03-01-preview`
+- `2022-06-01-preview`
#### Example request
communication-services Define Media Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/media-composition/define-media-composition.md
In this section you learned how to:
You may also want to: - Learn about [media composition concept](../../concepts/voice-video-calling/media-comp.md)
+ - Get started on [media composition](./get-started-media-composition.md)
+ <!-- -->
communication-services Get Started Media Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/media-composition/get-started-media-composition.md
+
+ Title: Azure Communication Services Quickstart - Create and manage a media composition
+
+description: In this quickstart, you'll learn how to create a media composition within your Azure Communication Services resource.
+++++ Last updated : 08/18/2022++++
+# Quickstart: Create and manage a media composition resource
++
+Get started with Azure Communication Services by using the Communication Services C# Media Composition SDK to compose and stream videos.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- The latest version of [.NET Core SDK](https://dotnet.microsoft.com/download/dotnet-core) for your operating system.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).
+
+### Prerequisite check
+
+- In a terminal or command window, run the `dotnet` command to check that the .NET SDK is installed.
+
+## Set up the application environment
+
+To set up an environment for using media composition, take the steps in the following sections.
+
+### Create a new C# application
+
+1. In a console window, such as cmd, PowerShell, or Bash, use the `dotnet new` command to create a new console app with the name `MediaCompositionQuickstart`. This command creates a simple "Hello World" C# project with a single source file, **Program.cs**.
+
+ ```console
+ dotnet new console -o MediaCompositionQuickstart
+ ```
+
+1. Change your directory to the newly created app folder and use the `dotnet build` command to compile your application.
+
+ ```console
+ cd MediaCompositionQuickstart
+ dotnet build
+ ```
+
+### Install the package
+
+1. While still in the application directory, install the Azure Communication Services MediaComposition SDK for .NET package by using the following command.
+
+ ```console
+ dotnet add package Azure.Communication.MediaComposition --version 1.0.0-beta.1
+ ```
+
+1. Add `using` directives to the top of **Program.cs** to include the `Azure.Communication` namespaces.
+
+ ```csharp
+ using System;
+ using System.Collections.Generic;
+
+ using Azure;
+ using Azure.Communication;
+ using Azure.Communication.MediaComposition;
+ ```
+
+## Authenticate the media composition client
+
+Open **Program.cs** in a text editor and replace the body of the `Main` method with code to initialize a `MediaCompositionClient` with your connection string. The `MediaCompositionClient` will be used to create and manage media composition objects.
+
+ You can find your Communication Services resource connection string in the Azure portal. For more information on connection strings, see [this page](../create-communication-resource.md#access-your-connection-strings-and-service-endpoints).
++
+```csharp
+// Find your Communication Services resource in the Azure portal
+var connectionString = "<connection_string>";
+var mediaCompositionClient = new MediaCompositionClient(connectionString);
+```
+
+## Create a media composition
+
+Create a new media composition by defining the `inputs`, `layout`, `outputs`, and a user-friendly `mediaCompositionId`. For more information on how to define the values, see [this page](./define-media-composition.md). These values are passed into the `CreateAsync` function exposed on the client. The code snippet below shows an example of defining a simple two-by-two grid layout:
+
+```csharp
+var layout = new GridLayout(
+ rows: 2,
+ columns: 2,
+ inputIds: new List<List<string>>
+ {
+ new List<string> { "Jill", "Jack" }, new List<string> { "Jane", "Jerry" }
+ })
+ {
+ Resolution = new(1920, 1080)
+ };
+
+var inputs = new Dictionary<string, MediaInput>()
+{
+ ["Jill"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("f3ba9014-6dca-4456-8ec0-fa03cfa2b7b7"),
+ call: "teamsMeeting")
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ },
+ ["Jack"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("fa4337b5-f13a-41c5-a34f-f2aa46699b61"),
+ call: "teamsMeeting")
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ },
+ ["Jane"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("2dd69470-dc25-49cf-b5c3-f562f08bf3b2"),
+ call: "teamsMeeting"
+ )
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ },
+ ["Jerry"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("30e29fde-ac1c-448f-bb34-0f3448d5a677"),
+ call: "teamsMeeting")
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ },
+ ["teamsMeeting"] = new TeamsMeetingInput(teamsJoinUrl: "https://teamsJoinUrl")
+};
+
+var outputs = new Dictionary<string, MediaOutput>()
+{
+ ["acsGroupCall"] = new GroupCallOutput("d12d2277-ffec-4e22-9979-8c0d8c13d193")
+};
+
+var mediaCompositionId = "twoByTwoGridLayout";
+var response = await mediaCompositionClient.CreateAsync(mediaCompositionId, layout, inputs, outputs);
+```
+
+You can use the `mediaCompositionId` to view or update the properties of a media composition object. Therefore, it is important to keep track of and persist the `mediaCompositionId` in your storage medium of choice.
+
+## Get properties of an existing media composition
+
+Retrieve the details of an existing media composition by referencing the `mediaCompositionId`.
+
+```C# Snippet:GetMediaComposition
+var gridMediaComposition = await mediaCompositionClient.GetAsync(mediaCompositionId);
+```
+
+## Updates
+
+Updating the `layout` of a media composition can happen on-the-fly as the media composition is running. However, `input` updates while the media composition is running are not supported. The media composition will need to be stopped and restarted before any changes to the inputs are applied.
+
+### Update layout
+
+To update the `layout`, pass in the new `layout` object and the `mediaCompositionId`. For example, we can update the grid layout to an auto-grid layout as shown in the following snippet:
+
+```csharp
+var layout = new AutoGridLayout(new List<string>() { "teamsMeeting" })
+{
+ Resolution = new(720, 480),
+};
+
+var response = await mediaCompositionClient.UpdateLayoutAsync(mediaCompositionId, layout);
+```
+
+### Upsert or remove inputs
+
+To upsert inputs in the media composition object, use the `UpsertInputsAsync` function exposed on the client.
+
+```csharp
+var inputsToUpsert = new Dictionary<string, MediaInput>()
+{
+ ["James"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("f3ba9014-6dca-4456-8ec0-fa03cfa2b70p"),
+ call: "teamsMeeting"
+ )
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ }
+};
+
+var response = await mediaCompositionClient.UpsertInputsAsync(mediaCompositionId, inputsToUpsert);
+```
+
+You can also explicitly remove inputs from the list.
+```csharp
+var inputIdsToRemove = new List<string>()
+{
+ "Jane", "Jerry"
+};
+var response = await mediaCompositionClient.RemoveInputsAsync(mediaCompositionId, inputIdsToRemove);
+```
+
+### Upsert or remove outputs
+
+To upsert outputs, you can use the `UpsertOutputsAsync` function from the client.
+```csharp
+var outputsToUpsert = new Dictionary<string, MediaOutput>()
+{
+ ["youtube"] = new RtmpOutput("key", new(1920, 1080), "rtmp://a.rtmp.youtube.com/live2")
+};
+
+var response = await mediaCompositionClient.UpsertOutputsAsync(mediaCompositionId, outputsToUpsert);
+```
+
+You can remove outputs by following the snippet below:
+```csharp
+var outputIdsToRemove = new List<string>()
+{
+ "acsGroupCall"
+};
+var response = await mediaCompositionClient.RemoveOutputsAsync(mediaCompositionId, outputIdsToRemove);
+```
+
+## Start running a media composition
+
+After defining the media composition with the correct properties, you can start composing the media by calling the `StartAsync` function using the `mediaCompositionId`.
+
+```csharp
+var compositionStreamState = await mediaCompositionClient.StartAsync(mediaCompositionId);
+```
+
+## Stop running a media composition
+
+To stop a media composition, call the `StopAsync` function using the `mediaCompositionId`.
+
+```csharp
+var compositionStreamState = await mediaCompositionClient.StopAsync(mediaCompositionId);
+```
+
+## Delete a media composition
+
+If you wish to delete a media composition, you may issue a delete request:
+```csharp
+await mediaCompositionClient.DeleteAsync(mediaCompositionId);
+```
+
+## Object model
+
+The table below lists the main properties of media composition objects:
+
+| Name | Description |
+|--|-|
+| `mediaCompositionId` | Media composition identifier that can be a user-friendly string. Must be unique within a Communication Services resource. |
+| `layout` | Specifies how the media sources will be composed into a single frame. |
+| `inputs` | Defines which media sources will be used in the layout composition. |
+| `outputs` | Defines where to send the composed streams to.|
+
+## Next steps
+
+In this section you learned how to:
+> [!div class="checklist"]
+> - Create a new media composition
+> - Get the properties of a media composition
+> - Update layout
+> - Upsert and remove inputs
+> - Upsert and remove outputs
+> - Start and stop a media composition
+> - Delete a media composition
+
+You may also want to:
+ - Learn about [media composition concept](../../concepts/voice-video-calling/media-comp.md)
+ - Learn about [how to define a media composition](./define-media-composition.md)
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
Title: Connect to SQL databases
+ Title: Connect to SQL databases from workflows
description: Connect to SQL databases from workflows in Azure Logic Apps. ms.suite: integration Previously updated : 06/08/2022 Last updated : 08/19/2022 tags: connectors
-# Connect to a SQL database from workflows in Azure Logic Apps
+# Connect to an SQL database from workflows in Azure Logic Apps
This article shows how to access your SQL database from a workflow in Azure Logic Apps with the SQL Server connector. You can then create automated workflows that run when triggered by events in your SQL database or in other systems and run actions to manage your SQL data and resources.
The SQL Server connector has different versions, based on [logic app type and ho
| Logic app | Environment | Connector version | |--|-|-|
-| **Consumption** | Multi-tenant Azure Logic Apps | [Managed connector - Standard class](managed.md). For operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). |
-| **Consumption** | Integration service environment (ISE) | [Managed connector - Standard class](managed.md) and ISE version. For operations, managed connector limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). For ISE-versioned limits, review the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits), not the managed connector's message limits. |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). For managed connector operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql/). <br><br>The built-in connector differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. This action can directly access Azure virtual networks with a connection string and doesn't need the on-premises data gateway. <br><br>For built-in connector operations, limits, and other information, review the [SQL Server built-in connector reference](#built-in-connector-operations). |
-||||
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector (Standard class). For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql). <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector (Standard class) and ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Standard class) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version doesn't have triggers. You can use either an SQL managed connector trigger or a different trigger. <br><br>- The built-in version connects directly to an SQL server and database requiring only a connection string. You don't need the on-premises data gateway. <br><br>- The built-in version can directly access Azure virtual networks. You don't need the on-premises data gateway.<br><br>For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql/) <br>- [SQL Server built-in connector reference](#built-in-connector-operations) section later in this article <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
## Limitations
For more information, review the [SQL Server managed connector reference](/conne
The SQL Server connector requires that your tables contain data so that the connector operations can return results when called. For example, if you use Azure SQL Database, you can use the included sample databases to try the SQL Server connector operations.
-* The information required to create a SQL database connection, such as your SQL server and database names. If you're using Windows Authentication or SQL Server Authentication to authenticate access, you also need your user name and password. You can usually find this information in the connection string.
+* The information required to create an SQL database connection, such as your SQL server and database name. If you're using Windows Authentication or SQL Server Authentication to authenticate access, you also need your user name and password. You can usually find this information in the connection string.
- > [!NOTE]
+ > [!IMPORTANT]
>
- > If you use a SQL Server connection string that you copied directly from the Azure portal,
+ > If you use an SQL Server connection string that you copied directly from the Azure portal,
> you have to manually add your password to the connection string.
- * For a SQL database in Azure, the connection string has the following format:
+ * For an SQL database in Azure, the connection string has the following format:
`Server=tcp:{your-server-name}.database.windows.net,1433;Initial Catalog={your-database-name};Persist Security Info=False;User ID={your-user-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;`
For more information, review the [SQL Server managed connector reference](/conne
* Standard logic app workflow
- You can use the SQL Server built-in connector, which requires a connection string. To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
+ You can use the SQL Server built-in connector, which requires a connection string. The built-in connector currently supports only SQL Server Authentication. You can adjust connection pooling by specifying parameters in the connection string. For more information, review [Connection Pooling](/dotnet/framework/data/adonet/connection-pooling).
+
+ To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
-For other connector requirements, review [SQL Server managed connector reference](/connectors/sql/).
+ For other connector requirements, review [SQL Server managed connector reference](/connectors/sql/).
<a name="add-sql-trigger"></a>
The following steps use the Azure portal, but with the appropriate Azure Logic A
* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
-In this example, the logic app workflow starts with the [Recurrence trigger](../connectors/connectors-native-recurrence.md), and calls an action that gets a row from a SQL database.
+In this example, the logic app workflow starts with the [Recurrence trigger](../connectors/connectors-native-recurrence.md), and calls an action that gets a row from an SQL database.
### [Consumption](#tab/consumption)
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. Under the **Choose an operation** search box, select either of the following options:
- * **Built-in** when you want to use SQL Server built-in actions such as **Execute Query**
+ * **Built-in** when you want to use SQL Server [built-in actions](#built-in-connector-operations) such as **Execute query**
![Screenshot showing the Azure portal, workflow designer for Standard logic app, and designer search box with "Built-in" selected underneath.](./media/connectors-create-api-sqlazure/select-built-in-category-standard.png)
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. From the actions list, select the SQL Server action that you want.
- * Built-in actions
+ * [Built-in actions](#built-in-connector-operations)
- This example selects the only available built-in action named **Execute Query**.
+ This example selects the built-in action named **Execute query**.
- ![Screenshot showing the designer search box with "sql server" and "Built-in" selected underneath with the "Execute Query" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-execute-query-action-standard.png)
+ ![Screenshot showing the designer search box with "sql server" and "Built-in" selected underneath with the "Execute query" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-execute-query-action-standard.png)
- * Managed actions
+ * [Managed actions](/connectors/sql/#actions)
This example selects the action named **Get row**, which gets a single record.
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. Provide the [information for your connection](#create-connection). When you're done, select **Create**.
-1. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want.
+1. Provide the information required by your selected action.
- In this example, the table name is **SalesLT.Customer**.
+ The following example continues with the managed action named **Get row**. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In this example, the table name is **SalesLT.Customer**. In the **Row id** property, enter the ID for the record that you want.
- ![Screenshot showing Standard workflow designer and "Get row" action with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-standard.png)
+ ![Screenshot showing Standard workflow designer and managed action "Get row" with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-standard.png)
- This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions. For example, such actions might create a file, include the fields from the returned row, and store the file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+ This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions. For example, such actions might create a file, include the fields from the returned row, and store the file in a cloud storage account. To learn about other available actions for this connector, review the [managed connector's reference page](/connectors/sql/).
1. When you're done, save your workflow.
In the connection information box, complete the following steps:
| **Service principal (Azure AD application)** | - Supported with the SQL Server managed connector. <br><br>- Requires an Azure AD application and service principal. For more information, see [Create an Azure AD application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). | | **Logic Apps Managed Identity** | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). | | [**Azure AD Integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires a valid managed identity in Azure Active Directory (Azure AD) that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](/azure/azure-sql/database/authentication-aad-overview) |
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector, SQL Server built-in connector, and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
The following examples show how the connection information box might appear if you select **Azure AD Integrated** authentication.
In the connection information box, complete the following steps:
| **Server name** | Yes | The address for your SQL server, for example, **Fabrikam-Azure-SQL.database.windows.net** | | **Database name** | Yes | The name for your SQL database, for example, **Fabrikam-Azure-SQL-DB** | | **Table name** | Yes | The table that you want to use, for example, **SalesLT.Customer** |
- ||||
> [!TIP]
+ >
> To provide your database and table information, you have these options: > > * Find this information in your database's connection string. For example, in the Azure portal, find and open your database. On the database menu, select either **Connection strings** or **Properties**, where you can find the following string:
In the connection information box, complete the following steps:
| Authentication | Description | |-|-|
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector, SQL Server built-in connector, and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
| [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Supported with the SQL Server managed connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid Windows user name and password to confirm your identity through your Windows account. <br><br>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication). |
- |||
1. Select or provide the following values for your SQL database:
In the connection information box, complete the following steps:
| **Password** | Yes | Your password for the SQL server and database | | **Subscription** | Yes, for Windows authentication | The Azure subscription for the data gateway resource that you previously created in Azure | | **Connection Gateway** | Yes, for Windows authentication | The name for the data gateway resource that you previously created in Azure <br><br><br><br>**Tip**: If your gateway doesn't appear in the list, check that you correctly [set up your gateway](../logic-apps/logic-apps-gateway-connection.md). |
- |||
> [!TIP] > You can find this information in your database's connection string:
When you call a stored procedure by using the SQL Server connector, the returned
> [!NOTE] >
- > If you get an error that Azure Logic Apps can't generate a schema,
- > check that your sample output's syntax is correctly formatted.
- > If you still can't generate the schema, in the **Schema** box,
- > manually enter the schema.
+ > If you get an error that Azure Logic Apps can't generate a schema, check that your
+ > sample output's syntax is correctly formatted. If you still can't generate the schema,
+ > in the **Schema** box, manually enter the schema.
1. When you're done, save your workflow. 1. To reference the JSON content properties, click inside the edit boxes where you want to reference those properties so that the dynamic content list appears. In the list, under the [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) heading, select the data tokens for the JSON content properties that you want.
+<a name="built-in-connector-app-settings"></a>
+
+## Built-in connector app settings
+
+In a Standard logic app resource, the SQL Server built-in connector includes app settings that control various thresholds for performance, throughput, capacity, and so on. For example, you can change the query timeout value from 30 seconds. For more information, review [Reference for app settings - local.settings.json](../logic-apps/edit-app-settings-host-settings.md#reference-local-settings-json).
+ <a name="built-in-connector-operations"></a>
-## Built-in connector operations
+## SQL built-in connector operations
+
+The SQL Server built-in connector is available only for Standard logic app workflows and provides the following actions, but no triggers:
+
+| Action | Description |
+|--|-|
+| [**Delete rows**](#delete-rows) | Deletes and returns the table rows that match the specified **Where condition** value. |
+| [**Execute query**](#execute-query) | Runs a query on an SQL database. |
+| [**Execute stored procedure**](#execute-stored-procedure) | Runs a stored procedure on an SQL database. |
+| [**Get rows**](#get-rows) | Gets the table rows that match the specified **Where condition** value. |
+| [**Get tables**](#get-tables) | Gets all the tables from the database. |
+| [**Insert row**](#insert-row) | Inserts a single row in the specified table. |
+| [**Update rows**](#update-rows) | Updates the specified columns in all the table rows that match the specified **Where condition** value using the **Set columns** column names and values. |
+
+<a name="delete-rows"></a>
+
+### Delete rows
+
+Operation ID: `deleteRows`
+
+Deletes and returns the table rows that match the specified **Where condition** value.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Table name** | `tableName` | True | String | The name for the table |
+| **Where condition** | `columnValuesForWhereCondition` | True | Object | This object contains the column names and corresponding values used for selecting the rows to delete. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*, which also lets you specify single or specific rows to delete. |
+
+#### Returns
+| Name | Type |
+|||
+| **Result** | An array object that returns all the deleted rows. Each row contains the column name and the corresponding deleted value. |
+| **Result Item** | An array object that returns one deleted row at a time. A **For each** loop is automatically added to your workflow to iterate through the array. Each row contains the column name and the corresponding deleted value. |
-### Actions
+*Example*
-The SQL Server built-in connector has a single action.
+The following example shows sample parameter values for the **Delete rows** action:
-#### Execute Query
+**Sample values**
+
+| Parameter | JSON name | Sample value |
+|--|--|--|
+| **Table name** | `tableName` | tableName1 |
+| **Where condition** | `columnValuesForWhereCondition` | Key-value pairs: <br><br>- <*columnName1*>, <*columnValue1*> <br><br>- <*columnName2*>, <*columnValue2*> |
+
+**Parameters in the action's underlying JSON definition**
+
+```json
+"parameters": {
+ "tableName": "tableName1",
+ "columnValuesForWhereCondition": {
+ "columnName1": "columnValue1",
+ "columnName2": "columnValue2"
+ }
+},
+```
+
+<a name="execute-query"></a>
+
+### Execute query
Operation ID: `executeQuery`
-Runs a query against a SQL database.
+Runs a query on an SQL database.
-##### Parameters
+#### Parameters
| Name | Key | Required | Type | Description | ||--|-||-|
-| **Query** | `query` | True | Dynamic | The body for your query |
-| **Query Parameters** | `queryParameters` | False | Objects | The parameters for your query |
-||||||
+| **Query** | `query` | True | Dynamic | The body for your SQL query |
+| **Query parameters** | `queryParameters` | False | Objects | The parameters for your query. <br><br>**Note**: If the query requires input parameters, you must provide these parameters. |
-##### Returns
+#### Returns
-The outputs from this operation are dynamic.
+| Name | Type |
+|||
+| **Result** | An array object that returns all the query results. Each row contains the column name and the corresponding value. |
+| **Result Item** | An array object that returns one query result at a time. A **For each** loop is automatically added to your workflow to iterate through the array. Each row contains the column name and the corresponding value. |
-## Built-in connector app settings
+<a name="execute-stored-procedure"></a>
+
+### Execute stored procedure
+
+Operation ID: `executeStoredProcedure`
+
+Runs a stored procedure on an SQL database.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Procedure name** | `storedProcedureName` | True | String | The name for your stored procedure |
+| **Parameters** | `storedProcedureParameters` | False | Dynamic | The parameters for your stored procedure. <br><br>**Note**: If the stored procedure requires input parameters, you must provide these parameters. |
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | An object that contains the result sets array, return code, and output parameters |
+| **Result Result Sets** | An object array that contains all the result sets from the stored procedure, which might return zero, one, or multiple result sets. |
+| **Result Return Code** | An integer that represents the status code from the stored procedure |
+| **Result Stored Procedure Parameters** | An object that contains the final values of the stored procedure's output and input-output parameters |
+| **Status Code** | The status code from the **Execute stored procedure** operation |
+
+<a name="get-rows"></a>
+
+### Get rows
+
+Operation ID: `getRows`
+
+Gets the table rows that match the specified **Where condition** value.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Table name** | `tableName` | True | String | The name for the table |
+| **Where condition** | `columnValuesForWhereCondition` | False | Dynamic | This object contains the column names and corresponding values used for selecting the rows to get. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*, which also lets you specify single or specific rows to get. |
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | An array object that returns all the row results. |
+| **Result Item** | An array object that returns one row result at a time. A **For each** loop is automatically added to your workflow to iterate through the array. |
+
+*Example*
+
+The following example shows sample parameter values for the **Get rows** action:
+
+**Sample values**
+
+| Parameter | JSON name | Sample value |
+|--|--|--|
+| **Table name** | `tableName` | tableName1 |
+| **Where condition** | `columnValuesForWhereCondition` | Key-value pairs: <br><br>- <*columnName1*>, <*columnValue1*> <br><br>- <*columnName2*>, <*columnValue2*> |
+
+**Parameters in the action's underlying JSON definition**
+
+```json
+"parameters": {
+ "tableName": "tableName1",
+ "columnValuesForWhereCondition": {
+ "columnName1": "columnValue1",
+ "columnName2": "columnValue2"
+ }
+},
+```
+
+<a name="get-tables"></a>
+
+### Get tables
+
+Operation ID: `getTables`
+
+Gets a list of all the tables in the database.
+
+#### Parameters
+
+None.
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | An array object that contains the full names and display names for all tables in the database. |
+| **Result Display Name** | An array object that contains the display name for each table in the database. A **For each** loop is automatically added to your workflow to iterate through the array. |
+| **Result Full Name** | An array object that contains the full name for each table in the database. A **For each** loop is automatically added to your workflow to iterate through the array. |
+| **Result Item** | An array object that returns the full name and display name one at time for each table. A **For each** loop is automatically added to your workflow to iterate through the array. |
+
+<a name="insert-row"></a>
+
+### Insert row
+
+Operation ID: `insertRow`
+
+Inserts a single row in the specified table.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Table name** | `tableName` | True | String | The name for the table |
+| **Set columns** | `setColumns` | False | Dynamic | This object contains the column names and corresponding values to insert. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*. If the table has columns with default or autogenerated values, you can leave this field empty. |
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | The inserted row, including the names and values of any autogenerated, default, and null value columns. |
+
+<a name="update-rows"></a>
+
+### Update rows
+
+Operation ID: `updateRows`
+
+Updates the specified columns in all the table rows that match the specified **Where condition** value using the **Set columns** column names and values.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Table name** | `tableName` | True | String | The name for the table |
+| **Where condition** | `columnValuesForWhereCondition` | True | Dynamic | This object contains the column names and corresponding values for selecting the rows to update. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*, which also lets you specify single or specific rows to update. |
+| **Set columns** | `setColumns` | True | Dynamic | This object contains the column names and the corresponding values to use for the update. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*. |
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | An array object that returns all the columns for the updated rows. |
+| **Result Item** | An array object that returns one column at a time from the updated rows. A **For each** loop is automatically added to your workflow to iterate through the array. |
+
+*Example*
+
+The following example shows sample parameter values for the **Update rows** action:
+
+**Sample values**
+
+| Parameter | JSON name | Sample value |
+|--|--|--|
+| **Table name** | `tableName` | tableName1 |
+| **Where condition** | `columnValuesForWhereCondition` | Key-value pairs: <br><br>- <*columnName1*>, <*columnValue1*> <br><br>- <*columnName2*>, <*columnValue2*> |
+
+**Parameters in the action's underlying JSON definition**
-The SQL Server built-in connector includes app settings on your Standard logic app resource that control various thresholds for performance, throughput, capacity, and so on. For example, you can change the default timeout value for connector operations. For more information, review [Reference for app settings - local.settings.json](../logic-apps/edit-app-settings-host-settings.md#reference-local-settings-json).
+```json
+"parameters": {
+ "tableName": "tableName1",
+ "columnValuesForWhereCondition": {
+ "columnName1": "columnValue1",
+ "columnName2": "columnValue2"
+ }
+},
+```
## Troubleshoot problems
Connection problems can commonly happen, so to troubleshoot and resolve these ki
## Next steps
-* Learn about other [managed connectors for Azure Logic Apps](../connectors/apis-list.md)
+* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](built-in.md)
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/19/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Cosmos DB automatically takes backups of your data at regular intervals. For det
| Resource | Limit | | | |
+| Maximum number of databases per account | 100 |
| Maximum number of containers per account | 100 | | Maximum number of regions | 1 (Any Azure region) |
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/19/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/18/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/19/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/19/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
If your data store is located inside an on-premises network, an Azure virtual ne
If your data store is a managed cloud data service, you can use the Azure Integration Runtime. If the access is restricted to IPs that are approved in the firewall rules, you can add [Azure Integration Runtime IPs](azure-integration-runtime-ip-addresses.md) to the allowed list.
-The Snowflake account that is used for Source or Sink should have the necessary `USAGE` access on the Database and Read / Write access on Schema and the Tables/Views under it. In addition, it should also have `CREATE STAGE` on the schema to be able to create the External stage with SAS URI.
+The Snowflake account that is used for Source or Sink should have the necessary `USAGE` access on the database and read/write access on the schema and the tables/views under it. In addition, it should also have `CREATE STAGE` on the schema to be able to create the External stage with SAS URI.
The following Account properties values must be set
data-factory Connector Troubleshoot Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-blob-storage.md
Previously updated : 10/01/2021 Last updated : 08/12/2022
This article provides suggestions to troubleshoot common problems with the Azure
- **Cause**: Multiple concurrent writing requests occur, which causes conflicts on file content.
+## Error code: AzureBlobFailedToCreateContainer
+
+- **Message**: `Unable to create Azure Blob container. Endpoint: '%endpoint;', Container Name: '%containerName;'.`
+
+- **Cause**: This error can occur when copying data with an Azure Blob Storage account over the public endpoint.
+
+- **Recommendation**: For more information about connection errors in the public endpoint, see [Connection error in public endpoint](security-and-access-control-troubleshoot-guide.md#connection-error-in-public-endpoint).
+ ## Next steps For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-data-explorer.md
+
+ Title: Troubleshoot the Azure Data Explorer connector
+
+description: Learn how to troubleshoot issues with the Azure Data Explorer connector in Azure Data Factory and Azure Synapse Analytics.
++++ Last updated : 08/12/2022++++
+# Troubleshoot the Azure Data Explorer connector in Azure Data Factory and Azure Synapse
++
+This article provides suggestions to troubleshoot common problems with the Azure Data Explorer connector in Azure Data Factory and Azure Synapse.
+
+## Error code: KustoMappingReferenceHasWrongKind
+
+- **Message**: `Mapping reference should be of kind 'Csv'. Mapping reference: '%reference;'. Kind '%kind;'.`
+
+- **Cause**: The ingestion mapping reference isn't of the 'Csv' kind.
+
+- **Recommendation**: Create a CSV ingestion mapping reference.
+
+## Error code: KustoWriteFailed
+
+- **Message**: `Write to Kusto failed with following error: '%message;'.`
+
+- **Cause**: An incorrect configuration, or transient errors that occur when the sink reads data from the source.
+
+- **Recommendation**: For transient failures, set retries for the activity. For permanent failures, check your configuration and contact support.
+
+## Next steps
+
+For more troubleshooting help, try these resources:
+
+- [Connector troubleshooting guide](connector-troubleshoot-guide.md)
+- [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
+- [Data Factory feature requests](/answers/topics/azure-data-factory.html)
+- [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
+- [Microsoft Q&A page](/answers/topics/azure-data-factory.html)
+- [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
+- [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Connector Troubleshoot Azure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-data-lake.md
Previously updated : 10/13/2021 Last updated : 08/10/2022
This article provides suggestions to troubleshoot common problems with the Azure
| If Azure Data Lake Storage Gen2 throws error indicating some operation failed.| Check the detailed error message thrown by Azure Data Lake Storage Gen2. If the error is a transient failure, retry the operation. For further help, contact Azure Storage support, and provide the request ID in error message. | | If the error message contains the string "Forbidden", the service principal or managed identity you use might not have sufficient permission to access Azure Data Lake Storage Gen2. | To troubleshoot this error, see [Copy and transform data in Azure Data Lake Storage Gen2](./connector-azure-data-lake-storage.md#service-principal-authentication). | | If the error message contains the string "InternalServerError", the error is returned by Azure Data Lake Storage Gen2. | The error might be caused by a transient failure. If so, retry the operation. If the issue persists, contact Azure Storage support and provide the request ID from the error message. |
+ | If the error message is `Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host`, your integration runtime has a network issue in connecting to Azure Data Lake Storage Gen2. | In the firewall rule setting of Azure Data Lake Storage Gen2, make sure Azure Data Factory IP addresses are in the allowed list. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md). |
+ | If the error message is `This endpoint does not support BlobStorageEvents or SoftDelete`, you are using an Azure Data Lake Storage Gen2 linked service to connect to an Azure Blob Storage account that enables Blob storage events or soft delete. | Try the following options:<br>1. If you still want to use an Azure Data Lake Storage Gen2 linked service, upgrade your Azure Blob Storage to Azure Data Lake Storage Gen2. For more information, see [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](../storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md).<br>2. Switch your linked service to Azure Blob Storage.<br>3. Disable Blob storage events or soft delete in your Azure Blob Storage account. |
### Request to Azure Data Lake Storage Gen2 account caused a timeout error
data-factory Connector Troubleshoot Ftp Sftp Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-ftp-sftp-http.md
Previously updated : 07/29/2022 Last updated : 08/12/2022
This article provides suggestions to troubleshoot common problems with the FTP,
- **Recommendation**: Check the HTTP status code in the error message, and fix the remote server issue.
+### Error code: HttpSourceUnsupportedStatusCode
+
+- **Message**: `Http source doesn't support HTTP Status Code '%code;'.`
+
+- **Cause**: This error occurs when Azure Data Factory requests the HTTP source but receives an unexpected status code.
+
+- **Recommendation**: For more information about HTTP status codes, see this [document](/troubleshoot/developer/webapps/iis/www-administration-management/http-status-code).
+ ## Next steps For more troubleshooting help, try these resources:
data-factory Data Access Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-access-strategies.md
This should work in many scenarios, and we do understand that a unique Static IP
For more information about supported network security mechanisms on data stores in Azure Integration Runtime and Self-hosted Integration Runtime, see below two tables. * **Azure Integration Runtime**
- | Data Stores | Supported Network Security Mechanism on Data Stores | Private Link | Trusted Service | Static IP range | Service Tags | Allow Azure Services |
+ | Data Stores | Supported Network Security Mechanism on Data Stores | Private Link | Trusted Service | Static IP range | Service Tags | Allow Azure Services |
||-||--|--|-|--| | Azure PaaS Data stores | Azure Cosmos DB | Yes | - | Yes | - | Yes | | | Azure Data Explorer | - | - | Yes* | Yes* | - |
For more information about supported network security mechanisms on data stores
| | Azure SQL DB, Azure Synapse Analytics), SQL Ml | Yes (only Azure SQL DB/DW) | - | Yes | - | Yes | | | Azure Key Vault (for fetching secrets/ connection string) | yes | Yes | Yes | - | - | | Other PaaS/ SaaS Data stores | AWS S3, SalesForce, Google Cloud Storage, etc. | - | - | Yes | - | - |
+ | | Snowflake | Yes | - | Yes | - | - |
| Azure IaaS | SQL Server, Oracle, etc. | - | - | Yes | Yes | - |
- | On-premises IaaS | SQL Server, Oracle, etc. | - | - | Yes | - | - |
-
+ | On-premises IaaS | SQL Server, Oracle, etc. | - | - | Yes | - | - |
+
+
**Applicable only when Azure Data Explorer is virtual network injected, and IP range can be applied on NSG/ Firewall.* * **Self-hosted Integration Runtime (in VNet/on-premises)**
data-factory Scenario Ssis Migration Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-rules.md
Connection that contains host name may fail, typically because the Azure virtual
You can use below options for SSIS Integration runtime to access these resources: -- [Join Azure-SSIS IR to a virtual network that connects to on-premise sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network)
+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network)
- Migrate your data to Azure and use Azure resource endpoint. - Use Managed Identity authentication if moving to Azure resources.-- [Use self-hosted IR to connect on-premise sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
+- [Use self-hosted IR to connect on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
### [1002]Connection with absolute or UNC path might not be accessible
You can use below options for SSIS Integration runtime to access these resources
- [Change to %TEMP%](/azure/data-factory/ssis-azure-files-file-shares) - [Migrate your files to Azure Files](/azure/data-factory/ssis-azure-files-file-shares)-- [Join Azure-SSIS IR to a virtual network that connects to on-premise sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).-- [Use self-hosted IR to connect on-premise sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).
+- [Use self-hosted IR to connect on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
### [1003]Connection with Windows authentication may fail
Recommendation
You can use below options for SSIS Integration runtime to launch your executable(s): - [Migrate your executable(s) to Azure Files](/azure/data-factory/ssis-azure-files-file-shares).-- [Join Azure-SSIS IR to a virtual network](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network) that connects to on-premise sources.
+- [Join Azure-SSIS IR to a virtual network](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network) that connects to on-premises sources.
- If necessary, [customize setup script to install your executable(s)](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) in advance when starting IR. ### [4001]Absolute or UNC configuration path is discovered in package configuration
Recommendation
You can use below options for SSIS Integration runtime to access these resources: - [Migrate your files to Azure Files](/azure/data-factory/ssis-azure-files-file-shares)-- [Join Azure-SSIS IR to a virtual network that connects to on-premise sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).-- [Use self-hosted IR to connect on-premise sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).
+- [Use self-hosted IR to connect on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
### [4002]Registry entry is discovered in package configuration
data-factory Security And Access Control Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/security-and-access-control-troubleshoot-guide.md
Previously updated : 02/07/2022 Last updated : 08/15/2022
For example: The Azure Blob Storage sink was using Azure IR (public, not Managed
` <LogProperties><Text>Invoke callback url with req:
-"ErrorCode=UserErrorFailedToCreateAzureBlobContainer,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Unable to create Azure Blob container. Endpoint: XXXXXXX/, Container Name: test.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=Unable to connect to the remote server,Source=Microsoft.WindowsAzure.Storage,''Type=System.Net.WebException,Message=Unable to connect to the remote server,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond public ip:443,Source=System,'","Details":null}}</Text></LogProperties>.
+"ErrorCode=AzureBlobFailedToCreateContainer,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Unable to create Azure Blob container. Endpoint: XXXXXXX/, Container Name: test.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=Unable to connect to the remote server,Source=Microsoft.WindowsAzure.Storage,''Type=System.Net.WebException,Message=Unable to connect to the remote server,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond public ip:443,Source=System,'","Details":null}}</Text></LogProperties>.
` #### Cause
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
This quickstart walks you through the steps to create an Azure DNS Private Resolver (Public Preview) using the Azure portal. If you prefer, you can complete this quickstart using [Azure PowerShell](private-dns-getstarted-powershell.md).
-Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM based DNS servers. You no longer need to provision IaaS based solutions on your virtual networks to resolve names registered on Azure private DNS zones. You can configure conditional forwarding of domains back to on-premises, multi-cloud and public DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
+Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM based DNS servers. You no longer need to provision IaaS based solutions on your virtual networks to resolve names registered on Azure private DNS zones. You can configure conditional forwarding of domains back to on-premises, multicloud and public DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
## Prerequisites
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 06/02/2022 Last updated : 08/18/2022
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 08/02/2022 Last updated : 08/17/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
The DNS query process when using an Azure DNS Private Resolver is summarized bel
The architecture for Azure DNS Private Resolver is summarized in the following figure. DNS resolution between Azure virtual networks and on-premises networks requires [Azure ExpressRoute](../expressroute/expressroute-introduction.md) or a [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-[ ![Azure DNS Private Resolver architecture](./media/dns-resolver-overview/resolver-architecture.png) ](./media/dns-resolver-overview/resolver-architecture.png#lightbox)
+[ ![Azure DNS Private Resolver architecture](./media/dns-resolver-overview/resolver-architecture.png) ](./media/dns-resolver-overview/resolver-architecture-highres.png#lightbox)
Figure 1: Azure DNS Private Resolver architecture
Azure DNS Private Resolver is available in the following regions:
## DNS resolver endpoints
+For more information about endpoints and rulesets, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+ ### Inbound endpoints An inbound endpoint enables name resolution from on-premises or other private locations via an IP address that is part of your private virtual network address space. To resolve your Azure private DNS zone from on-premises, enter the IP address of the inbound endpoint into your on-premises DNS conditional forwarder. The on-premises DNS conditional forwarder must have a network connection to the virtual network.
Virtual network links enable name resolution for virtual networks that are linke
## DNS forwarding rulesets
-A DNS forwarding ruleset is a group of DNS forwarding rules (up to 1,000) that can be applied to one or more outbound endpoints, or linked to one or more virtual networks. This is a 1:N relationship. Rulesets are associated with a specific outbound endpoint.
+A DNS forwarding ruleset is a group of DNS forwarding rules (up to 1,000) that can be applied to one or more outbound endpoints, or linked to one or more virtual networks. This is a 1:N relationship. Rulesets are associated with a specific outbound endpoint. For more information, see [DNS forwarding rulesets](private-resolver-endpoints-rulesets.md#dns-forwarding-rulesets).
## DNS forwarding rules
The following restrictions hold with respect to virtual networks:
Subnets used for DNS resolver have the following limitations: - A subnet must be a minimum of /28 address space or a maximum of /24 address space. - A subnet can't be shared between multiple DNS resolver endpoints. A single subnet can only be used by a single DNS resolver endpoint.-- All IP configurations for a DNS resolver inbound endpoint must reference the same subnet. Spanning multiple subnets in the IP configuration for a single DNS resolver inbound endpoint is not allowed.
+- All IP configurations for a DNS resolver inbound endpoint must reference the same subnet. Spanning multiple subnets in the IP configuration for a single DNS resolver inbound endpoint isn't allowed.
- The subnet used for a DNS resolver inbound endpoint must be within the virtual network referenced by the parent DNS resolver. ### Outbound endpoint restrictions
Outbound endpoints have the following limitations:
- IPv6 enabled subnets aren't supported in Public Preview. - ## Next steps * Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md).
+* Understand how to [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md) using the Azure DNS Private Resolver.
+* Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md)
+* Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers.
* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure. * [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
dns Private Resolver Endpoints Rulesets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md
+
+ Title: Azure DNS Private Resolver endpoints and rulesets
+description: In this article, understand the Azure DNS Private Resolver endpoints and rulesets
++++ Last updated : 08/16/2022+
+#Customer intent: As an administrator, I want to understand components of the Azure DNS Private Resolver.
++
+# Azure DNS Private Resolver endpoints and rulesets
+
+In this article, you'll learn about components of the [Azure DNS Private Resolver](dns-private-resolver-overview.md). Inbound endpoints, outbound endpoints, and DNS forwarding rulesets are discussed. Properties and settings of these components are described, and examples are provided for how to use them.
+
+> [!IMPORTANT]
+> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Inbound endpoints
+
+As the name suggests, inbound endpoints handle inbound DNS traffic to Azure. Inbound endpoints provide an IP address to forward DNS queries from on-premises and other locations outside your virtual network. DNS queries sent to the inbound endpoint are resolved using Azure DNS. Private DNS zones that are linked to the virtual network where the inbound endpoint is provisioned are resolved by the inbound endpoint.
+
+The IP address associated with an inbound endpoint is always part of the private virtual network address space where the private resolver is deployed. No other resources can exist in the same subnet with the inbound endpoint. The following screenshot shows an inbound endpoint with an IP address of 10.10.0.4 inside the subnet `snet-E-inbound` provisioned within a virtual network with address space of 10.10.0.0/16.
+
+![View inbound endpoints](./media/private-resolver-endpoints-rulesets/east-inbound-endpoint.png)
+
+## Outbound endpoints
+
+Outbound endpoints egress from Azure and can be linked to [DNS Forwarding Rulesets](#dns-forwarding-rulesets).
+
+Outbound endpoints are also part of the private virtual network address space where the private resolver is deployed. An outbound endpoint is associated with a subnet, but isn't provisioned with an IP address like the inbound endpoint. No other resources can exist in the same subnet with the outbound endpoint. The following screenshot shows an outbound endpoint inside the subnet `snet-E-outbound`.
+
+![View outbound endpoints](./media/private-resolver-endpoints-rulesets/east-outbound-endpoint.png)
+
+## DNS forwarding rulesets
+
+DNS forwarding rulesets enable you to specify one or more custom DNS servers to answer queries for specific DNS namespaces. The individual [rules](#rules) in a ruleset determine how these DNS names are resolved. Rulesets can also be linked to one or more virtual networks, enabling resources in the vnets to use the forwarding rules that you configure.
+
+Rulesets have the following associations:
+- A single ruleset can be associated with multiple outbound endpoints.
+- A ruleset can have up to 1000 DNS forwarding rules.
+- A ruleset can be linked to any number of virtual networks in the same region.
+
+A ruleset can't be linked to a virtual network in another region.
+
+When you link a ruleset to a virtual network, resources within that virtual network will use the DNS forwarding rules enabled in the ruleset. The linked virtual network must peer with the virtual network where the outbound endpoint exists. This configuration is typically used in a hub and spoke design, with spoke vnets peered to a hub vnet that has one or more private resolver endpoints. In this hub and spoke scenario, the spoke vnet does not need to be linked to the private DNS zone in order to resolve resource records in the zone. In this case, the forwarding ruleset rule for the private zone sends queries to the hub vnet's inbound endpoint. For example: **azure.contoso.com** to **10.10.0.4**.
+
+The following screenshot shows a DNS forwarding ruleset linked to two virtual networks: a hub vnet: **myeastvnet**, and a spoke vnet: **myeastspoke**.
+
+![View ruleset links](./media/private-resolver-endpoints-rulesets/ruleset-links.png)
+
+Virtual network links for DNS forwarding rulesets enable resources in vnets to use forwarding rules when resolving DNS names. Vnets that are linked from a ruleset, but don't have their own private resolver, must have a peering connection to the vnet that contains the private resolver. The vnet with the private resolver must also be linked from any private DNS zones for which there are ruleset rules.
+
+For example, resources in the vnet `myeastspoke` can resolve records in the private DNS zone `azure.contoso.com` if:
+- The vnet `myeastspoke` peers with `myeastvnet`
+- The ruleset provisioned in `myeastvnet` is linked to `myeastspoke` and `myeastvnet`
+- A ruleset rule is configured and enabled in the linked ruleset to resolve `azure.contoso.com` using the inbound endpoint in `myeastvnet`
+
+### Rules
+
+DNS forwarding rules (ruleset rules) have the following properties:
+
+| Property | Description |
+| | |
+| Rule name | The name of your rule. The name must begin with a letter, and can contain only letters, numbers, underscores, and dashes. |
+| Domain name | The dot-terminated DNS namespace where your rule applies. The namespace must have either zero labels (for wildcard) or between 2 and 34 labels. For example, `contoso.com.` has two labels. |
+| Destination IP:Port | The forwarding destination. One or more IP addresses and ports of DNS servers that will be used to resolve DNS queries in the specified namespace. |
+| Rule state | The rule state: Enabled or disabled. If a rule is disabled, it's ignored. |
+
+If multiple rules are matched, the longest prefix match is used.
+
+For example, if you have the following rules:
+
+| Rule name | Domain name | Destination IP:Port | Rule state |
+| | | | |
+| Contoso | contoso.com. | 10.100.0.2:53 | Enabled |
+| AzurePrivate | azure.contoso.com. | 10.10.0.4:53 | Enabled |
+| Wildcard | . | 10.100.0.2:53 | Enabled |
+
+A query for `secure.store.azure.contoso.com` will match the **AzurePrivate** rule for `azure.contoso.com` and also the **Contoso** rule for `contoso.com`, but the **AzurePrivate** rule takes precedence because the prefix `azure.contoso` is longer than `contoso`.
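+
+The following PowerShell sketch is purely illustrative: it mimics the longest-match selection described above using the rule names and destinations from the example table. It isn't how the resolver service itself is implemented.
+
+```PowerShell
+# Illustration only: pick the enabled rule whose domain is the longest suffix of the query name.
+$rules = @(
+    @{ Name = 'Contoso';      Domain = 'contoso.com.';       Destination = '10.100.0.2:53' }
+    @{ Name = 'AzurePrivate'; Domain = 'azure.contoso.com.'; Destination = '10.10.0.4:53' }
+    @{ Name = 'Wildcard';     Domain = '.';                  Destination = '10.100.0.2:53' }
+)
+
+$query = 'secure.store.azure.contoso.com.'
+
+$match = $rules |
+    Where-Object { $_.Domain -eq '.' -or $query.EndsWith($_.Domain) } |
+    Sort-Object { $_.Domain.Length } -Descending |
+    Select-Object -First 1
+
+"Query $query matches rule '$($match.Name)' and forwards to $($match.Destination)"
+```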
+
+## Next steps
+
+* Review components, benefits, and requirements for [Azure DNS Private Resolver](dns-private-resolver-overview.md).
+* Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md).
+* Understand how to [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md) using the Azure DNS Private Resolver.
+* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md)
+* Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers.
+* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
dns Private Resolver Hybrid Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-hybrid-dns.md
+
+ Title: Resolve Azure and on-premises domains
+description: Configure Azure and on-premises DNS to resolve private DNS zones and on-premises domains
++++ Last updated : 08/18/2022+
+# Customer intent: As an administrator, I want to resolve on-premises domains in Azure and resolve Azure private zones on-premises.
++
+# Resolve Azure and on-premises domains
+
+This article provides guidance on how to configure hybrid DNS resolution by using an [Azure DNS Private Resolver](#azure-dns-private-resolver) with a [DNS forwarding ruleset](#dns-forwarding-ruleset).
+
+*Hybrid DNS resolution* is defined here as enabling Azure resources to resolve your on-premises domains, and on-premises DNS to resolve your Azure private DNS zones.
+
+> [!IMPORTANT]
+> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Azure DNS Private Resolver
+
+The [Azure DNS Private Resolver](dns-private-resolver-overview.md) is a service that can resolve on-premises DNS queries for Azure DNS private zones. Previously, it was necessary to [deploy a VM-based custom DNS resolver](/azure/hdinsight/connect-on-premises-network), or use non-Microsoft DNS, DHCP, and IPAM (DDI) solutions to perform this function.
+
+Benefits of using the Azure DNS Private Resolver service vs. VM-based resolvers or DDI solutions include:
+- Zero maintenance: Unlike VM or hardware based solutions, the private resolver doesn't require software updates, vulnerability scans, or security patching. The private resolver service is fully managed.
+- Cost reduction: Azure DNS Private Resolver is a multi-tenant service and can cost a fraction of the expense that is required to use and license multiple VM-based DNS resolvers.
+- High availability: The Azure DNS Private Resolver service has built-in high availability features. The service is [availability zone](/azure/availability-zones/az-overview) aware, thus ensuring that high availability and redundancy of your DNS solution can be accomplished with much less effort. For more information on how to configure DNS failover using the private resolver service, see [Tutorial: Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md).
+- DevOps friendly: Traditional DNS solutions are hard to integrate with DevOps workflows as these often require manual configuration for every DNS change. Azure DNS private resolver provides a fully functional ARM interface that can be easily integrated with DevOps workflows.
+
+## DNS forwarding ruleset
+
+A DNS forwarding ruleset is a group of rules that specify one or more custom DNS servers to answer queries for specific DNS namespaces. For more information, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+
+## Procedures
+
+The following procedures in this article are used to enable and test hybrid DNS:
+- [Create an Azure DNS private zone](#create-an-azure-dns-private-zone)
+- [Create an Azure DNS Private Resolver](#create-an-azure-dns-private-resolver)
+- [Configure an Azure DNS forwarding ruleset](#configure-an-azure-dns-forwarding-ruleset)
+- [Configure on-premises DNS conditional forwarders](#configure-on-premises-dns-conditional-forwarders)
+- [Demonstrate hybrid DNS](#demonstrate-hybrid-dns)
+
+## Create an Azure DNS private zone
+
+Create a private zone with at least one resource record to use for testing. The following quickstarts are available to help you create a private zone:
+- [Create a private zone - portal](private-dns-getstarted-portal.md)
+- [Create a private zone - PowerShell](private-dns-getstarted-powershell.md)
+- [Create a private zone - CLI](private-dns-getstarted-cli.md)
+
+In this article, the private zone **azure.contoso.com** and the resource record **test** are used. Autoregistration isn't required for the current demonstration.
+
+[ ![View resource records](./media/private-resolver-hybrid-dns/private-zone-records-small.png) ](./media/private-resolver-hybrid-dns/private-zone-records.png#lightbox)
+
+**Requirement**: You must create a virtual network link in the zone to the virtual network where you'll deploy your Azure DNS Private Resolver. In the example shown below, the private zone is linked to two vnets: **myeastvnet** and **mywestvnet**. At least one link is required.
+
+[ ![View zone links](./media/private-resolver-hybrid-dns/private-zone-links-small.png) ](./media/private-resolver-hybrid-dns/private-zone-links.png#lightbox)
+
+## Create an Azure DNS Private Resolver
+
+The following quickstarts are available to help you create a private resolver. These quickstarts walk you through creating a resource group, a virtual network, and Azure DNS Private Resolver. The steps to configure an inbound endpoint, outbound endpoint, and DNS forwarding ruleset are provided:
+- [Create a private resolver - portal](dns-private-resolver-get-started-portal.md)
+- [Create a private resolver - PowerShell](dns-private-resolver-get-started-powershell.md)
+
+ When you're finished, write down the IP address of the inbound endpoint for the Azure DNS Private Resolver, as shown below. In this case, the IP address is **10.10.0.4**. This IP address will be used later to configure on-premises DNS conditional forwarders.
+
+[ ![View endpoint IP address](./media/private-resolver-hybrid-dns/inbound-endpoint-ip-small.png) ](./media/private-resolver-hybrid-dns/inbound-endpoint-ip.png#lightbox)
+
+## Configure an Azure DNS forwarding ruleset
+
+Create a forwarding ruleset in the same region as your private resolver. The following example shows two rulesets. The **East US** region ruleset is used for the hybrid DNS demonstration.
+
+[ ![View ruleset region](./media/private-resolver-hybrid-dns/forwarding-ruleset-region-small.png) ](./media/private-resolver-hybrid-dns/forwarding-ruleset-region.png#lightbox)
+
+**Requirement**: You must create a virtual network link to the vnet where your private resolver is deployed. In the following example, two virtual network links are present. The link **myeastvnet-link** is created to a hub vnet where the private resolver is provisioned. There's also a virtual network link **myeastspoke-link** that provides hybrid DNS resolution in a spoke vnet that doesn't have its own private resolver. The spoke network is able to use the private resolver because it peers with the hub network. The spoke vnet link isn't required for the current demonstration.
+
+[ ![View ruleset links](./media/private-resolver-hybrid-dns/ruleset-links-small.png) ](./media/private-resolver-hybrid-dns/ruleset-links.png#lightbox)
+
+Next, create a rule in your ruleset for your on-premises domain. In this example, we use **contoso.com**. Set the destination IP address for your rule to be the IP address of your on-premises DNS server. In this example, the on-premises DNS server is at **10.100.0.2**. Verify that the rule is **Enabled**.
+
+[ ![View rules](./media/private-resolver-hybrid-dns/ruleset-rules-small.png) ](./media/private-resolver-hybrid-dns/ruleset-rules.png#lightbox)
+
+> [!NOTE]
+> Don't change the DNS settings for your virtual network to use the inbound endpoint IP address. Leave the default DNS settings.
+
+## Configure on-premises DNS conditional forwarders
+
+The procedure to configure on-premises DNS depends on the type of DNS server you're using. In the following example, a Windows DNS server at **10.100.0.2** is configured with a conditional forwarder for the private DNS zone **azure.contoso.com**. The conditional forwarder is set to forward queries to **10.10.0.4**, which is the inbound endpoint IP address for your Azure DNS Private Resolver. There's another IP address also configured here to enable DNS failover. For more information about enabling failover, see [Tutorial: Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md). For the purposes of this demonstration, only the **10.10.0.4** inbound endpoint is required.
+
+![View on-premises forwarding](./media/private-resolver-hybrid-dns/on-premises-forwarders.png)
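+
+If the on-premises DNS server is a Windows Server running the DNS Server role, the equivalent configuration can also be scripted. The following is a minimal sketch that uses the zone name and inbound endpoint IP address from this example (only the required 10.10.0.4 endpoint is shown; see the failover tutorial for the redundant configuration):
+
+```PowerShell
+# Sketch: on the on-premises Windows DNS server (10.100.0.2 in this example),
+# forward queries for the private zone to the resolver's inbound endpoint.
+Add-DnsServerConditionalForwarderZone -Name "azure.contoso.com" -MasterServers 10.10.0.4
+```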
+
+## Demonstrate hybrid DNS
+
+Using a VM located in the virtual network where the Azure DNS Private Resolver is provisioned, issue a DNS query for a resource record in your on-premises domain. In this example, a query is performed for the record **testdns.contoso.com**:
+
+![Verify Azure to on-premises](./media/private-resolver-hybrid-dns/azure-to-on-premises-lookup.png)
+
+The path for the query is: Azure DNS > inbound endpoint > outbound endpoint > ruleset rule for contoso.com > on-premises DNS (10.100.0.2). The DNS server at 10.100.0.2 is an on-premises DNS resolver, but it could also be an authoritative DNS server.
+
+Using an on-premises VM or device, issue a DNS query for a resource record in your Azure private DNS zone. In this example, a query is performed for the record **test.azure.contoso.com**:
+
+![Verify on-premises to Azure](./media/private-resolver-hybrid-dns/on-premises-to-azure-lookup.png)
+
+The path for this query is: client's default DNS resolver (10.100.0.2) > on-premises conditional forwarder rule for azure.contoso.com > inbound endpoint (10.10.0.4)
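+
+If the client is a Windows machine, you can reproduce both lookups with PowerShell instead of nslookup. This is a sketch that reuses the record names and IP addresses from this example:
+
+```PowerShell
+# From an Azure VM in the resolver's virtual network: resolve the on-premises record
+# using the default (Azure-provided) DNS, which forwards via the ruleset rule.
+Resolve-DnsName -Name testdns.contoso.com
+
+# From an on-premises client: resolve the Azure private zone record through the local
+# DNS server (10.100.0.2), which conditionally forwards azure.contoso.com to 10.10.0.4.
+Resolve-DnsName -Name test.azure.contoso.com -Server 10.100.0.2
+```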
+
+## Next steps
+* Review components, benefits, and requirements for [Azure DNS Private Resolver](dns-private-resolver-overview.md).
+* Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md).
+* Understand how to [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md) using the Azure DNS Private Resolver.
+* Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md)
+* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
dns Tutorial Dns Private Resolver Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-dns-private-resolver-failover.md
+
+ Title: Tutorial - Set up DNS failover using private resolvers
+description: A tutorial on how to configure regional failover using the Azure DNS Private Resolver
++++ Last updated : 08/18/2022+
+#Customer intent: As an administrator, I want to avoid having a single point of failure for DNS resolution.
++
+# Tutorial: Set up DNS failover using private resolvers
+
+This article details how to eliminate a single point of failure in your on-premises DNS services by using two or more Azure DNS private resolvers deployed across different regions. DNS failover is enabled by assigning a local resolver as your primary DNS and the resolver in an adjacent region as secondary DNS.
+
+> [!IMPORTANT]
+> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Resolve Azure Private DNS zones using on-premises conditional forwarders and Azure DNS private resolvers.
+> * Enable on-premises DNS failover for your Azure Private DNS zones.
+
+The following diagram shows the failover scenario discussed in this article.
+
+[ ![Azure DNS Private Resolver architecture](./media/tutorial-dns-private-resolver-failover/private-resolver-failover.png) ](./media/tutorial-dns-private-resolver-failover/private-resolver-failover-highres.png#lightbox)
+
+In this scenario, you have connections from two on-premises locations to two Azure hub vnets.
+- In the east region, the primary path is to the east vnet hub. You have a secondary connection to the west hub. The west region is configured in reverse.
+- Due to an Internet connectivity issue, the connection to one vnet (west) is temporarily broken.
+- Service is maintained in both regions due to the redundant design.
+
+The DNS resolution path is:
+1) Redundant on-premises DNS [conditional forwarders](#on-premise-forwarding) send DNS queries to inbound endpoints.
+2) [Inbound endpoints](#inbound-endpoints) receive DNS queries from on-premises.
+3) Outbound endpoints and DNS forwarding rulesets process DNS queries and return replies to your on-premises resources.
+
+Outbound endpoints and DNS forwarding rulesets aren't needed for the failover scenario, but are included here for completeness. Rulesets can be used to resolve on-premises domains from Azure. For more information, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md) and [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Two [Azure virtual networks](../virtual-network/quick-create-portal.md) in two regions
+- A [VPN](../vpn-gateway/tutorial-site-to-site-portal.md) or [ExpressRoute](../expressroute/expressroute-howto-circuit-portal-resource-manager.md) link from on-premises to each virtual network
+- An [Azure DNS Private Resolver](dns-private-resolver-get-started-portal.md) in each virtual network
+- An Azure [private DNS zone](private-dns-getstarted-portal.md) that is linked to each virtual network
+- An on-premises DNS server
+
+> [!NOTE]
+> In this tutorial, `azure.contoso.com` is an Azure private DNS zone. Replace `azure.contoso.com` with your private DNS zone name.
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+<a name="inbound-endpoints"></a>
+
+## Determine inbound endpoint IP addresses
+
+Write down the IP addresses assigned to the inbound endpoints of your DNS private resolvers. The IP addresses will be used to configure on-premises DNS forwarders.
+
+In this example, there are two virtual networks in two regions:
+- **myeastvnet** is in the East US region, assigned the address space 10.10.0.0/16
+- **mywestvnet** is in the West Central US region, assigned the address space 10.20.0.0/16
+
+1. Search for **DNS Private Resolvers** and select your private resolver from the first region. For example: **myeastresolver**.
+2. Under **Settings**, select **Inbound endpoints** and write down the **IP address** setting. For example: **10.10.0.4**.
+
+ ![View inbound endpoint](./media/tutorial-dns-private-resolver-failover/east-inbound-endpoint.png)
+
+3. Return to the list of **DNS Private Resolvers** and select a resolver from a different region. For example: **mywestresolver**.
+4. Under **Settings**, select **Inbound endpoints** and write down the **IP address** setting of this resolver. For example: **10.20.0.4**.
+
+## Verify private zone links
+
+To resolve DNS records in an Azure DNS private zone, the zone must be linked to the virtual network. In this example, the zone `azure.contoso.com` is linked to **myeastvnet** and **mywestvnet**. Links to other vnets can also be present.
+
+1. Search for **Private DNS zones** and select your private zone. For example: **azure.contoso.com**.
+2. Under **Settings**, select **Virtual network links** and verify that the vnets you used for inbound endpoints in the previous procedure are also listed under Virtual network. For example: **myeastvnet** and **mywestvnet**.
+
+ ![View vnet links](./media/tutorial-dns-private-resolver-failover/vnet-links.png)
+
+3. If one or more vnets aren't yet linked, you can add it here by selecting **Add**, providing a **Link name**, choosing your **Subscription**, and then choosing the **Virtual network**.
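+
+You can also check the links with Azure PowerShell. The following is a sketch that assumes the Az.PrivateDns module and a hypothetical resource group name:
+
+```PowerShell
+# Sketch: list the virtual network links on the private zone and the vnets they point to.
+Get-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "myResourceGroup" -ZoneName "azure.contoso.com" |
+    Select-Object Name, VirtualNetworkId
+```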
+
+> [!TIP]
+> You can also use peering to resolve records in private DNS zones. For more information, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+
+## Verify Azure DNS resolution
+
+Check that DNS settings for your virtual networks are set to Default (Azure-provided).
+
+1. Search for **Virtual networks** and select the first Vnet. For example: **myeastvnet**.
+2. Under **Settings**, select **DNS servers** and verify that **Default (Azure-provided)** is chosen.
+3. Select the next Vnet (ex: **mywestvnet**) and verify that **Default (Azure-provided)** is chosen.
+
+ > [!NOTE]
+ > Custom DNS settings can also be made to work, but this is not in scope for the current scenario.
+
+4. Search for **Private DNS zones** and select your private zone name. For example: **azure.contoso.com**.
+5. Create a test record in the zone by selecting **+ Record set** and adding a new A record. For example: **test**.
+
+ ![Create a test A record](./media/tutorial-dns-private-resolver-failover/test-record.png)
+
+6. Open a command prompt using an on-premises client and use nslookup to look up your test record using the first private resolver IP address that you wrote down (ex: 10.10.0.4). See the following example:
+
+ ```cmd
+ nslookup test.azure.contoso.com 10.10.0.4
+ ```
+ The query should return the IP address that you assigned to your test record.
+ ![Results of nslookup - east](./media/tutorial-dns-private-resolver-failover/nslookup-results-e.png)
+
+7. Repeat this nslookup query using the IP address that you wrote down for the second private resolver (ex: 10.20.0.4).
+
+ ![Results of nslookup - west](./media/tutorial-dns-private-resolver-failover/nslookup-results-w.png)
+
+ > [!NOTE]
+ > If DNS resolution for the private zone is not working, check that your on-premises links to the Azure Vnets are connected.
+
+<a name="on-premise-forwarding"></a>
+
+## Configure on-premises DNS forwarding
+
+Now that DNS resolution is working from on-premises to Azure using two different Azure DNS Private Resolvers, we can configure forwarding to use both of these addresses. This will enable redundancy in case one of the connections to Azure is interrupted. The procedure to configure forwarders will depend on the type of DNS server that you're using. The following example uses a Windows Server that is running the DNS Server role service and has an IP address of 10.100.0.2.
+
+ > [!NOTE]
+ > The DNS server that you use to configure forwarding should be a server that client devices on your network will use for DNS resolution. If the server you're configuring is not the default, you'll need to query its IP address directly (ex: nslookup test.azure.contoso.com 10.100.0.2) after forwarding is configured.
+
+1. Open an elevated Windows PowerShell prompt and issue the following command. Replace **azure.contoso.com** with the name of your private zone, and replace the IP addresses below with the IP addresses of your private resolvers.
+
+ ```PowerShell
+ Add-DnsServerConditionalForwarderZone -Name "azure.contoso.com" -MasterServers 10.20.0.4,10.10.0.4
+ ```
+2. If preferred, you can also use the DNS console to enter conditional forwarders. See the following example:
+
+ ![View DNS forwarders](./media/tutorial-dns-private-resolver-failover/forwarders.png)
+
+3. Now that forwarding is in place, issue the same DNS query that you used in the previous procedure. However, this time don't enter a destination IP address for the query. The query will use the client's default DNS server.
+
+ ![Results of nslookup](./media/tutorial-dns-private-resolver-failover/nslookup-results.png)
+
+## Demonstrate resiliency (optional)
+
+You can now demonstrate that DNS resolution works when one of the connections is broken.
+
+1. Interrupt connectivity from on-premises to one of your Vnets by disabling or disconnecting the interface. Verify that the connection doesn't automatically reconnect on-demand.
+2. Run the nslookup query using the private resolver from the Vnet that is no longer connected and verify that it fails (see below).
+3. Run the nslookup query using your default DNS server (configured with forwarders) and verify it still works due to the redundancy you enabled.
+
+ ![Results of nslookup - failover](./media/tutorial-dns-private-resolver-failover/nslookup-results-failover.png)
+
+## Next steps
+
+* Review components, benefits, and requirements for [Azure DNS Private Resolver](dns-private-resolver-overview.md).
+* Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md).
+* Understand how to [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md) using the Azure DNS Private Resolver.
+* Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+* Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers.
+* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
+
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
You can identify what category a given FQDN or URL is by using the **Web Categor
:::image type="content" source="media/premium-features/firewall-category-search.png" alt-text="Firewall category search dialog":::
+> [!IMPORTANT]
+> To use the **Web Category Check** feature, the user must have access to the Microsoft.Network/azureWebCategories/getwebcategory/action permission at the **subscription** level, not the resource group level.
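+
+One way to grant this permission is to clone a built-in role, add the action, and assign the custom role at the subscription scope. The following Azure PowerShell sketch shows the general pattern; the role name, user, and subscription ID are placeholders:
+
+```PowerShell
+# Sketch: create a custom role containing the web categories action and assign it
+# at the subscription scope. Replace the placeholders before running.
+$role = Get-AzRoleDefinition -Name "Reader"
+$role.Id = $null
+$role.Name = "Web Category Reader"
+$role.Description = "Can run Azure Firewall web category checks."
+$role.Actions.Add("Microsoft.Network/azureWebCategories/getwebcategory/action")
+$role.AssignableScopes.Clear()
+$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
+New-AzRoleDefinition -Role $role
+
+New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Web Category Reader" -Scope "/subscriptions/<subscription-id>"
+```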
+ ### Category change Under the **Web Categories** tab in **Firewall Policy Settings**, you can request a categorization change if you:
hdinsight General Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/general-guidelines.md
HDInsight can't depend on on-premises domain controllers or custom domain contro
### Properties synced from Azure AD to Azure AD DS
-* Azure AD connect syncs from on-premise to Azure AD.
+* Azure AD Connect syncs from on-premises to Azure AD.
* Azure AD DS syncs from Azure AD. Azure AD DS syncs objects from Azure AD periodically. The Azure AD DS blade on the Azure portal displays the sync status. During each stage of sync, unique properties may get into conflict and renamed. Pay attention to the property mapping from Azure AD to Azure AD DS.
hdinsight Hdinsight Troubleshoot Yarn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-yarn.md
These changes are visible immediately on the YARN Scheduler UI.
- [Connect to HDInsight (Apache Hadoop) by using SSH](./hdinsight-hadoop-linux-use-ssh-unix.md) - [Apache Hadoop YARN concepts and applications](https://hadoop.apache.org/docs/r2.7.4/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html#Concepts_and_Flow) +
+## How do I troubleshoot YARN common issues?
+
+### Yarn UI isn't loading
+
+If your YARN UI isn't loading or is unreachable, and it returns "HTTP Error 502.3 - Bad Gateway," it's a strong indication that your ResourceManager service is unhealthy. To mitigate the issue, follow these steps:
+
+1. Go to **Ambari UI** > **YARN** > **SUMMARY** and check to see if only the active ResourceManager is in the **Started** state. If not, try to mitigate by restarting the unhealthy or stopped ResourceManager.
+2. If step 1 doesn't resolve the issue, SSH to the active ResourceManager head node and check the garbage collection status using `jstat -gcutil <ResourceManager pid> 1000 100`. If you see the **FGCT** value increase significantly in just a few seconds, it indicates that ResourceManager is busy with *Full GC* and is unable to process other requests.
+3. Go to **Ambari UI** > **YARN** > **CONFIGS** > **Advanced** and increase `ResourceManager java heap size`.
+4. Restart required services in Ambari UI.
+
+### Both resource managers are in standby
+
+1. Check the ResourceManager log to see if an error similar to the following exists:
+```
+Service RMActiveServices failed in state STARTED; cause: org.apache.hadoop.service.ServiceStateException: com.google.protobuf.InvalidProtocolBufferException: Could not obtain block: BP-452067264-10.0.0.16-1608006815288:blk_1074235266_494491 file=/yarn/node-labels/nodelabel.mirror
+```
+2. If the error exists, check whether some files are under-replicated or whether there are missing blocks in HDFS. You can run `hdfs fsck hdfs://mycluster/`.
+
+3. Run `hdfs fsck hdfs://mycluster/ -delete` to forcefully clean up the HDFS and to get rid of the standby RM issue. Alternatively, run [PatchYarnNodeLabel](https://hdiconfigactions.blob.core.windows.net/hadoopcorepatchingscripts/PatchYarnNodeLabel.sh) on one of headnodes to patch the cluster.
+ ## Next steps
hdinsight Apache Spark Perf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-perf.md
Last updated 08/21/2020
-# Optimize Apache Spark jobs in HDInsight
+# Optimize Apache Spark applications in HDInsight
-This article provides an overview of strategies to optimize Apache Spark jobs on Azure HDInsight.
+This article provides an overview of strategies to optimize Apache Spark applications on Azure HDInsight.
## Overview
-The performance of your Apache Spark jobs depends on multiple factors. These performance factors include: how your data is stored, how the cluster is configured, and the operations that are used when processing the data.
+You might face the following common scenarios:
-Common challenges you might face include: memory constraints due to improperly sized executors, long-running operations, and tasks that result in cartesian operations.
+- The same Spark job is slower than it was before in the same HDInsight cluster
+- The Spark job is slower in an HDInsight cluster than on-premises or with another third-party service provider
+- The Spark job is slower in one HDInsight cluster than in another
-There are also many optimizations that can help you overcome these challenges, such as caching, and allowing for data skew.
+The performance of your Apache Spark jobs depends on multiple factors. These performance factors include:
+
+- How your data is stored
+- How the cluster is configured
+- The operations that are used when processing the data
+- An unhealthy YARN service
+- Memory constraints due to improperly sized executors and `OutOfMemoryError` exceptions
+- Too many tasks or too few tasks
+- Data skew that causes a few heavy or slow tasks
+- Tasks that run slower on unhealthy nodes
+++
+## Step 1: Check if your YARN service is healthy
+
+1. Go to the Ambari UI:
+- Check whether there are any ResourceManager or NodeManager alerts
+- Check the ResourceManager and NodeManager status in **YARN** > **SUMMARY**: all NodeManagers should be in the **Started** state, and only the active ResourceManager should be in the **Started** state
+
+2. Check whether the YARN UI is accessible through `https://YOURCLUSTERNAME.azurehdinsight.net/yarnui/hn/cluster` (a scripted check is sketched after these steps)
+
+3. Check for any exceptions or errors in the ResourceManager log at `/var/log/hadoop-yarn/yarn/hadoop-yarn-resourcemanager-*.log`
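
If you prefer to script the UI accessibility check, the following is a minimal sketch that probes the YARN UI endpoint over the cluster gateway with basic authentication; the cluster name and credentials are placeholders.

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholders: replace with your cluster name and cluster login (Ambari) credentials.
cluster_name = "YOURCLUSTERNAME"
url = f"https://{cluster_name}.azurehdinsight.net/yarnui/hn/cluster"

response = requests.get(url, auth=HTTPBasicAuth("admin", "<password>"), timeout=30)

# 200 means the YARN UI answered; a 502 usually points at an unhealthy ResourceManager.
print(response.status_code)
```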
+
+For more information, see [YARN common issues](../hdinsight-troubleshoot-yarn.md#how-do-i-troubleshoot-yarn-common-issues).
+
+## Step 2: Compare your new application's resources with the available YARN resources
+
+1. Go to **Ambari UI > YARN > SUMMARY**, check **CLUSTER MEMORY** in ServiceMetrics
+
+2. Check the YARN queue metrics in detail:
+- Go to the YARN UI and check the YARN scheduler metrics through `https://YOURCLUSTERNAME.azurehdinsight.net/yarnui/hn/cluster/scheduler`
+- Alternatively, you can check the YARN scheduler metrics through the YARN REST API. For example, `curl -u "xxxx" -sS -G "https://YOURCLUSTERNAME.azurehdinsight.net/ws/v1/cluster/scheduler"`. For ESP clusters, use a domain admin user.
+
+3. Calculate the total resources for your new application:
+- All executor resources: memory is `spark.executor.instances * (spark.executor.memory + spark.yarn.executor.memoryOverhead)` and cores are `spark.executor.instances * spark.executor.cores`. For more information, see [Spark executor configuration](apache-spark-settings.md#configuring-spark-executors).
+- ApplicationMaster
+ - In cluster mode, use `spark.driver.memory` and `spark.driver.cores`
+ - In client mode, use `spark.yarn.am.memory+spark.yarn.am.memoryOverhead` and `spark.yarn.am.cores`
+
+> [!NOTE]
+> `yarn.scheduler.minimum-allocation-mb <= spark.executor.memory+spark.yarn.executor.memoryOverhead <= yarn.scheduler.maximum-allocation-mb`
+++
+4. Compare your new application's total resources with the YARN resources available in your specified queue. A sketch of this calculation follows.
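
To make the comparison concrete, here's a minimal sketch of the arithmetic described above; the configuration values are illustrative placeholders, not recommendations.

```python
# Illustrative values; read the real ones from your spark-submit options or Spark configuration.
executor_instances = 10             # spark.executor.instances
executor_memory_mb = 4096           # spark.executor.memory
executor_memory_overhead_mb = 384   # spark.yarn.executor.memoryOverhead
executor_cores = 2                  # spark.executor.cores

# Total resources requested by all executors.
total_executor_memory_mb = executor_instances * (executor_memory_mb + executor_memory_overhead_mb)
total_executor_cores = executor_instances * executor_cores

# Add the ApplicationMaster. In cluster mode, that's spark.driver.memory and spark.driver.cores.
driver_memory_mb = 4096
driver_cores = 1

total_memory_mb = total_executor_memory_mb + driver_memory_mb
total_cores = total_executor_cores + driver_cores

# Compare these totals against the memory and v-cores available in your YARN queue.
print(f"Requested: {total_memory_mb} MB and {total_cores} cores")
```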
++
+## Step 3: Track your Spark application
+
+1. [Monitor your running spark application through Spark UI](apache-spark-job-debugging.md#track-an-application-in-the-spark-ui)
+
+2. [Monitor your complete or incomplete spark application through Spark History Server UI](apache-spark-job-debugging.md#find-information-about-completed-jobs-using-the-spark-history-server)
+
+Identify the following symptoms through the Spark UI or the Spark History Server UI:
+
+- Which stage is slow
+- Are the total executor CPU v-cores fully utilized in the event timeline on the **Stage** tab
+- If you're using Spark SQL, what's the physical plan on the **SQL** tab
+- Is the DAG too long in one stage
+- Observe task metrics (input size, shuffle write size, GC time) on the **Stage** tab
+
+For more information, see [Monitoring your Spark applications](https://spark.apache.org/docs/latest/monitoring.html).
+
+## Step 4: Optimize your Spark application
+
+There are many optimizations that can help you overcome these challenges, such as caching and allowing for data skew.
In each of the following articles, you can find information on different aspects of Spark optimization.
In each of the following articles, you can find information on different aspects
* [Optimize memory usage for Apache Spark](optimize-memory-usage.md) * [Optimize HDInsight cluster configuration for Apache Spark](optimize-cluster-configuration.md)
+### Optimize Spark SQL partitions
+
+- `spark.sql.shuffle.partitions` is 200 by default. You can adjust it based on your needs when shuffling data for joins or aggregations.
+- `spark.sql.files.maxPartitionBytes` is 1 GB by default in HDInsight. It sets the maximum number of bytes to pack into a single partition when reading files. This configuration is effective only when using file-based sources such as Parquet, JSON, and ORC.
+- Adaptive query execution (AQE) is available in Spark 3.0. See [Adaptive Query Execution](https://spark.apache.org/docs/latest/sql-performance-tuning.html#adaptive-query-execution). A PySpark sketch of these settings follows.
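
The following is a minimal PySpark sketch of how these settings might be applied when building a session; the values shown are illustrative, not recommendations.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("partition-tuning-sketch")
    # Number of partitions used when shuffling data for joins or aggregations.
    .config("spark.sql.shuffle.partitions", "400")
    # Maximum number of bytes to pack into a single partition when reading files.
    .config("spark.sql.files.maxPartitionBytes", str(256 * 1024 * 1024))
    # Enable adaptive query execution (Spark 3.0 and later).
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)
```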
++ ## Next steps * [Debug Apache Spark jobs running on Azure HDInsight](apache-spark-job-debugging.md)
hdinsight Optimize Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/optimize-cluster-configuration.md
For more information on using Ambari to configure executors, see [Apache Spark s
Monitor query performance for outliers or other performance issues, by looking at the timeline view. Also SQL graph, job statistics, and so forth. For information on debugging Spark jobs using YARN and the Spark History server, see [Debug Apache Spark jobs running on Azure HDInsight](apache-spark-job-debugging.md). For tips on using YARN Timeline Server, see [Access Apache Hadoop YARN application logs](../hdinsight-hadoop-access-yarn-app-logs-linux.md).
+## Tasks slower on some executors or nodes
+ Sometimes one or a few of the executors are slower than the others, and tasks take much longer to execute. This slowness frequently happens on larger clusters (> 30 nodes). In this case, divide the work into a larger number of tasks so the scheduler can compensate for slow tasks. For example, have at least twice as many tasks as the number of executor cores in the application. You can also enable speculative execution of tasks with `conf: spark.speculation = true`. ## Next steps
iot-edge How To Authenticate Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-authenticate-downstream-device.md
For X.509 self-signed authentication, sometimes referred to as thumbprint authen
* C: [iotedge_downstream_device_sample.c](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iotedge_downstream_device_sample) * Node.js: [simple_sample_device_x509.js](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/simple_sample_device_x509.js) * Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples/send-event-x509)
- * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/send_message_x509.py)
+ * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/send_message_x509.py)
You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/iot/hub/device-identity) command to create a new IoT device with X.509 self-signed authentication and assigns a parent device:
This section is based on the IoT Hub X.509 certificate tutorial series. See [Und
* C: [iotedge_downstream_device_sample.c](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iotedge_downstream_device_sample) * Node.js: [simple_sample_device_x509.js](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/simple_sample_device_x509.js) * Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples/send-event-x509)
- * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/send_message_x509.py)
+ * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/send_message_x509.py)
You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/iot/hub/device-identity) command to create a new IoT device with X.509 CA signed authentication and assigns a parent device:
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
This section introduces a sample application to connect an Azure IoT Java device
This section introduces a sample application to connect an Azure IoT Python device client to an IoT Edge gateway.
-1. Get the sample for **send_message_downstream** from the [Azure IoT device SDK for Python samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-edge-scenarios).
+1. Get the sample for **send_message_downstream** from the [Azure IoT device SDK for Python samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-edge-scenarios).
2. Set the `IOTHUB_DEVICE_CONNECTION_STRING` and `IOTEDGE_ROOT_CA_CERT_PATH` environment variables as specified in the Python script comments. 3. Refer to the SDK documentation for any additional instructions on how to run the sample on your device.
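
The following is a rough sketch of how a downstream device sample typically consumes those two environment variables with the `azure-iot-device` package; treat the exact wiring as an assumption and defer to the linked sample.

```python
import os

from azure.iot.device import IoTHubDeviceClient, Message

# Values come from the environment variables described in the steps above.
conn_str = os.environ["IOTHUB_DEVICE_CONNECTION_STRING"]
ca_cert_path = os.environ["IOTEDGE_ROOT_CA_CERT_PATH"]

# Trust the IoT Edge gateway's root CA certificate so the TLS connection to the gateway succeeds.
with open(ca_cert_path) as cert_file:
    root_ca_cert = cert_file.read()

client = IoTHubDeviceClient.create_from_connection_string(
    conn_str, server_verification_cert=root_ca_cert
)
client.connect()
client.send_message(Message("Test message from downstream device"))
client.disconnect()
```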
iot-edge Troubleshoot Iot Edge For Linux On Windows Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-common-errors.md
The following section addresses the common errors when installing the EFLOW MSI
- [Azure IoT Edge for Linux on Windows prerequisites](https://aka.ms/AzEFLOW-Requirements) - [Nested virtualization for Azure IoT Edge for Linux on Windows](./nested-virtualization.md) - [Networking configuration for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md)-- [Azure IoT Edge for Linux on Windows virtual switch creation](/how-to-create-virtual-switch.md)
+- [Azure IoT Edge for Linux on Windows virtual switch creation](./how-to-create-virtual-switch.md)
- [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md) > [!div class="mx-tdCol2BreakAll"]
The following section addresses the common errors related to EFLOW networking an
- [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md) - [Networking configuration for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md)-- [Azure IoT Edge for Linux on Windows virtual switch creation](/how-to-create-virtual-switch.md)
+- [Azure IoT Edge for Linux on Windows virtual switch creation](./how-to-create-virtual-switch.md)
> [!div class="mx-tdCol2BreakAll"] > | Error | Error Description | Solution |
iot-edge Troubleshoot Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-networking.md
For more information about EFLOW VM firewall, see [IoT Edge for Linux on Windows
sudo iptables -L ```
-To add a firewall rule to the EFLOW VM, you can use the [EFLOW Util - Firewall Rules](https://github.com/Azure/iotedge-eflow/tree/eflow-usbip/eflow-util/firewall-rules) sample PowerShell cmdlets. Also, you can achieve the same rules creation by following these steps:
+To add a firewall rule to the EFLOW VM, you can use the [EFLOW Util - Firewall Rules](https://github.com/Azure/iotedge-eflow/tree/main/eflow-util#get-eflowvmfirewallrules) sample PowerShell cmdlets. Also, you can achieve the same rules creation by following these steps:
1. Start an elevated _PowerShell_ session using **Run as Administrator**. 1. Connect to the EFLOW virtual machine
iot-edge Troubleshoot Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows.md
Second, if the GPU is correctly assigned, but still not being able to use it ins
The first step before checking *WSSDAgent* logs is to check if the VM was created and is running. 1. Start an elevated _PowerShell_ session using **Run as Administrator**.
-1. On Windows Client SKUs, check the [HCS](/virtualization/community/team-blog/2017/20170127-introducing-the-host-compute-service-hcs.md) virtual machines.
+1. On Windows Client SKUs, check the [HCS](/virtualization/community/team-blog/2017/20170127-introducing-the-host-compute-service-hcs) virtual machines.
```powershell hcsdiag list ```
The first step before checking *WSSDAgent* logs is to check if the VM was create
VM, SavedAsTemplate, 88D7AA8C-0D1F-4786-B4CB-62EFF1DECD92, CmService ```
-1. On Windows Server SKUs, check the [VMMS](/windows-server/virtualization/hyper-v/hyper-v-technology-overview.md) virtual machines
+1. On Windows Server SKUs, check the [VMMS](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) virtual machines
```powershell hcsdiag list ```
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md
Key Vault allows for creation of multiple issuer objects with different issuer p
Issuer objects are created in the vault and can only be used with KV certificates in the same vault.
+>[!Note]
+>Publicly trusted certificates are sent to Certificate Authorities (CAs) and Certificate Transparency (CT) logs outside of the Azure boundary during enrollment and will be covered by the GDPR policies of those entities.
+ ## Certificate contacts Certificate contacts contain contact information to send notifications triggered by certificate lifetime events. The contacts information is shared by all the certificates in the key vault. A notification is sent to all the specified contacts for an event for any certificate in the key vault. For information on how to set Certificate contact, see [here](overview-renew-certificate.md#steps-to-set-certificate-notifications)
key-vault About Keys Secrets Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/about-keys-secrets-certificates.md
Objects stored in Key Vault are versioned whenever a new instance of an object i
Objects in Key Vault can be addressed by specifying a version or by omitting the version for operations on the current version of the object. For example, given a key with the name `MasterKey`, performing operations without specifying a version causes the system to use the latest available version. Performing operations with the version-specific identifier causes the system to use that specific version of the object.
+> [!NOTE]
+> The values you provide for Azure resource or object IDs may be copied globally for the purpose of running the service. The value provided should not include personally identifiable or sensitive information.
+ ### Vault-name and Object-name Objects are uniquely identified within Key Vault using a URL. No two objects in the system have the same URL, regardless of geo-location. The complete URL to an object is called the Object Identifier. The URL consists of a prefix that identifies the Key Vault, object type, user provided Object Name, and an Object Version. The Object Name is case-insensitive and immutable. Identifiers that don't include the Object Version are referred to as Base Identifiers.
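
For illustration only, here's a minimal sketch with the Azure SDK for Python (the vault name, key name, and version string are placeholders): omitting the version operates on the latest version, while passing a version pins the operation to that specific object version.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# No version specified: the latest available version of the key is used.
latest = client.get_key("MasterKey")
print(latest.id)  # full Object Identifier, ending in the current version

# Version specified: that exact version of the object is used.
pinned = client.get_key("MasterKey", version="<object-version>")
print(pinned.id)
```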
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/best-practices.md
Managed HSM is a cloud service that safeguards encryption keys. As these keys ar
- [Soft Delete](soft-delete-overview.md) is on by default. You can choose a retention period between 7 and 90 days. - Turn on purge protection to prevent immediate permanent deletion of HSM or keys. When purge protection is on HSM or keys will remain in deleted state until the retention days have passed.
-## Generate and import keys from on-premise HSM
+## Generate and import keys from on-premises HSM
> [!NOTE] > Keys created or imported into Managed HSM are not exportable. -- To ensure long term portability and key durability, generate keys in your on-premise HSM and [import them to Managed HSM](hsm-protected-keys-byok.md). You will have a copy of your key securely stored in your on-premises HSM for future use.
+- To ensure long term portability and key durability, generate keys in your on-premises HSM and [import them to Managed HSM](hsm-protected-keys-byok.md). You will have a copy of your key securely stored in your on-premises HSM for future use.
## Next steps
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
# Import HSM-protected keys to Managed HSM (BYOK)
- Azure Key Vault Managed HSM supports importing keys generated in your on-premise hardware security module (HSM); the keys will never leave the HSM protection boundary. This scenario often is referred to as *bring your own key* (BYOK). Managed HSM uses the Marvell LiquidSecurity HSM adapters (FIPS 140-2 Level 3 validated) to protect your keys.
+ Azure Key Vault Managed HSM supports importing keys generated in your on-premises hardware security module (HSM); the keys will never leave the HSM protection boundary. This scenario often is referred to as *bring your own key* (BYOK). Managed HSM uses the Marvell LiquidSecurity HSM adapters (FIPS 140-2 Level 3 validated) to protect your keys.
Use the information in this article to help you plan for, generate, and transfer your own HSM-protected keys to use with Managed HSM.
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/overview.md
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
### Import keys from your on-premises HSMs -- Generate HSM-protected keys in your on-premise HSM and import them securely into Managed HSM.
+- Generate HSM-protected keys in your on-premises HSM and import them securely into Managed HSM.
## Next steps - See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to create and activate a managed HSM
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/16/2022 Last updated : 08/19/2022
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
After you deploy a logic app to the Azure portal from Visual Studio Code, you ca
![Screenshot that shows the Azure portal search box with the "logic apps" search text.](./media/create-single-tenant-workflows-visual-studio-code/portal-find-logic-app-resource.png)
-1. On the **Logic App (Standard)** pane, select the logic app that you deployed from Visual Studio Code.
+1. On the **Logic apps** pane, select the logic app that you deployed from Visual Studio Code.
![Screenshot that shows the Azure portal and the Logic App (Standard) resources deployed in Azure.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-resources-pane.png)
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Automated machine learning, also referred to as automated ML or AutoML, is the p
Traditional machine learning model development is resource-intensive, requiring significant domain knowledge and time to produce and compare dozens of models. With automated machine learning, you'll accelerate the time it takes to get production-ready ML models with great ease and efficiency.
-<a name="parity"></a>
## Ways to use AutoML in Azure Machine Learning
-Azure Machine Learning offers the following two experiences for working with automated ML. See the following sections to understand [feature availability in each experience](#parity).
+Azure Machine Learning offers the following two experiences for working with automated ML. See the following sections to understand feature availability in each experience.
-* For code-experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Use automated machine learning to predict taxi fares](tutorial-auto-train-models.md).
+* For code-experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md)
* For limited/no-code experience customers, Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with these tutorials: * [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
See examples of classification and automated machine learning in these Python no
Similar to classification, regression tasks are also a common supervised learning task. Azure Machine Learning offers [featurizations specifically for these tasks](how-to-configure-auto-features.md#featurization).
-Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning](tutorial-auto-train-models.md).
+Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning](v1/how-to-auto-train-models-v1.md).
See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
Using **Azure Machine Learning**, you can design and run your automated ML train
1. **Identify the ML problem** to be solved: classification, forecasting, regression or computer vision (preview). 1. **Choose whether you want to use the Python SDK or the studio web experience**:
- Learn about the parity between the [Python SDK and studio web experience](#parity).
+ Learn about the parity between the [Python SDK and studio web experience](#ways-to-use-automl-in-azure-machine-learning).
* For limited or no code experience, try the Azure Machine Learning studio web experience at [https://ml.azure.com](https://ml.azure.com/) * For Python developers, check out the [Azure Machine Learning Python SDK](how-to-configure-auto-train.md)
There are multiple resources to get you up and running with AutoML.
### Tutorials/ how-tos Tutorials are end-to-end introductory examples of AutoML scenarios.
-+ **For a code first experience**, follow the [Tutorial: Train a regression model with AutoML and Python](tutorial-auto-train-models.md).
+++ **For a code first experience**, follow the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md) + **For a low or no-code experience**, see the [Tutorial: Train a classification model with no-code AutoML in Azure Machine Learning studio](tutorial-first-experiment-automated-ml.md).
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Just like `uri_file` and `uri_folder`, you can create a data asset with `mltable
- [Install and set up the CLI (v2)](how-to-configure-cli.md#install-and-set-up-the-cli-v2) - [Create datastores](how-to-datastore.md#create-datastores)-- [Create data assets](how-to-create-register-data-assets.md#create-data-assets)
+- [Create data assets](how-to-create-data-assets.md#create-data-assets)
- [Read and write data in a job](how-to-read-write-data-v2.md#read-and-write-data-in-a-job) - [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
The following techniques are additional options to handle imbalanced data **outs
See examples and learn how to build models using automated machine learning:
-+ Follow the [Tutorial: Automatically train a regression model with Azure Machine Learning](tutorial-auto-train-models.md)
++ Follow the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md). + Configure the settings for automatic training experiment: + In Azure Machine Learning studio, [use these steps](how-to-use-automated-ml-for-ml-models.md).
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Define the iterations, hyperparameter settings, featurization, and other setting
* [What is automated machine learning?](concept-automated-ml.md) * [Tutorial: Create your first classification model with automated machine learning](tutorial-first-experiment-automated-ml.md)
-* [Tutorial: Use automated machine learning to predict taxi fares](tutorial-auto-train-models.md)
* [Examples: Jupyter Notebook examples for automated machine learning](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning) * [How to: Configure automated ML experiments in Python](how-to-configure-auto-train.md) * [How to: Autotrain a time-series forecast model](how-to-auto-train-forecast.md)
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The hosts in the following tables are owned by Microsoft, and provide services r
| Compute cluster/instance | graph.chinacloudapi.cn | TCP | 443 | | Compute instance | \*.instances.azureml.cn | TCP | 443 | | Compute instance | \*.instances.azureml.ms | TCP | 443, 8787, 18881 |
-| Microsoft storage access | \*blob.core.chinacloudapi.cn | TCP | 443 |
+| Microsoft storage access | \*.blob.core.chinacloudapi.cn | TCP | 443 |
| Microsoft storage access | \*.table.core.chinacloudapi.cn | TCP | 443 | | Microsoft storage access | \*.queue.core.chinacloudapi.cn | TCP | 443 | | Your storage account | \<storage\>.file.core.chinacloudapi.cn | TCP | 443, 445 |
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
For this article you need,
* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
-* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
[!INCLUDE [automl-sdk-version](../../includes/machine-learning-automl-sdk-version.md)]
See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples
## Next steps
-* Learn more about [how and where to deploy a model](./v1/how-to-deploy-and-where.md).
+* Learn more about [How to deploy an AutoML model to an online endpoint](how-to-deploy-automl-endpoint.md).
* Learn about [Interpretability: model explanations in automated machine learning (preview)](how-to-machine-learning-interpretability-automl.md).
-* Follow the [Tutorial: Train regression models](tutorial-auto-train-models.md) for an end to end example for creating experiments with automated machine learning.
+
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-features.md
automl_settings = {
* Learn more about [how and where to deploy a model](./v1/how-to-deploy-and-where.md).
-* Learn more about [how to train a regression model by using automated machine learning](tutorial-auto-train-models.md) or [how to train by using automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote).
+* Learn more about [how to train a regression model by using automated machine learning](./v1/how-to-auto-train-models-v1.md) or [how to train by using automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote).
machine-learning How To Configure Cross Validation Data Splits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cross-validation-data-splits.md
For this article you need,
* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
-* Familiarity with setting up an automated machine learning experiment with the Azure Machine Learning SDK. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the fundamental automated machine learning experiment design patterns.
+* Familiarity with setting up an automated machine learning experiment with the Azure Machine Learning SDK. Follow the [tutorial](tutorial-auto-train-image-models.md) or [how-to](how-to-configure-auto-train.md) to see the fundamental automated machine learning experiment design patterns.
* An understanding of train/validation data splits and cross-validation as machine learning concepts. For a high-level explanation,
Passing the `test_data` or `test_size` parameters into the `AutoMLConfig`, autom
## Next steps * [Prevent imbalanced data and overfitting](concept-manage-ml-pitfalls.md).
-* [Tutorial: Use automated machine learning to predict taxi fares - Split data section](tutorial-auto-train-models.md#split-the-data-into-train-and-test-sets).
+ * How to [Auto-train a time-series forecast model](how-to-auto-train-forecast.md).
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
+
+ Title: Create Data Assets
+
+description: Learn how to create Azure Machine Learning data assets.
++++++++ Last updated : 05/24/2022+
+# Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
+++
+# Create data assets
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
+> * [v1](./v1/how-to-create-register-datasets.md)
+> * [v2 (current version)](how-to-create-data-assets.md)
++
+In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create data assets from datastores, Azure Storage, public URLs, and local files.
+
+The benefits of creating data assets are:
+
+* You can **share and reuse data** with other members of the team such that they do not need to remember file locations.
+
+* You can **seamlessly access data** during model training (on any supported compute type) without worrying about connection strings or data paths.
+
+* You can **version** the data.
++
+## Prerequisites
+
+To create and work with data assets, you need:
+
+* An Azure subscription. If you don't have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
+
+* The [Azure Machine Learning CLI/SDK installed](how-to-configure-cli.md) and MLTable package installed (`pip install mltable`).
+
+## Supported paths
+
+When you create a data asset in Azure Machine Learning, you'll need to specify a `path` parameter that points to its location. Below is a table that shows the different data locations supported in Azure Machine Learning and examples for the `path` parameter:
++
+|Location | Examples |
+|||
+|A path on your local computer | `./home/username/data/my_data` |
+|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
+|A path on Azure Storage | `https://<account_name>.blob.core.windows.net/<container_name>/path` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
+|A path on a datastore | `azureml://datastores/<data_store_name>/paths/<path>` |
++
+> [!NOTE]
+> When you create a data asset from a local path, it will be automatically uploaded to the default Azure Machine Learning datastore in the cloud.
+
+## Create a `uri_folder` data asset
+
+The following shows you how to create a *folder* as a data asset:
+
+# [CLI](#tab/CLI)
+
+Create a `YAML` file (`<file-name>.yml`):
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+
+# Supported paths include:
+# local: ./<path>
+# blob: https://<account_name>.blob.core.windows.net/<container_name>/<path>
+# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>
+type: uri_folder
+name: <name_of_data>
+description: <description goes here>
+path: <path>
+```
+
+Next, create the data asset using the CLI:
+
+```azurecli
+az ml data create -f <file-name>.yml
+```
+
+# [Python-SDK](#tab/Python-SDK)
+
+You can create a data asset in Azure Machine Learning using the following Python Code:
+
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+# Supported paths include:
+# local: './<path>'
+# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
+# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
+# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
+
+my_path = '<path>'
+
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.URI_FOLDER,
+ description="<description>",
+ name="<name>",
+ version='<version>'
+)
+
+ml_client.data.create_or_update(my_data)
+```
+++
+## Create a `uri_file` data asset
+
+The following shows you how to create a *specific file* as a data asset:
+
+# [CLI](#tab/CLI)
+
+A sample `YAML` file (`<file-name>.yml`) for data in a local path is shown below:
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+
+# Supported paths include:
+# local: ./<path>/<file>
+# blob: https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>
+# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>/<file>
+
+type: uri_file
+name: <name>
+description: <description>
+path: <uri>
+```
+
+```cli
+> az ml data create -f <file-name>.yml
+```
+
+# [Python-SDK](#tab/Python-SDK)
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+# Supported paths include:
+# local: './<path>/<file>'
+# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>'
+# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>'
+# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>/<file>'
+my_path = '<path>'
+
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.URI_FILE,
+ description="<description>",
+ name="<name>",
+ version="<version>"
+)
+
+ml_client.data.create_or_update(my_data)
+```
++
+
+## Create a `mltable` data asset
+
+`mltable` is a way to abstract the schema definition for tabular data to make it easier to share data assets (an overview can be found in [MLTable](concept-data.md#mltable)).
+
+In this section, we show you how to create a data asset when the type is an `mltable`.
+
+### The MLTable file
+
+The MLTable file provides the specification of the data's schema so that the `mltable` *engine* can materialize the data into an in-memory object (Pandas/Dask/Spark). An *example* MLTable file is provided below:
+
+```yml
+type: mltable
+
+paths:
+ - pattern: ./*.txt
+transformations:
+ - read_delimited:
+ delimiter: ,
+ encoding: ascii
+ header: all_files_same_headers
+```
+> [!IMPORTANT]
+> We recommend co-locating the MLTable file with the underlying data in storage. For example:
+>
+> ```Text
+> ├── my_data
+> │ ├── MLTable
+> │ ├── file_1.txt
+> .
+> .
+> .
+> │ ├── file_n.txt
+> ```
+> Co-locating the MLTable with the data ensures a **self-contained *artifact*** where all that is needed is stored in that one folder (`my_data`), regardless of whether that folder is stored on your local drive, in your cloud store, or on a public HTTP server. You should **not** specify *absolute paths* in the MLTable file.
+
+In your Python code, you materialize the MLTable artifact into a Pandas dataframe using:
+
+```python
+import mltable
+
+tbl = mltable.load(uri="./my_data")
+df = tbl.to_pandas_dataframe()
+```
+
+The `uri` parameter in `mltable.load()` should be a valid path to a local or cloud **folder** which contains a valid MLTable file.
+
+> [!NOTE]
+> You will need the `mltable` library installed in your Environment (`pip install mltable`).
+
+The following shows you how to create an `mltable` data asset. The `path` can be any of the supported path formats outlined above.
++
+# [CLI](#tab/CLI)
+
+Create a `YAML` file (`<file-name>.yml`):
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+
+# path must point to **folder** containing MLTable artifact (MLTable file + data
+# Supported paths include:
+# local: ./<path>
+# blob: https://<account_name>.blob.core.windows.net/<container_name>/<path>
+# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>
+
+type: mltable
+name: <name_of_data>
+description: <description goes here>
+path: <path>
+```
+
+> [!NOTE]
+> The path points to the **folder** containing the MLTable artifact.
+
+Next, create the data asset using the CLI:
+
+```azurecli
+az ml data create -f <file-name>.yml
+```
+
+# [Python-SDK](#tab/Python-SDK)
+
+You can create a data asset in Azure Machine Learning using the following Python Code:
+
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+# my_path must point to folder containing MLTable artifact (MLTable file + data
+# Supported paths include:
+# local: './<path>'
+# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
+# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
+# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
+
+my_path = '<path>'
+
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.MLTABLE,
+ description="<description>",
+ name="<name>",
+ version='<version>'
+)
+
+ml_client.data.create_or_update(my_data)
+```
+
+> [!NOTE]
+> The path points to the **folder** containing the MLTable artifact.
+++
+## Next steps
+
+- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
ml_client.create_or_update(store)
## Next steps - [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)-- [Create data assets](how-to-create-register-data-assets.md#create-data-assets)
+- [Create data assets](how-to-create-data-assets.md#create-data-assets)
- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
The following diagram illustrates that you can generate the code for automated M
* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
-* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-image-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
* Automated ML code generation is only available for experiments run on remote Azure ML compute targets. Code generation isn't supported for local runs.
However, in order to load that model in a notebook in your custom local Conda en
## Next steps
-* Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
+* Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
* See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
To maximize your uptime, plan ahead to maintain business continuity and prepare
Microsoft strives to ensure that Azure services are always available. However, unplanned service outages may occur. We recommend having a disaster recovery plan in place for handling regional service outages. In this article, you'll learn how to: * Plan for a multi-regional deployment of Azure Machine Learning and associated resources.
+* Maximize your chances of recovering logs, notebooks, Docker images, and other metadata.
* Design for high availability of your solution. * Initiate a failover to another region. > [!NOTE]
-> Azure Machine Learning itself does not provide automatic failover or disaster recovery.
+> Azure Machine Learning itself does not provide automatic failover or disaster recovery. Backup and restore of workspace metadata such as run history is unavailable.
In case you have accidentally deleted your workspace or corresponding components, this article also provides you with currently supported recovery options. ## Understand Azure services for Azure Machine Learning
-Azure Machine Learning depends on multiple Azure services and has several layers. Some of these services are provisioned in your (customer) subscription. You're responsible for the high-availability configuration of these services. Other services are created in a Microsoft subscription and managed by Microsoft.
+Azure Machine Learning depends on multiple Azure services. Some of these services are provisioned in your subscription. You're responsible for the high-availability configuration of these services. Other services are created in a Microsoft subscription and are managed by Microsoft.
Azure services include: * **Azure Machine Learning infrastructure**: A Microsoft-managed environment for the Azure Machine Learning workspace.
-* **Associated resources**: Resources provisioned in your subscription during Azure Machine Learning workspace creation. These resources include Azure Storage, Azure Key Vault, Azure Container Registry, and Application Insights. You're responsible for configuring high-availability settings for these resources.
+* **Associated resources**: Resources provisioned in your subscription during Azure Machine Learning workspace creation. These resources include Azure Storage, Azure Key Vault, Azure Container Registry, and Application Insights.
* Default storage has data such as model, training log data, and dataset. * Key Vault has credentials for Azure Storage, Container Registry, and data stores. * Container Registry has a Docker image for training and inferencing environments.
By keeping your data storage isolated from the default storage the workspace use
* Attach the same storage instances as datastores to the primary and secondary workspaces. * Make use of geo-replication for data storage accounts and maximize your uptime.
-### Manage machine learning artifacts as code
+### Manage machine learning assets as code
+
+> [!NOTE]
+> Backup and restore of workspace metadata such as run history, models, and environments is unavailable. Specifying assets and configurations as code by using YAML specs will help you re-create assets across workspaces in case of a disaster.
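
A minimal sketch of this approach with the v2 Python SDK is shown below; the subscription, resource group, workspace names, and storage path are placeholders. The same asset definition, kept as code, is registered in both the primary and the secondary workspace.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

credential = DefaultAzureCredential()

# One client per workspace; names and IDs are placeholders.
primary = MLClient(credential, "<subscription-id>", "<resource-group>", "<primary-workspace>")
secondary = MLClient(credential, "<subscription-id>", "<resource-group>", "<secondary-workspace>")

# A single data asset definition, kept in source control.
my_data = Data(
    path="https://<account_name>.blob.core.windows.net/<container_name>/<path>",
    type=AssetTypes.URI_FOLDER,
    name="training-data",
    version="1",
)

# Register the same definition in both workspaces.
for ml_client in (primary, secondary):
    ml_client.data.create_or_update(my_data)
```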
Jobs in Azure Machine Learning are defined by a job specification. This specification includes dependencies on input artifacts that are managed on a workspace-instance level, including environments, datasets, and compute. For multi-region job submission and deployments, we recommend the following practices:
If you accidentally deleted your workspace it is currently not possible to recov
## Next steps
-To deploy Azure Machine Learning with associated resources with your high-availability settings, use an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/).
+To learn about repeatable infrastructure deployments with Azure Machine Learning, use an [Azure Resource Manager template](https://docs.microsoft.com/azure/machine-learning/tutorial-create-secure-workspace-template).
machine-learning How To Machine Learning Interpretability Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability-automl.md
In this article, you learn how to:
## Prerequisites - Interpretability features. Run `pip install azureml-interpret` to get the necessary package.-- Knowledge of building automated ML experiments. For more information on how to use the Azure Machine Learning SDK, complete this [regression model tutorial](tutorial-auto-train-models.md) or see how to [configure automated ML experiments](how-to-configure-auto-train.md).
+- Knowledge of building automated ML experiments. For more information on how to use the Azure Machine Learning SDK, complete this [object detection model tutorial](tutorial-auto-train-image-models.md) or see how to [configure automated ML experiments](how-to-configure-auto-train.md).
## Interpretability during training for the best model
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
# How to migrate from v1 to v2
-Azure Machine Learning's v2 REST APIs, Azure CLI extension, and Python SDK (preview) introduce consistency and a set of new features to accelerate the production machine learning lifecycle. In this article, we'll overview migrating from v1 to v2 with recommendations to help you decide on v1, v2, or both.
+Azure Machine Learning's v2 REST APIs, Azure CLI extension, and Python SDK (preview) introduce consistency and a set of new features to accelerate the production machine learning lifecycle. This article provides an overview of migrating from v1 to v2 with recommendations to help you decide on v1, v2, or both.
## Prerequisites
In v2 interfaces via REST API, CLI, and Python SDK (preview) are available. The
|API|Notes| |-|-|
-|REST|Fewest dependencies and overhead. Use for building applications on Azure ML as a platform, directly in programming languages without a SDK provided, or per personal preference.|
+|REST|Fewest dependencies and overhead. Use for building applications on Azure ML as a platform, directly in programming languages without an SDK provided, or per personal preference.|
|CLI|Recommended for automation with CI/CD or per personal preference. Allows quick iteration with YAML files and straightforward separation between Azure ML and ML model code.| |Python SDK|Recommended for complicated scripting (for example, programmatically generating large pipeline jobs) or per personal preference. Allows quick iteration with YAML files or development solely in Python.|
You can continue using your existing v1 model deployments. For new model deploym
|-|-|-| |Local|ACI|Quick test of model deployment locally; not for production.| |Managed online endpoint|ACI, AKS|Enterprise-grade managed model deployment infrastructure with near real-time responses and massive scaling for production.|
-|Managed batch endpoint|ParallelRunStep in a pipeline for batch scoring|Enterprise-grade managed model deployment infrastructure with massively-parallel batch processing for production.|
+|Managed batch endpoint|ParallelRunStep in a pipeline for batch scoring|Enterprise-grade managed model deployment infrastructure with massively parallel batch processing for production.|
|Azure Kubernetes Service (AKS)|ACI, AKS|Manage your own AKS cluster(s) for model deployment, giving flexibility and granular control at the cost of IT overhead.|
-|Azure Arc Kubernetes|N/A|Manage your own Kubernetes cluster(s) in other clouds or on-prem, giving flexibility and granular control at the cost of IT overhead.|
+|Azure Arc Kubernetes|N/A|Manage your own Kubernetes cluster(s) in other clouds or on-premises, giving flexibility and granular control at the cost of IT overhead.|
### Jobs (experiments, runs, pipelines in v1)
Data assets in v2 (or File Datasets in v1) are *references* to files in object s
For details on data in v2, see the [data concept article](concept-data.md).
-We recommend migrating the code for [creating data assets](how-to-create-register-data-assets.md) to v2.
+We recommend migrating the code for [creating data assets](how-to-create-data-assets.md) to v2.
### Model
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
If you have over 100 automated ML experiments, this may cause new automated ML e
## Next steps
-+ Learn more about [how to train a regression model with Automated machine learning](tutorial-auto-train-models.md) or [how to train using Automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote).
++ Learn more about [how to train a regression model with Automated machine learning](./v1/how-to-auto-train-models-v1.md) or [how to train using Automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote). + Learn more about [how and where to deploy a model](./v1/how-to-deploy-and-where.md).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
1. Select **+ New automated ML job** and populate the form.
-1. Select a data asset from your storage container, or create a new data asset. Data asset can be created from local files, web urls, datastores, or Azure open datasets. Learn more about [data asset creation](how-to-create-register-data-assets.md).
+1. Select a data asset from your storage container, or create a new data asset. Data assets can be created from local files, web URLs, datastores, or Azure Open Datasets. Learn more about [data asset creation](how-to-create-data-assets.md).
>[!Important] > Requirements for training data:
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md
Try these tutorials:
- [Train and deploy an image classification model with MNIST](tutorial-train-deploy-notebook.md) -- [Prepare data and use automated machine learning to train a regression model with the NYC taxi data set](tutorial-auto-train-models.md)
+- [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md)
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
You won't write any code in this tutorial, you'll use the studio interface to pe
Also try automated machine learning for these other model types: * For a no-code example of a classification model, see [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
-* For a code first example of a regression model, see the [Tutorial: Use automated machine learning to predict taxi fares](tutorial-auto-train-models.md).
+* For a code first example of an object detection model, see the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
## Prerequisites
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
You won't write any code in this tutorial, you'll use the studio interface to pe
Also try automated machine learning for these other model types: * For a no-code example of forecasting, see [Tutorial: Demand forecasting & AutoML](tutorial-automated-ml-forecast.md).
-* For a code first example of a regression model, see the [Tutorial: Regression model with AutoML](tutorial-auto-train-models.md).
+* For a code first example of an object detection model, see the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
## Prerequisites
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-notebook.md
Use these steps to delete your Azure Machine Learning workspace and all compute
+ Learn how to [authenticate to the deployed model](how-to-authenticate-online-endpoint.md). + [Make predictions on large quantities of data](./tutorial-pipeline-batch-scoring-classification.md) asynchronously. + Monitor your Azure Machine Learning models with [Application Insights](./v1/how-to-enable-app-insights.md).
-+ Try out the [automatic algorithm selection](tutorial-auto-train-models.md) tutorial.
+
machine-learning Concept Automated Ml V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml-v1.md
Traditional machine learning model development is resource-intensive, requiring
Azure Machine Learning offers the following two experiences for working with automated ML. See the following sections to understand [feature availability in each experience (v1)](#parity).
-* For code-experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Use automated machine learning to predict taxi fares (v1)](../tutorial-auto-train-models.md).
+* For code-experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Use automated machine learning to predict taxi fares (v1)](how-to-auto-train-models-v1.md).
* For limited/no-code experience customers, Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with these tutorials: * [Tutorial: Create a classification model with automated ML in Azure Machine Learning](../tutorial-first-experiment-automated-ml.md).
See examples of classification and automated machine learning in these Python no
Similar to classification, regression tasks are also a common supervised learning task.
-Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning (v1)](../tutorial-auto-train-models.md).
+Different from classification, where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning (v1)](how-to-auto-train-models-v1.md).
See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
There are multiple resources to get you up and running with AutoML.
### Tutorials/ how-tos Tutorials are end-to-end introductory examples of AutoML scenarios.
-+ **For a code first experience**, follow the [Tutorial: Train a regression model with AutoML and Python (v1)](../tutorial-auto-train-models.md).
++ **For a code first experience**, follow the [Tutorial: Train a regression model with AutoML and Python (v1)](how-to-auto-train-models-v1.md). + **For a low or no-code experience**, see the [Tutorial: Train a classification model with no-code AutoML in Azure Machine Learning studio](../tutorial-first-experiment-automated-ml.md).
machine-learning How To Auto Train Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md
+
+ Title: 'AutoML-train regression model (SDK v1)'
+
+description: Train a regression model to predict NYC taxi fares with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML SDK (v1).
+++++++ Last updated : 10/21/2021+++
+# Train a regression model with AutoML and Python (SDK v1)
++
+In this article, you learn how to train a regression model with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML. This regression model predicts NYC taxi fares.
+
+This process accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
+
+![Flow diagram](./media/how-to-auto-train-models/flow2.png)
+
+You'll write code using the Python SDK in this article. You'll learn the following tasks:
+
+> [!div class="checklist"]
+> * Download, transform, and clean data using Azure Open Datasets
+> * Train an automated machine learning regression model
+> * Calculate model accuracy
+
+For no-code AutoML, try the following tutorials:
+
+* [Tutorial: Train no-code classification models](../tutorial-first-experiment-automated-ml.md)
+
+* [Tutorial: Forecast demand with automated machine learning](../tutorial-automated-ml-forecast.md)
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version](https://azure.microsoft.com/free/) of Azure Machine Learning today.
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace or a compute instance.
+* After you complete the quickstart:
+ 1. Select **Notebooks** in the studio.
+ 1. Select the **Samples** tab.
+ 1. Open the *tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb* notebook.
+ 1. To run each cell in the tutorial, select **Clone this notebook**
+
+This article is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](../how-to-configure-environment.md#local).
+To get the required packages,
+* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
+* Run `pip install azureml-opendatasets azureml-widgets` to get the required packages.
+
+## Download and prepare data
+
+Import the necessary packages. The Open Datasets package contains a class representing each data source (`NycTlcGreen` for example) to easily filter date parameters before downloading.
+
+```python
+from azureml.opendatasets import NycTlcGreen
+import pandas as pd
+from datetime import datetime
+from dateutil.relativedelta import relativedelta
+```
+
+Begin by creating a dataframe to hold the taxi data. When working in a non-Spark environment, Open Datasets only allows downloading one month of data at a time with certain classes to avoid `MemoryError` with large datasets.
+
+To download taxi data, iteratively fetch one month at a time, and before appending it to `green_taxi_df` randomly sample 2,000 records from each month to avoid bloating the dataframe. Then preview the data.
++
+```python
+green_taxi_df = pd.DataFrame([])
+start = datetime.strptime("1/1/2015","%m/%d/%Y")
+end = datetime.strptime("1/31/2015","%m/%d/%Y")
+
+for sample_month in range(12):
+ temp_df_green = NycTlcGreen(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
+ .to_pandas_dataframe()
+ green_taxi_df = green_taxi_df.append(temp_df_green.sample(2000))
+
+green_taxi_df.head(10)
+```
+
+| |vendorID|lpepPickupDatetime|lpepDropoffDatetime|passengerCount|tripDistance|puLocationId|doLocationId|pickupLongitude|pickupLatitude|dropoffLongitude|...|paymentType|fareAmount|extra|mtaTax|improvementSurcharge|tipAmount|tollsAmount|ehailFee|totalAmount|tripType|
+|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
+|131969|2|2015-01-11 05:34:44|2015-01-11 05:45:03|3|4.84|None|None|-73.88|40.84|-73.94|...|2|15.00|0.50|0.50|0.3|0.00|0.00|nan|16.30|
+|1129817|2|2015-01-20 16:26:29|2015-01-20 16:30:26|1|0.69|None|None|-73.96|40.81|-73.96|...|2|4.50|1.00|0.50|0.3|0.00|0.00|nan|6.30|
+|1278620|2|2015-01-01 05:58:10|2015-01-01 06:00:55|1|0.45|None|None|-73.92|40.76|-73.91|...|2|4.00|0.00|0.50|0.3|0.00|0.00|nan|4.80|
+|348430|2|2015-01-17 02:20:50|2015-01-17 02:41:38|1|0.00|None|None|-73.81|40.70|-73.82|...|2|12.50|0.50|0.50|0.3|0.00|0.00|nan|13.80|
+|1269627|1|2015-01-01 05:04:10|2015-01-01 05:06:23|1|0.50|None|None|-73.92|40.76|-73.92|...|2|4.00|0.50|0.50|0|0.00|0.00|nan|5.00|
+|811755|1|2015-01-04 19:57:51|2015-01-04 20:05:45|2|1.10|None|None|-73.96|40.72|-73.95|...|2|6.50|0.50|0.50|0.3|0.00|0.00|nan|7.80|
+|737281|1|2015-01-03 12:27:31|2015-01-03 12:33:52|1|0.90|None|None|-73.88|40.76|-73.87|...|2|6.00|0.00|0.50|0.3|0.00|0.00|nan|6.80|
+|113951|1|2015-01-09 23:25:51|2015-01-09 23:39:52|1|3.30|None|None|-73.96|40.72|-73.91|...|2|12.50|0.50|0.50|0.3|0.00|0.00|nan|13.80|
+|150436|2|2015-01-11 17:15:14|2015-01-11 17:22:57|1|1.19|None|None|-73.94|40.71|-73.95|...|1|7.00|0.00|0.50|0.3|1.75|0.00|nan|9.55|
+|432136|2|2015-01-22 23:16:33|2015-01-22 23:20:13|1|0.65|None|None|-73.94|40.71|-73.94|...|2|5.00|0.50|0.50|0.3|0.00|0.00|nan|6.30|
+
+Remove some of the columns that you won't need for training or additional feature building. Automated machine learning will automatically handle time-based features such as **lpepPickupDatetime**.
+
+```python
+columns_to_remove = ["lpepDropoffDatetime", "puLocationId", "doLocationId", "extra", "mtaTax",
+ "improvementSurcharge", "tollsAmount", "ehailFee", "tripType", "rateCodeID",
+ "storeAndFwdFlag", "paymentType", "fareAmount", "tipAmount"
+ ]
+for col in columns_to_remove:
+ green_taxi_df.pop(col)
+
+green_taxi_df.head(5)
+```
+
+### Cleanse data
+
+Run the `describe()` function on the new dataframe to see summary statistics for each field.
+
+```python
+green_taxi_df.describe()
+```
+
+| |vendorID|passengerCount|tripDistance|pickupLongitude|pickupLatitude|dropoffLongitude|dropoffLatitude|totalAmount|month_num|day_of_month|day_of_week|hour_of_day|
+|-|-|-|-|-|-|-|-|-|-|-|-|-|
+|count|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|
+|mean|1.78|1.37|2.87|-73.83|40.69|-73.84|40.70|14.75|6.50|15.13|
+|std|0.41|1.04|2.93|2.76|1.52|2.61|1.44|12.08|3.45|8.45|
+|min|1.00|0.00|0.00|-74.66|0.00|-74.66|0.00|-300.00|1.00|1.00|
+|25%|2.00|1.00|1.06|-73.96|40.70|-73.97|40.70|7.80|3.75|8.00|
+|50%|2.00|1.00|1.90|-73.94|40.75|-73.94|40.75|11.30|6.50|15.00|
+|75%|2.00|1.00|3.60|-73.92|40.80|-73.91|40.79|17.80|9.25|22.00|
+|max|2.00|9.00|97.57|0.00|41.93|0.00|41.94|450.00|12.00|30.00|
++
+From the summary statistics, you see that there are several fields that have outliers or values that will reduce model accuracy. First filter the lat/long fields to be within the bounds of the Manhattan area. This will filter out longer taxi trips or trips that are outliers with respect to their relationship with other features.
+
+Additionally filter the `tripDistance` field to be greater than zero but less than 31 miles (the haversine distance between the two lat/long pairs). This eliminates long outlier trips that have inconsistent trip cost.
+
+Lastly, the `totalAmount` field has negative values for the taxi fares, which don't make sense in the context of our model, and the `passengerCount` field has bad data with the minimum values being zero.
+
+Filter out these anomalies using query functions, and then remove the last few columns unnecessary for training.
++
+```python
+final_df = green_taxi_df.query("pickupLatitude>=40.53 and pickupLatitude<=40.88")
+final_df = final_df.query("pickupLongitude>=-74.09 and pickupLongitude<=-73.72")
+final_df = final_df.query("tripDistance>=0.25 and tripDistance<31")
+final_df = final_df.query("passengerCount>0 and totalAmount>0")
+
+columns_to_remove_for_training = ["pickupLongitude", "pickupLatitude", "dropoffLongitude", "dropoffLatitude"]
+for col in columns_to_remove_for_training:
+ final_df.pop(col)
+```
+
+Call `describe()` again on the data to ensure cleansing worked as expected. You now have a prepared and cleansed set of taxi data to use for machine learning model training.
+
+```python
+final_df.describe()
+```
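+
+If you want a programmatic confirmation that the filters took effect, a minimal sketch such as the following (assuming the `final_df` produced above) can follow the `describe()` call:
+
+```python
+# Sanity checks (a sketch): every remaining row should satisfy the filters applied earlier
+assert final_df["tripDistance"].between(0.25, 31).all()
+assert (final_df["passengerCount"] > 0).all()
+assert (final_df["totalAmount"] > 0).all()
+```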
+
+## Configure workspace
+
+Create a workspace object from the existing workspace. A [Workspace](/python/api/azureml-core/azureml.core.workspace.workspace) is a class that accepts your Azure subscription and resource information. It also creates a cloud resource to monitor and track your model runs. `Workspace.from_config()` reads the file **config.json** and loads the authentication details into an object named `ws`. `ws` is used throughout the rest of the code in this article.
+
+```python
+from azureml.core.workspace import Workspace
+ws = Workspace.from_config()
+```
+
+## Split the data into train and test sets
+
+Split the data into training and test sets by using the `train_test_split` function in the `scikit-learn` library. This function segregates the data into the x (**features**) data set for model training and the y (**values to predict**) data set for testing.
+
+The `test_size` parameter determines the percentage of data to allocate to testing. The `random_state` parameter sets a seed to the random generator, so that your train-test splits are deterministic.
+
+```python
+from sklearn.model_selection import train_test_split
+
+x_train, x_test = train_test_split(final_df, test_size=0.2, random_state=223)
+```
+
+The purpose of this step is to have data points to test the finished model that haven't been used to train the model, in order to measure true accuracy.
+
+In other words, a well-trained model should be able to accurately make predictions from data it hasn't already seen. You now have data prepared for auto-training a machine learning model.
+
+## Automatically train a model
+
+To automatically train a model, take the following steps:
+1. Define settings for the experiment run. Attach your training data to the configuration, and modify settings that control the training process.
+1. Submit the experiment for model tuning. After submitting the experiment, the process iterates through different machine learning algorithms and hyperparameter settings, adhering to your defined constraints. It chooses the best-fit model by optimizing an accuracy metric.
+
+### Define training settings
+
+Define the experiment parameter and model settings for training. View the full list of [settings](how-to-configure-auto-train-v1.md). Submitting the experiment with these default settings will take approximately 5-20 min, but if you want a shorter run time, reduce the `experiment_timeout_hours` parameter.
+
+|Property| Value in this article |Description|
+|-|-||
+|**iteration_timeout_minutes**|10|Time limit in minutes for each iteration. Increase this value for larger datasets that need more time for each iteration.|
+|**experiment_timeout_hours**|0.3|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
+|**enable_early_stopping**|True|Flag to enable early termination if the score is not improving in the short term.|
+|**primary_metric**| spearman_correlation | Metric that you want to optimize. The best-fit model will be chosen based on this metric.|
+|**featurization**| auto | By using **auto**, the experiment can preprocess the input data (handling missing data, converting text to numeric, etc.)|
+|**verbosity**| logging.INFO | Controls the level of logging.|
+|**n_cross_validations**|5|Number of cross-validation splits to perform when validation data is not specified.|
+
+```python
+import logging
+
+automl_settings = {
+ "iteration_timeout_minutes": 10,
+ "experiment_timeout_hours": 0.3,
+ "enable_early_stopping": True,
+ "primary_metric": 'spearman_correlation',
+ "featurization": 'auto',
+ "verbosity": logging.INFO,
+ "n_cross_validations": 5
+}
+```
+
+Use your defined training settings as a `**kwargs` parameter to an `AutoMLConfig` object. Additionally, specify your training data and the type of model, which is `regression` in this case.
+
+```python
+from azureml.train.automl import AutoMLConfig
+
+automl_config = AutoMLConfig(task='regression',
+ debug_log='automated_ml_errors.log',
+ training_data=x_train,
+ label_column_name="totalAmount",
+ **automl_settings)
+```
+
+> [!NOTE]
+> Automated machine learning pre-processing steps (feature normalization, handling missing data,
+> converting text to numeric, etc.) become part of the underlying model. When using the model for
+> predictions, the same pre-processing steps applied during training are applied to
+> your input data automatically.
+
+### Train the automatic regression model
+
+Create an experiment object in your workspace. An experiment acts as a container for your individual jobs. Pass the defined `automl_config` object to the experiment, and set the output to `True` to view progress during the job.
+
+After starting the experiment, the output shown updates live as the experiment runs. For each iteration, you see the model type, the run duration, and the training accuracy. The field `BEST` tracks the best running training score based on your metric type.
+
+```python
+from azureml.core.experiment import Experiment
+experiment = Experiment(ws, "Tutorial-NYCTaxi")
+local_run = experiment.submit(automl_config, show_output=True)
+```
+
+```output
+Running on local machine
+Parent Run ID: AutoML_1766cdf7-56cf-4b28-a340-c4aeee15b12b
+Current status: DatasetFeaturization. Beginning to featurize the dataset.
+Current status: DatasetEvaluation. Gathering dataset statistics.
+Current status: FeaturesGeneration. Generating features for the dataset.
+Current status: DatasetFeaturizationCompleted. Completed featurizing the dataset.
+Current status: DatasetCrossValidationSplit. Generating individually featurized CV splits.
+Current status: ModelSelection. Beginning model selection.
+
+****************************************************************************************************
+ITERATION: The iteration being evaluated.
+PIPELINE: A summary description of the pipeline being evaluated.
+DURATION: Time taken for the current iteration.
+METRIC: The result of computing score on the fitted pipeline.
+BEST: The best observed score thus far.
+****************************************************************************************************
+
+ ITERATION PIPELINE DURATION METRIC BEST
+ 0 StandardScalerWrapper RandomForest 0:00:16 0.8746 0.8746
+ 1 MinMaxScaler RandomForest 0:00:15 0.9468 0.9468
+ 2 StandardScalerWrapper ExtremeRandomTrees 0:00:09 0.9303 0.9468
+ 3 StandardScalerWrapper LightGBM 0:00:10 0.9424 0.9468
+ 4 RobustScaler DecisionTree 0:00:09 0.9449 0.9468
+ 5 StandardScalerWrapper LassoLars 0:00:09 0.9440 0.9468
+ 6 StandardScalerWrapper LightGBM 0:00:10 0.9282 0.9468
+ 7 StandardScalerWrapper RandomForest 0:00:12 0.8946 0.9468
+ 8 StandardScalerWrapper LassoLars 0:00:16 0.9439 0.9468
+ 9 MinMaxScaler ExtremeRandomTrees 0:00:35 0.9199 0.9468
+ 10 RobustScaler ExtremeRandomTrees 0:00:19 0.9411 0.9468
+ 11 StandardScalerWrapper ExtremeRandomTrees 0:00:13 0.9077 0.9468
+ 12 StandardScalerWrapper LassoLars 0:00:15 0.9433 0.9468
+ 13 MinMaxScaler ExtremeRandomTrees 0:00:14 0.9186 0.9468
+ 14 RobustScaler RandomForest 0:00:10 0.8810 0.9468
+ 15 StandardScalerWrapper LassoLars 0:00:55 0.9433 0.9468
+ 16 StandardScalerWrapper ExtremeRandomTrees 0:00:13 0.9026 0.9468
+ 17 StandardScalerWrapper RandomForest 0:00:13 0.9140 0.9468
+ 18 VotingEnsemble 0:00:23 0.9471 0.9471
+ 19 StackEnsemble 0:00:27 0.9463 0.9471
+```
+
+## Explore the results
+
+Explore the results of automatic training with a [Jupyter widget](/python/api/azureml-widgets/azureml.widgets). The widget allows you to see a graph and table of all individual job iterations, along with training accuracy metrics and metadata. Additionally, you can filter on different accuracy metrics than your primary metric with the dropdown selector.
+
+```python
+from azureml.widgets import RunDetails
+RunDetails(local_run).show()
+```
+
+![Jupyter widget run details](./media/how-to-auto-train-models/automl-dash-output.png)
+![Jupyter widget plot](./media/how-to-auto-train-models/automl-chart-output.png)
+
+### Retrieve the best model
+
+Select the best model from your iterations. The `get_output` function returns the best run and the fitted model for the last fit invocation. By using the overloads on `get_output`, you can retrieve the best run and fitted model for any logged metric or a particular iteration.
+
+```python
+best_run, fitted_model = local_run.get_output()
+print(best_run)
+print(fitted_model)
+```
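+
+If you want a model other than the overall best, `get_output` also accepts a metric name or an iteration number. The following is a sketch, assuming the run above has completed and that `r2_score` was among the logged regression metrics:
+
+```python
+# Best run and fitted model as measured by a different logged metric
+r2_run, r2_model = local_run.get_output(metric="r2_score")
+
+# Run and fitted model produced by a specific iteration, for example iteration 3
+third_run, third_model = local_run.get_output(iteration=3)
+```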
+
+### Test the best model accuracy
+
+Use the best model to run predictions on the test data set to predict taxi fares. The function `predict` uses the best model and predicts the values of y, **trip cost**, from the `x_test` data set. Print the first 10 predicted cost values from `y_predict`.
+
+```python
+y_test = x_test.pop("totalAmount")
+
+y_predict = fitted_model.predict(x_test)
+print(y_predict[:10])
+```
+
+Calculate the `root mean squared error` of the results. Convert the `y_test` dataframe to a list to compare to the predicted values. The function `mean_squared_error` takes two arrays of values and calculates the average squared error between them. Taking the square root of the result gives an error in the same units as the y variable, **cost**. It indicates roughly how far the taxi fare predictions are from the actual fares.
+
+```python
+from sklearn.metrics import mean_squared_error
+from math import sqrt
+
+y_actual = y_test.values.flatten().tolist()
+rmse = sqrt(mean_squared_error(y_actual, y_predict))
+rmse
+```
+
+Run the following code to calculate mean absolute percent error (MAPE) by using the full `y_actual` and `y_predict` data sets. This metric calculates an absolute difference between each predicted and actual value and sums all the differences. Then it expresses that sum as a percent of the total of the actual values.
+
+```python
+sum_actuals = sum_errors = 0
+
+for actual_val, predict_val in zip(y_actual, y_predict):
+ abs_error = actual_val - predict_val
+ if abs_error < 0:
+ abs_error = abs_error * -1
+
+ sum_errors = sum_errors + abs_error
+ sum_actuals = sum_actuals + actual_val
+
+mean_abs_percent_error = sum_errors / sum_actuals
+print("Model MAPE:")
+print(mean_abs_percent_error)
+print()
+print("Model Accuracy:")
+print(1 - mean_abs_percent_error)
+```
+
+```output
+Model MAPE:
+0.14353867606052823
+
+Model Accuracy:
+0.8564613239394718
+```
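+
+The same MAPE calculation can be written more compactly with NumPy. This is a minimal sketch that assumes the `y_actual` and `y_predict` values produced above:
+
+```python
+import numpy as np
+
+# Sum of absolute errors divided by the sum of actual values (same definition as the loop above)
+mape = np.abs(np.array(y_actual) - np.array(y_predict)).sum() / np.sum(y_actual)
+print("Model MAPE:", mape)
+print("Model Accuracy:", 1 - mape)
+```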
++
+From the two prediction accuracy metrics, you see that the model is fairly good at predicting taxi fares from the data set's features, typically within +- $4.00, and approximately 15% error.
+
+The traditional machine learning model development process is highly resource-intensive, and requires significant domain knowledge and time investment to run and compare the results of dozens of models. Using automated machine learning is a great way to rapidly test many different models for your scenario.
+
+## Clean up resources
+
+Do not complete this section if you plan on running other Azure Machine Learning tutorials.
+
+### Stop the compute instance
++
+### Delete everything
+
+If you don't plan to use the resources you created, delete them, so you don't incur any charges.
+
+1. In the Azure portal, select **Resource groups** on the far left.
+1. From the list, select the resource group you created.
+1. Select **Delete resource group**.
+1. Enter the resource group name. Then select **Delete**.
+
+You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**.
+
+## Next steps
+
+In this automated machine learning article, you did the following tasks:
+
+> [!div class="checklist"]
+> * Configured a workspace and prepared data for an experiment.
+> * Trained by using an automated regression model locally with custom parameters.
+> * Explored and reviewed training results.
+
+[Set up AutoML to train computer vision models with Python (v1)](how-to-auto-train-image-models-v1.md)
machine-learning How To Auto Train Nlp Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models-v1.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](../
[!INCLUDE [automl-sdk-version](../../../includes/machine-learning-automl-sdk-version.md)]
-* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](../tutorial-auto-train-models.md) or [how-to](../how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](how-to-auto-train-models-v1.md) or [how-to](../how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
## Select your NLP task
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
In this guide, learn how to set up an automated machine learning, AutoML, training run with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro) using Azure Machine Learning automated ML. Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
-For an end to end example, see [Tutorial: AutoML- train regression model](../tutorial-auto-train-models.md).
+For an end to end example, see [Tutorial: AutoML- train regression model](how-to-auto-train-models-v1.md).
If you prefer a no-code experience, you can also [Set up no-code AutoML training in the Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md).
For general information on how model explanations and feature importance can be
+ Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
-+ Learn more about [how to train a regression model with Automated machine learning](../tutorial-auto-train-models.md).
++ Learn more about [how to train a regression model with Automated machine learning](how-to-auto-train-models-v1.md). + [Troubleshoot automated ML experiments](../how-to-troubleshoot-auto-ml.md).
machine-learning How To Convert Ml Experiment To Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-convert-ml-experiment-to-production.md
+
+ Title: Convert notebook code into Python scripts
+
+description: Turn your machine learning experimental notebooks into production-ready code using the MLOpsPython code template. You can then test, deploy, and automate that code.
+++++ Last updated : 10/21/2021+++
+# Convert ML experiments to production Python code
++
+In this tutorial, you learn how to convert Jupyter notebooks into Python scripts that are friendly to testing and automation, using the MLOpsPython code template and Azure Machine Learning. Typically, this process is used to take experimentation / training code from a Jupyter notebook and convert it into Python scripts. Those scripts can then be used for testing and CI/CD automation in your production environment.
+
+A machine learning project requires experimentation where hypotheses are tested with agile tools like Jupyter Notebook using real datasets. Once the model is ready for production, the model code should be placed in a production code repository. In some cases, the model code must be converted to Python scripts to be placed in the production code repository. This tutorial covers a recommended approach on how to export experimentation code to Python scripts.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Clean nonessential code
+> * Refactor Jupyter Notebook code into functions
+> * Create Python scripts for related tasks
+> * Create unit tests
+
+## Prerequisites
+
+- Generate the [MLOpsPython template](https://github.com/microsoft/MLOpsPython/generate)
+and use the `experimentation/Diabetes Ridge Regression Training.ipynb` and `experimentation/Diabetes Ridge Regression Scoring.ipynb` notebooks. These notebooks are used as an example of converting from experimentation to production. You can find these notebooks at [https://github.com/microsoft/MLOpsPython/tree/master/experimentation](https://github.com/microsoft/MLOpsPython/tree/master/experimentation).
+- Install `nbconvert`. Follow only the installation instructions under section __Installing nbconvert__ on the [Installation](https://nbconvert.readthedocs.io/en/latest/install.html) page.
+
+## Remove all nonessential code
+
+Some code written during experimentation is only intended for exploratory purposes. Therefore, the first step to convert experimental code into production code is to remove this nonessential code. Removing nonessential code will also make the code more maintainable. In this section, you'll remove code from the `experimentation/Diabetes Ridge Regression Training.ipynb` notebook. The statements printing the shape of `X` and `y` and the cell calling `features.describe` are just for data exploration and can be removed. After removing nonessential code, `experimentation/Diabetes Ridge Regression Training.ipynb` should look like the following code without markdown:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import Ridge
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+import joblib
+import pandas as pd
+
+sample_data = load_diabetes()
+
+df = pd.DataFrame(
+ data=sample_data.data,
+ columns=sample_data.feature_names)
+df['Y'] = sample_data.target
+
+X = df.drop('Y', axis=1).values
+y = df['Y'].values
+
+X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=0)
+data = {"train": {"X": X_train, "y": y_train},
+ "test": {"X": X_test, "y": y_test}}
+
+args = {
+ "alpha": 0.5
+}
+
+reg_model = Ridge(**args)
+reg_model.fit(data["train"]["X"], data["train"]["y"])
+
+preds = reg_model.predict(data["test"]["X"])
+mse = mean_squared_error(preds, y_test)
+metrics = {"mse": mse}
+print(metrics)
+
+model_name = "sklearn_regression_model.pkl"
+joblib.dump(value=reg_model, filename=model_name)
+```
+
+## Refactor code into functions
+
+Second, the Jupyter code needs to be refactored into functions. Refactoring code into functions makes unit testing easier and makes the code more maintainable. In this section, you'll refactor:
+
+- The Diabetes Ridge Regression Training notebook (`experimentation/Diabetes Ridge Regression Training.ipynb`)
+- The Diabetes Ridge Regression Scoring notebook (`experimentation/Diabetes Ridge Regression Scoring.ipynb`)
+
+### Refactor Diabetes Ridge Regression Training notebook into functions
+
+In `experimentation/Diabetes Ridge Regression Training.ipynb`, complete the following steps:
+
+1. Create a function called `split_data` to split the data frame into test and train data. The function should take the dataframe `df` as a parameter, and return a dictionary containing the keys `train` and `test`.
+
+ Move the code under the *Split Data into Training and Validation Sets* heading into the `split_data` function and modify it to return the `data` object.
+
+1. Create a function called `train_model`, which takes the parameters `data` and `args` and returns a trained model.
+
+ Move the code under the heading *Training Model on Training Set* into the `train_model` function and modify it to return the `reg_model` object. Remove the `args` dictionary, the values will come from the `args` parameter.
+
+1. Create a function called `get_model_metrics`, which takes parameters `reg_model` and `data`, and evaluates the model then returns a dictionary of metrics for the trained model.
+
+ Move the code under the *Validate Model on Validation Set* heading into the `get_model_metrics` function and modify it to return the `metrics` object.
+
+The three functions should be as follows:
+
+```python
+# Split the dataframe into test and train data
+def split_data(df):
+ X = df.drop('Y', axis=1).values
+ y = df['Y'].values
+
+ X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=0)
+ data = {"train": {"X": X_train, "y": y_train},
+ "test": {"X": X_test, "y": y_test}}
+ return data
++
+# Train the model, return the model
+def train_model(data, args):
+ reg_model = Ridge(**args)
+ reg_model.fit(data["train"]["X"], data["train"]["y"])
+ return reg_model
++
+# Evaluate the metrics for the model
+def get_model_metrics(reg_model, data):
+ preds = reg_model.predict(data["test"]["X"])
+ mse = mean_squared_error(preds, data["test"]["y"])
+ metrics = {"mse": mse}
+ return metrics
+```
+
+Still in `experimentation/Diabetes Ridge Regression Training.ipynb`, complete the following steps:
+
+1. Create a new function called `main`, which takes no parameters and returns nothing.
+1. Move the code under the "Load Data" heading into the `main` function.
+1. Add invocations for the newly written functions into the `main` function:
+ ```python
+ # Split Data into Training and Validation Sets
+ data = split_data(df)
+ ```
+
+ ```python
+ # Train Model on Training Set
+ args = {
+ "alpha": 0.5
+ }
+ reg = train_model(data, args)
+ ```
+
+ ```python
+ # Validate Model on Validation Set
+ metrics = get_model_metrics(reg, data)
+ ```
+1. Move the code under the "Save Model" heading into the `main` function.
+
+The `main` function should look like the following code:
+
+```python
+def main():
+ # Load Data
+ sample_data = load_diabetes()
+
+ df = pd.DataFrame(
+ data=sample_data.data,
+ columns=sample_data.feature_names)
+ df['Y'] = sample_data.target
+
+ # Split Data into Training and Validation Sets
+ data = split_data(df)
+
+ # Train Model on Training Set
+ args = {
+ "alpha": 0.5
+ }
+ reg = train_model(data, args)
+
+ # Validate Model on Validation Set
+ metrics = get_model_metrics(reg, data)
+
+ # Save Model
+ model_name = "sklearn_regression_model.pkl"
+
+ joblib.dump(value=reg, filename=model_name)
+```
+
+At this stage, there should be no code remaining in the notebook that isn't in a function, other than import statements in the first cell.
+
+Add a statement that calls the `main` function.
+
+```python
+main()
+```
+
+After refactoring, `experimentation/Diabetes Ridge Regression Training.ipynb` should look like the following code without the markdown:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import Ridge
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+import pandas as pd
+import joblib
++
+# Split the dataframe into test and train data
+def split_data(df):
+ X = df.drop('Y', axis=1).values
+ y = df['Y'].values
+
+ X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=0)
+ data = {"train": {"X": X_train, "y": y_train},
+ "test": {"X": X_test, "y": y_test}}
+ return data
++
+# Train the model, return the model
+def train_model(data, args):
+ reg_model = Ridge(**args)
+ reg_model.fit(data["train"]["X"], data["train"]["y"])
+ return reg_model
++
+# Evaluate the metrics for the model
+def get_model_metrics(reg_model, data):
+ preds = reg_model.predict(data["test"]["X"])
+ mse = mean_squared_error(preds, data["test"]["y"])
+ metrics = {"mse": mse}
+ return metrics
++
+def main():
+ # Load Data
+ sample_data = load_diabetes()
+
+ df = pd.DataFrame(
+ data=sample_data.data,
+ columns=sample_data.feature_names)
+ df['Y'] = sample_data.target
+
+ # Split Data into Training and Validation Sets
+ data = split_data(df)
+
+ # Train Model on Training Set
+ args = {
+ "alpha": 0.5
+ }
+ reg = train_model(data, args)
+
+ # Validate Model on Validation Set
+ metrics = get_model_metrics(reg, data)
+
+ # Save Model
+ model_name = "sklearn_regression_model.pkl"
+
+ joblib.dump(value=reg, filename=model_name)
+
+main()
+```
+
+### Refactor Diabetes Ridge Regression Scoring notebook into functions
+
+In `experimentation/Diabetes Ridge Regression Scoring.ipynb`, complete the following steps:
+
+1. Create a new function called `init`, which takes no parameters and returns nothing.
+1. Copy the code under the "Load Model" heading into the `init` function.
+
+The `init` function should look like the following code:
+
+```python
+def init():
+ model_path = Model.get_model_path(
+ model_name="sklearn_regression_model.pkl")
+ model = joblib.load(model_path)
+```
+
+Once the `init` function has been created, replace all the code under the heading "Load Model" with a single call to `init` as follows:
+
+```python
+init()
+```
+
+In `experimentation/Diabetes Ridge Regression Scoring.ipynb`, complete the following steps:
+
+1. Create a new function called `run`, which takes `raw_data` and `request_headers` as parameters and returns a dictionary of results as follows:
+
+ ```python
+ {"result": result.tolist()}
+ ```
+
+1. Copy the code under the "Prepare Data" and "Score Data" headings into the `run` function.
+
+ The `run` function should look like the following code (Remember to remove the statements that set the variables `raw_data` and `request_headers`, which will be used later when the `run` function is called):
+
+ ```python
+ def run(raw_data, request_headers):
+ data = json.loads(raw_data)["data"]
+ data = numpy.array(data)
+ result = model.predict(data)
+
+ return {"result": result.tolist()}
+ ```
+
+Once the `run` function has been created, replace all the code under the "Prepare Data" and "Score Data" headings with the following code:
+
+```python
+raw_data = '{"data":[[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}'
+request_header = {}
+prediction = run(raw_data, request_header)
+print("Test result: ", prediction)
+```
+
+The previous code sets variables `raw_data` and `request_header`, calls the `run` function with `raw_data` and `request_header`, and prints the predictions.
+
+After refactoring, `experimentation/Diabetes Ridge Regression Scoring.ipynb` should look like the following code without the markdown:
+
+```python
+import json
+import numpy
+from azureml.core.model import Model
+import joblib
+
+def init():
+ model_path = Model.get_model_path(
+ model_name="sklearn_regression_model.pkl")
+ model = joblib.load(model_path)
+
+def run(raw_data, request_headers):
+ data = json.loads(raw_data)["data"]
+ data = numpy.array(data)
+ result = model.predict(data)
+
+ return {"result": result.tolist()}
+
+init()
+test_row = '{"data":[[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}'
+request_header = {}
+prediction = run(test_row, {})
+print("Test result: ", prediction)
+```
+
+## Combine related functions in Python files
+
+Third, related functions need to be merged into Python files to better support code reuse. In this section, you'll be creating Python files for the following notebooks:
+
+- The Diabetes Ridge Regression Training notebook (`experimentation/Diabetes Ridge Regression Training.ipynb`)
+- The Diabetes Ridge Regression Scoring notebook (`experimentation/Diabetes Ridge Regression Scoring.ipynb`)
+
+### Create Python file for the Diabetes Ridge Regression Training notebook
+
+Convert your notebook to an executable script by running the following statement in a command prompt, which uses the `nbconvert` package and the path of `experimentation/Diabetes Ridge Regression Training.ipynb`:
+
+```
+jupyter nbconvert "Diabetes Ridge Regression Training.ipynb" --to script --output train
+```
+
+Once the notebook has been converted to `train.py`, remove any unwanted comments. Replace the call to `main()` at the end of the file with a conditional invocation like the following code:
+
+```python
+if __name__ == '__main__':
+ main()
+```
+
+Your `train.py` file should look like the following code:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import Ridge
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+import pandas as pd
+import joblib
++
+# Split the dataframe into test and train data
+def split_data(df):
+ X = df.drop('Y', axis=1).values
+ y = df['Y'].values
+
+ X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=0)
+ data = {"train": {"X": X_train, "y": y_train},
+ "test": {"X": X_test, "y": y_test}}
+ return data
++
+# Train the model, return the model
+def train_model(data, args):
+ reg_model = Ridge(**args)
+ reg_model.fit(data["train"]["X"], data["train"]["y"])
+ return reg_model
++
+# Evaluate the metrics for the model
+def get_model_metrics(reg_model, data):
+ preds = reg_model.predict(data["test"]["X"])
+ mse = mean_squared_error(preds, data["test"]["y"])
+ metrics = {"mse": mse}
+ return metrics
++
+def main():
+ # Load Data
+ sample_data = load_diabetes()
+
+ df = pd.DataFrame(
+ data=sample_data.data,
+ columns=sample_data.feature_names)
+ df['Y'] = sample_data.target
+
+ # Split Data into Training and Validation Sets
+ data = split_data(df)
+
+ # Train Model on Training Set
+ args = {
+ "alpha": 0.5
+ }
+ reg = train_model(data, args)
+
+ # Validate Model on Validation Set
+ metrics = get_model_metrics(reg, data)
+
+ # Save Model
+ model_name = "sklearn_regression_model.pkl"
+
+ joblib.dump(value=reg, filename=model_name)
+
+if __name__ == '__main__':
+ main()
+```
+
+`train.py` can now be invoked from a terminal by running `python train.py`.
+The functions from `train.py` can also be called from other files.
+
+The `train_aml.py` file found in the `diabetes_regression/training` directory in the MLOpsPython repository calls the functions defined in `train.py` in the context of an Azure Machine Learning experiment job. The functions can also be called in unit tests, covered later in this guide.
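+
+For example, another script or test module could import the refactored functions directly. This is a sketch that assumes `train.py` is importable from the current working directory:
+
+```python
+# Reuse the refactored training functions from another file
+from train import split_data, train_model, get_model_metrics
+```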
+
+### Create Python file for the Diabetes Ridge Regression Scoring notebook
+
+Convert your notebook to an executable script by running the following statement in a command prompt, which uses the `nbconvert` package and the path of `experimentation/Diabetes Ridge Regression Scoring.ipynb`:
+
+```
+jupyter nbconvert "Diabetes Ridge Regression Scoring.ipynb" --to script --output score
+```
+
+Once the notebook has been converted to `score.py`, remove any unwanted comments. Your `score.py` file should look like the following code:
+
+```python
+import json
+import numpy
+from azureml.core.model import Model
+import joblib
+
+def init():
+ model_path = Model.get_model_path(
+ model_name="sklearn_regression_model.pkl")
+ model = joblib.load(model_path)
+
+def run(raw_data, request_headers):
+ data = json.loads(raw_data)["data"]
+ data = numpy.array(data)
+ result = model.predict(data)
+
+ return {"result": result.tolist()}
+
+init()
+test_row = '{"data":[[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}'
+request_header = {}
+prediction = run(test_row, request_header)
+print("Test result: ", prediction)
+```
+
+The `model` variable needs to be global so that it's visible throughout the script. Add the following statement at the beginning of the `init` function:
+
+```python
+global model
+```
+
+After adding the previous statement, the `init` function should look like the following code:
+
+```python
+def init():
+ global model
+
+ # load the model from file into a global object
+ model_path = Model.get_model_path(
+ model_name="sklearn_regression_model.pkl")
+ model = joblib.load(model_path)
+```
+
+## Create unit tests for each Python file
+
+Fourth, create unit tests for your Python functions. Unit tests protect code against functional regressions and make it easier to maintain. In this section, you'll be creating unit tests for the functions in `train.py`.
+
+`train.py` contains multiple functions, but we'll only create a single unit test for the `train_model` function using the Pytest framework in this tutorial. Pytest isn't the only Python unit testing framework, but it's one of the most commonly used. For more information, visit [Pytest](https://pytest.org).
+
+A unit test usually contains three main actions:
+
+- Arrange object - creating and setting up necessary objects
+- Act on an object
+- Assert what is expected
+
+The unit test will call `train_model` with some hard-coded data and arguments, and validate that `train_model` acted as expected by using the resulting trained model to make a prediction and comparing that prediction to an expected value.
+
+```python
+import numpy as np
+from code.training.train import train_model
++
+def test_train_model():
+ # Arrange
+ X_train = np.array([1, 2, 3, 4, 5, 6]).reshape(-1, 1)
+ y_train = np.array([10, 9, 8, 8, 6, 5])
+ data = {"train": {"X": X_train, "y": y_train}}
+
+ # Act
+ reg_model = train_model(data, {"alpha": 1.2})
+
+ # Assert
+ preds = reg_model.predict([[1], [2]])
+ np.testing.assert_almost_equal(preds, [9.93939393939394, 9.03030303030303])
+```
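+
+Assuming the test file is saved in the repository's test folder and pytest is installed (both assumptions about your setup), you could run it from the repository root with:
+
+```
+python -m pytest
+```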
+
+## Next steps
+
+Now that you understand how to convert from an experiment to production code, see the following links for more information and next steps:
++ [MLOpsPython](https://github.com/microsoft/MLOpsPython/blob/master/docs/custom_model.md): Build a CI/CD pipeline to train, evaluate and deploy your own model using Azure Pipelines and Azure Machine Learning
++ [Monitor Azure ML experiment jobs and metrics](how-to-log-view-metrics.md)
++ [Monitor and collect data from ML web service endpoints](how-to-enable-app-insights.md)
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
Last updated 05/11/2022
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](how-to-create-register-datasets.md)
-> * [v2 (current version)](../how-to-create-register-data-assets.md)
+> * [v2 (current version)](../how-to-create-data-assets.md)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
## <a id="hybrid-real-time-migration"></a>Use hybrid cluster for real-time migration
-The above instructions provide guidance for configuring a hybrid cluster. However, this is also a great way of achieving a seamless zero-downtime migration. If you have an on-premise or other Cassandra environment that you want to decommission with zero downtime, in favour of running your workload in Azure Managed Instance for Apache Cassandra, the following steps must be completed in this order:
+The above instructions provide guidance for configuring a hybrid cluster. However, this is also a great way of achieving a seamless zero-downtime migration. If you have an on-premises or other Cassandra environment that you want to decommission with zero downtime, in favour of running your workload in Azure Managed Instance for Apache Cassandra, the following steps must be completed in this order:
1. Configure hybrid cluster - follow the instructions above. 1. Temporarily disable automatic repairs in Azure Managed Instance for Apache Cassandra for the duration of the migration:
marketplace Dynamics 365 Business Central Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-availability.md
description: Configure Dynamics 365 Business Central offer availability on Micro
--++ Last updated 11/24/2021
marketplace Dynamics 365 Business Central Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-listing.md
description: Configure Dynamics 365 Business Central offer listing details on Mi
--++ Last updated 03/15/2022
marketplace Dynamics 365 Business Central Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-setup.md
description: Create a Dynamics 365 Business Central offer on Microsoft AppSource
--++ Last updated 07/20/2022
marketplace Dynamics 365 Business Central Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-properties.md
description: Configure Dynamics 365 Business Central offer properties on Microso
--++ Last updated 11/24/2021
marketplace Dynamics 365 Business Central Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-supplemental-content.md
description: Set up Dynamics 365 Business Central offer supplemental content on
--++ Last updated 12/04/2021
marketplace Dynamics 365 Business Central Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-technical-configuration.md
description: Set up Dynamics 365 Business Central offer technical configuration
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-availability.md
description: Configure Dynamics 365 apps on Dataverse and Power Apps offer avail
--++ Last updated 05/25/2022
marketplace Dynamics 365 Customer Engage Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-listing.md
description: Configure Dynamics 365 apps on Dataverse and Power App offer listin
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
description: Create a Dynamics 365 apps on Dataverse and Power Apps offer on Mic
--++ Last updated 07/18/2022
marketplace Dynamics 365 Customer Engage Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-plans.md
description: Configure Dynamics 365 apps on Dataverse and Power Apps offer plans
--++ Last updated 05/25/2022
marketplace Dynamics 365 Customer Engage Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-properties.md
description: Configure Dynamics 365 apps on Dataverse and Power Apps offer prope
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-supplemental-content.md
description: Set up DDynamics 365 apps on Dataverse and Power Apps offer supplem
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-technical-configuration.md
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-availability.md
description: Configure Dynamics 365 Operations Apps offer availability on Micros
--++ Last updated 12/04/2021
marketplace Dynamics 365 Operations Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-listing.md
description: Configure Dynamics 365 for Operations Apps offer listing details on
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-setup.md
description: Create a Dynamics 365 Operations Apps offer on Microsoft AppSource
--++ Last updated 07/20/2022
marketplace Dynamics 365 Operations Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-properties.md
description: Configure Dynamics 365 Operations Apps offer properties on Microsof
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-supplemental-content.md
description: Set up Dynamics 365 Operations Apps offer supplemental content on M
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-technical-configuration.md
description: Set up Dynamics 365 Operations Apps offer technical configuration o
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-validation.md
description: Functionally validate a Dynamics 365 Operations Apps offer in Micro
--++ Last updated 12/03/2021
marketplace Dynamics 365 Review Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-review-publish.md
description: Review and publish a Dynamics 365 offer to Microsoft AppSource (Azu
--++ Last updated 08/01/2022
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-dynamics-365.md
description: Plan Dynamics 365 offers for Microsoft AppSource
--++ Last updated 06/29/2022
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
Azure Migrate: Server Migration provides agentless replication options for the m
The agentless replication option works by using mechanisms provided by the virtualization provider (VMware, Hyper-V). In the case of VMware virtual machines, the agentless replication mechanism uses VMware snapshots and VMware changed block tracking technology to replicate data from virtual machine disks. This mechanism is similar to the one used by many backup products. In the case of Hyper-V virtual machines, the agentless replication mechanism uses VM snapshots and the change tracking capability of the Hyper-V replica to replicate data from virtual machine disks. When replication is configured for a virtual machine, it first goes through an initial replication phase. During initial replication, a VM snapshot is taken, and a full copy of data from the snapshot disks are replicated to managed disks in your subscription. After initial replication for the VM is complete, the replication process transitions to an incremental replication (delta replication) phase. In the incremental replication phase, data changes that have occurred since the last completed replication cycle are periodically replicated and applied to the replica managed disks, thus keeping replication in sync with changes happening on the VM. In the case of VMware virtual machines, VMware changed block tracking technology is used to keep track of changes between replication cycles. At the start of the replication cycle, a VM snapshot is taken and changed block tracking is used to get the changes between the current snapshot and the last successfully replicated snapshot. That way only data that has changed since the last completed replication cycle needs to be replicated to keep replication for the VM in sync. At the end of each replication cycle, the snapshot is released, and snapshot consolidation is performed for the virtual machine. Similarly, in the case of Hyper-V virtual machines, the Hyper-V replica change tracking engine is used to keep track of changes between consecutive replication cycles.
-When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premise virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
+When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premises virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
To get started, refer the [VMware agentless migration](./tutorial-migrate-vmware.md) and [Hyper-V agentless migration](./tutorial-migrate-hyper-v.md) tutorials.
The agentless replication option works by using mechanisms provided by the virtu
When replication is configured for a virtual machine, it first goes through an initial replication phase. During initial replication, a VM snapshot is taken, and a full copy of data from the snapshot disks are replicated to managed disks in your subscription. After initial replication for the VM is complete, the replication process transitions to an incremental replication (delta replication) phase. In the incremental replication phase, data changes that have occurred since the last completed replication cycle are periodically replicated and applied to the replica managed disks, thus keeping replication in sync with changes happening on the VM. In the case of VMware virtual machines, VMware changed block tracking technology is used to keep track of changes between replication cycles. At the start of the replication cycle, a VM snapshot is taken and changed block tracking is used to get the changes between the current snapshot and the last successfully replicated snapshot. That way only data that has changed since the last completed replication cycle needs to be replicated to keep replication for the VM in sync. At the end of each replication cycle, the snapshot is released, and snapshot consolidation is performed for the virtual machine. Similarly, in the case of Hyper-V virtual machines, the Hyper-V replica change tracking engine is used to keep track of changes between consecutive replication cycles.
-When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premise virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
+When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premises virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
To get started, refer to the [Hyper-V agentless migration](./tutorial-migrate-hyper-v.md) tutorial.
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-server-portal.md
The time required to complete a restart depends on the MySQL recovery process. T
To complete this how-to guide, you need: - An [Azure Database for MySQL Flexible server](quickstart-create-server-portal.md)
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write permission on the server.
+ ## Perform server restart The following steps restart the MySQL server:
mysql How To Restart Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-stop-start-server-cli.md
az mysql flexible-server start
## Restart a server To restart a server, run the ```az mysql flexible-server restart``` command. If you are using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write permission on the server.
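Before the full usage below, here's a minimal sketch of an explicit invocation (the resource group and server names are placeholders):

```azurecli
# Restart a flexible server by name; replace the placeholder values with your own.
az mysql flexible-server restart --resource-group myresourcegroup --name mydemoserver
```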
+ **Usage:** ```azurecli az mysql flexible-server restart [--name]
mysql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-cli.md
To complete this how-to guide:
- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write permission on the server.
+ ## Restart the server Restart the server with the following command:
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-portal.md
The time required to complete a restart depends on the MySQL recovery process. T
To complete this how-to guide, you need: - An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write permission on the server.
++ ## Perform server restart The following steps restart the MySQL server:
mysql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-powershell.md
To complete this how-to guide, you need:
> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az > PowerShell module releases and available natively from within Azure Cloud Shell.
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write permission on the server.
+ If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
Restart the server with the following command:
Restart-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup ``` + ## Next steps > [!div class="nextstepaction"]
payment-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/overview.md
Azure Payment HSM is a highly specialized service. Therefore, we recommend that you
Momentum is building as financial institutions move some or all of their payment applications to the cloud. This entails a migration from the legacy on-premises (on-prem) applications and HSMs to a cloud-based infrastructure that isn't generally under their direct control. Often it means a subscription service rather than perpetual ownership of physical equipment and software. Corporate initiatives for efficiency and a scaled-down physical presence are the drivers for this. Conversely, with cloud-native organizations, the adoption of cloud-first without any on-premises presence is their fundamental business model. Whatever the reason, end users of a cloud-based payment infrastructure expect reduced IT complexity, streamlined security compliance, and flexibility to scale their solution seamlessly as their business grows.
-The cloud offers significant benefits, but challenges when migrating a legacy on-premise payment application (involving payment HSMs) to the cloud must be addressed. Some of these are:
+The cloud offers significant benefits, but challenges when migrating a legacy on-premises payment application (involving payment HSMs) to the cloud must be addressed. Some of these are:
- Shared responsibility and trust – what potential loss of control in some areas is acceptable? - Latency – how can an efficient, high-performance link between the application and HSM be achieved?
End users of the service can leverage Microsoft security and compliance investme
### Customer-managed HSM in Azure
-The Azure Payment HSM is a part of a subscription service that offers single-tenant HSMs for the service customer to have complete administrative control and exclusive access to the HSM. The customer could be a payment service provider acting on behalf of multiple financial institutions or a financial institution that wishes to directly access the Azure Payment HSM service. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released to ensure complete privacy and security is maintained. The customer is responsible for ensuring sufficient HSM subscriptions are active to meet their requirements for backup, disaster recovery, and resilience to achieve the same performance available on their on-premise HSMs.
+The Azure Payment HSM is a part of a subscription service that offers single-tenant HSMs for the service customer to have complete administrative control and exclusive access to the HSM. The customer could be a payment service provider acting on behalf of multiple financial institutions or a financial institution that wishes to directly access the Azure Payment HSM service. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released to ensure complete privacy and security is maintained. The customer is responsible for ensuring sufficient HSM subscriptions are active to meet their requirements for backup, disaster recovery, and resilience to achieve the same performance available on their on-premises HSMs.
### Accelerate digital transformation and innovation in cloud
-For existing Thales payShield customers wishing to add a cloud option, the Azure Payment HSM solution offers native access to a payment HSM in Azure for "lift and shift" while still experiencing the low latency they're accustomed to via their on-premise payShield HSMs. The solution also offers high-performance transactions for mission-critical payment applications. Consequently, customers can continue their digital transformation strategy by leveraging technology innovation in the cloud. Existing Thales payShield customers can utilize their existing remote management solutions (payShield Manager and payShield TMD together with associated smart card readers and smart cards as appropriate) to work with the Azure Payment HSM service. Customers new to payShield can source the hardware accessories from Thales or one of its partners before deploying their HSM as part of the subscription service.
+For existing Thales payShield customers wishing to add a cloud option, the Azure Payment HSM solution offers native access to a payment HSM in Azure for "lift and shift" while still experiencing the low latency they're accustomed to via their on-premises payShield HSMs. The solution also offers high-performance transactions for mission-critical payment applications. Consequently, customers can continue their digital transformation strategy by leveraging technology innovation in the cloud. Existing Thales payShield customers can utilize their existing remote management solutions (payShield Manager and payShield TMD together with associated smart card readers and smart cards as appropriate) to work with the Azure Payment HSM service. Customers new to payShield can source the hardware accessories from Thales or one of its partners before deploying their HSM as part of the subscription service.
## Typical use cases
Sensitive data protection
## Suitable for both existing and new payment HSM users
-The solution provides clear benefits for both Payment HSM users with a legacy on-premise HSM footprint and those new payment ecosystem entrants with no legacy infrastructure to support and who may choose a cloud-native approach from the outset.
+The solution provides clear benefits for both Payment HSM users with a legacy on-premises HSM footprint and those new payment ecosystem entrants with no legacy infrastructure to support and who may choose a cloud-native approach from the outset.
-Benefits for existing on-premise HSM users
+Benefits for existing on-premises HSM users
- Requires no modifications to payment applications or HSM software to migrate existing applications to the Azure solution - Enables more flexibility and efficiency in HSM utilization - Simplifies HSM sharing between multiple teams, geographically dispersed
Benefits for existing on-premise HSM users
- Improves cash flow for new projects Benefits for new payment participants-- Avoids introduction of on-premise HSM infrastructure
+- Avoids introduction of on-premises HSM infrastructure
- Lowers upfront investment via the Azure subscription model - Offers access to latest certified hardware and software on-demand
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
In this step, you'll create the mobile network site resource representing the ph
1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section. Note the following:
- - Use the same value for both the **N2 subnet** and **N3 subnet** fields (if this site will support 5G user equipment (UEs)).
- - Use the same value for both the **N2 gateway** and **N3 gateway** fields (if this site will support 5G UEs).
- - Use the same value for both the **S1-MME subnet** and **S1-U subnet** fields (if this site will support 4G UEs).
- - Use the same value for both the **S1-MME gateway** and **S1-U gateway** fields (if this site will support 4G UEs).
-
-1. In the **Attached data networks** section, select **Add data network**. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. If you decided not to configure a DNS server, untick the **Specify DNS addresses for UEs?** checkbox.
+ - If this site will support 5G user equipment (UEs):
+ - **N2 interface name** and **N3 interface name** must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro device.
+ - **N2 subnet** must match **N3 subnet**.
+ - **N2 gateway** must match **N3 gateway**.
+ - If this site will support 4G UEs:
+ - **S1-MME interface name** and **S1-U interface name** must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro device.
+ - **S1-MME subnet** must match **S1-U subnet**.
+ - **S1-MME gateway** must match **S1-U gateway**.
+
+1. In the **Attached data networks** section, select **Add data network**. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
+ - **N6 interface name** (if this site will support 5G UEs) or **SGi interface name** (if this site will support 4G UEs) must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device.
+ - If you decided not to configure a DNS server, untick the **Specify DNS addresses for UEs?** checkbox.
:::image type="content" source="media/create-a-site/create-site-add-data-network.png" alt-text="Screenshot of the Azure portal showing the Add data network screen.":::
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
The Microsoft Purview governance portal uses a set of predefined roles to contro
:::image type="content" source="media/catalog-permissions/catalog-permission-role.svg" alt-text="Chart showing Microsoft Purview governance portal roles" lightbox="media/catalog-permissions/catalog-permission-role.svg"::: >[!NOTE]
-> **\*Data source administrator permissions on Policies** - Data source administrators are also able to publish data policies.
+> **\*Data curator** - Data curators can read insights only if they are assigned the Data curator role at the root collection level.
+> **\*\*Data source administrator permissions on Policies** - Data source administrators are also able to publish data policies.
## Understand how to use the Microsoft Purview governance portal's roles and collections
purview How To Deploy Profisee Purview Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-deploy-profisee-purview-integration.md
The reference architecture shows how both Microsoft Purview and Profisee MDM wor
## Microsoft Purview - Profisee integration deployment on Azure Kubernetes Service (AKS)
-1. Get the license file from Profisee by raising a support ticket on [https://support.profisee.com/](https://support.profisee.com/). Only pre-requisite for this step is your need to pre-determine the DNS resolved URL your Profisee setup on Azure. In other words, keep the DNS HOST NAME of the load balancer used in the deployment. It will be something like "[profisee_name].[region].cloudapp.azure.com".
+1. Get the license file from Profisee by raising a support ticket on [https://support.profisee.com/](https://support.profisee.com/). The only prerequisite for this step is that you pre-determine the DNS-resolved URL of your Profisee setup on Azure. In other words, keep the DNS HOST NAME of the load balancer used in the deployment. It will be something like "[profisee_name].[region].cloudapp.azure.com".
For example, DNSHOSTNAME="purviewprofisee.southcentralus.cloudapp.azure.com". Supply this DNSHOSTNAME to Profisee support when you raise the support ticket, and Profisee will respond with the license file. You'll need to supply this file during the configuration steps below.
-1. [Create a user-assigned managed identity in Azure](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. This managed identity must have the following permissions when running a deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity:
+1. [Create a user-assigned managed identity in Azure](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity:
- Contributor role to the resource group where AKS will be deployed. It can either be assigned directly to the resource group **OR** at the subscription level and down. - DNS Zone Contributor role to the particular DNS zone where the entry will be created **OR** Contributor role to the DNS Zone resource group. This DNS role is needed only if updating DNS hosted in Azure. - Application Administrator role in Azure Active Directory so the required permissions that are needed for the application registration can be assigned.
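As an illustrative sketch only (placeholder names throughout, and the exact roles you need depend on your ARM template choices), the managed identity and a Contributor assignment on the AKS resource group could be created with the Azure CLI:

```azurecli
# Create the user-assigned managed identity that will run the deployment (placeholder names).
az identity create --resource-group my-profisee-rg --name profisee-deploy-identity

# Look up the identity's principal ID, then grant it Contributor on the resource group where AKS will be deployed.
principalId=$(az identity show --resource-group my-profisee-rg --name profisee-deploy-identity --query principalId --output tsv)
az role assignment create --assignee "$principalId" --role "Contributor" --scope "/subscriptions/<subscription-id>/resourceGroups/my-profisee-rg"
```

The Application Administrator role is an Azure AD directory role and is assigned separately in Azure Active Directory, not through Azure RBAC commands like the ones above.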
An output response that looks similar to the above confirms successful installat
## Next steps Through this guide, we learned of the importance of MDM in driving and supporting Data Governance in the context of the Azure data estate, and how to set up and deploy a Microsoft Purview-Profisee integration.
-For more usage details on Profisee MDM, register for scheduled trainings, live product demonstration and Q&A on [Profisee Academy Tutorials and Demos](https://profisee.com/demo/)!
+For more usage details on Profisee MDM, register for scheduled trainings, live product demonstration and Q&A on [Profisee Academy Tutorials and Demos](https://profisee.com/demo/)!
purview How To Monitor Scan Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-scan-runs.md
Previously updated : 08/03/2022 Last updated : 08/19/2022 # Monitor scan runs in Microsoft Purview In Microsoft Purview, you can register and scan various types of data sources, and you can view the scan status over time. This article outlines how to monitor and get a bird's eye view of your scan runs in Microsoft Purview.
-> [!IMPORTANT]
-> The monitoring experience is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Monitor scan runs 1. Go to your Microsoft Purview account -> open **Microsoft Purview governance portal** -> **Data map** -> **Monitoring**. You need the **Data source admin** role on at least one collection to access this page, and you'll see the scan runs that belong to the collections on which you have data source admin privileges. 1. The high-level KPIs show total scan runs within a period. The time period defaults to the last 30 days; you can also choose to select the last seven days. Based on the time filter selected, you can see the distribution of successful, failed, and canceled scan runs by week or by day in the graph.
- :::image type="content" source="./media/how-to-monitor-scan-runs/monitor-scan-runs.png" alt-text="View scan runs over time":::
+ :::image type="content" source="./media/how-to-monitor-scan-runs/monitor-scan-runs.png" alt-text="View scan runs over time" lightbox="./media/how-to-monitor-scan-runs/monitor-scan-runs.png":::
1. At the bottom of the graph, there is a **View more** link for you to explore further. The link opens the **Scan status** page. Here you can see a scan name and the number of times it has succeeded, failed, or been canceled in the time period. You can also filter the list by source types.
purview Tutorial Atlas 2 2 Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-atlas-2-2-apis.md
In this tutorial, learn to programmatically interact with new Atlas 2.2 APIs wit
Business metadata is a template that contains custom attributes (key values). You can create these attributes globally and then apply them across multiple typedefs.
+### Atlas endpoint
+
+For all the requests, you'll need the Atlas endpoint for your Microsoft Purview account.
+
+1. Find your Microsoft Purview account in the [Azure portal](https://portal.azure.com)
+1. Select the **Properties** page on the left side menu
+1. Copy the **Atlas endpoint** value
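As a rough illustration of how the endpoint is then used (the resource URI, request path, and account name below are assumptions for this sketch, not values taken from the article), you can acquire an Azure AD bearer token and send it with each Atlas request:

```bash
# Placeholder Atlas endpoint copied from your account's Properties page.
ATLAS_ENDPOINT="https://<purview-account-name>.purview.azure.com/catalog"

# Acquire a bearer token for Microsoft Purview (assumed resource URI).
TOKEN=$(az account get-access-token --resource "https://purview.azure.net" --query accessToken --output tsv)

# Example request pattern: list existing type definitions (assumed Atlas 2.2 path).
curl -H "Authorization: Bearer $TOKEN" "$ATLAS_ENDPOINT/api/atlas/v2/types/typedefs"
```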
++ ### Create business metadata with attributes You can send a `POST` request to the following endpoint:
remote-rendering Late Stage Reprojection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/late-stage-reprojection.md
Static models are expected to visually maintain their position when you move around them. If they appear to be unstable, this behavior may hint at LSR issues. Keep in mind that extra dynamic transformations, like animations or explosion views, might mask this behavior.
-You may choose between two different LSR modes, namely **Planar LSR** or **Depth LSR**. Both LSR modes improve hologram stability, although they have their distinct limitations. Start by trying Depth LSR, as it is arguably giving better results in most cases.
+You may choose between two different LSR modes, namely **Planar LSR** or **Depth LSR**. Both LSR modes improve hologram stability, although they have their distinct limitations. Start by trying Depth LSR, as it's arguably giving better results in most cases.
## How to set the LSR mode
To mitigate reprojection instability for transparent objects, you can force dept
## Planar LSR
-Planar LSR does not have per-pixel depth information, as Depth LSR does. Instead it reprojects all content based on a plane that you must provide each frame.
+Planar LSR doesn't have per-pixel depth information, as Depth LSR does. Instead it reprojects all content based on a plane that you must provide each frame.
Planar LSR best reprojects objects that lie close to the supplied plane. The further away an object is, the more unstable it will look. While Depth LSR is better at reprojecting objects at different depths, Planar LSR may work better for content aligning well with a plane.
The general problem scope with hybrid rendering can be stated like this: Remote
![Diagram that illustrates remote and local pose in relation to target viewport.](./media/reprojection-remote-local.png)
-ARR provides two reprojection modes that work orthogonally to the LSR mode discussed above. These modes are referred to as **:::no-loc text="Remote pose mode":::** and **:::no-loc text="Local pose mode":::**. Unlike the LSR mode, the pose modes define how remote and local content is combined. The choice of the mode trades visual quality of local content for runtime performance, so applications should carefully consider which option is appropriate. See considerations below.
+Depending on the `GraphicsBinding` used, ARR provides up to three reprojection modes that work orthogonally to the LSR mode discussed above. These modes are referred to as **:::no-loc text="Remote pose mode":::**, **:::no-loc text="Local pose mode":::**, and **:::no-loc text="Passthrough pose mode":::**. Unlike the LSR mode, the pose modes define how remote and local content is combined. The choice of the mode trades visual quality of local content for runtime performance, so applications should carefully consider which option is appropriate. See considerations below.
### :::no-loc text="Remote pose mode":::
Accordingly, the illustration looks like this:
![Reprojection steps in local pose mode.](./media/reprojection-pose-mode-local.png)
+### :::no-loc text="Passthrough pose mode":::
+
+This pose mode behaves essentially the same as **:::no-loc text="Remote pose mode":::**, meaning the local and remote content are combined in remote space. However, the content won't be reprojected after combination but will remain in remote pose space. The main advantage of this mode is that the resulting image won't be affected by reprojection artifacts.
+
+Conceptually, this mode can be compared to conventional cloud-streaming applications. Due to the high latency it incurs, it isn't suitable for head-mounted scenarios, but is a viable alternative for Desktop and other flat-screen applications where higher image quality is desired. It's therefore only available on `GraphicsBindingSimD3D11` for the time being.
+ ### Performance and quality considerations
-The choice of the pose mode has visual quality and performance implications. The additional runtime cost on the client side for doing the extra reprojection in :::no-loc text="Local pose mode"::: on a HoloLens 2 device amounts to about 1 millisecond per frame of GPU time. This extra cost needs to be put into consideration if the client application is already close to the frame budget of 16 milliseconds. On the other hand, there are types of applications with either no local content or local content that is not prone to distortion artifacts. In those cases :::no-loc text="Local pose mode"::: does not gain any visual benefit because the quality of the remote content reprojection is unaffected.
+The choice of the pose mode has visual quality and performance implications. The additional runtime cost on the client side for doing the extra reprojection in :::no-loc text="Local pose mode"::: on a HoloLens 2 device amounts to about 1 millisecond per frame of GPU time. This extra cost needs to be put into consideration if the client application is already close to the frame budget of 16 milliseconds. On the other hand, there are types of applications with either no local content or local content that is not prone to distortion artifacts. In those cases :::no-loc text="Local pose mode"::: doesn't gain any visual benefit because the quality of the remote content reprojection is unaffected.
-The general advice would thus be to test the modes on a per use case basis and see whether the gain in visual quality justifies the extra performance overhead. It is also possible to toggle the mode dynamically, for instance enable local mode only when important UIs are shown.
+The general advice would thus be to test the modes on a per use case basis and see whether the gain in visual quality justifies the extra performance overhead. It's also possible to toggle the mode dynamically, for instance enable local mode only when important UIs are shown.
### How to change the :::no-loc text="Pose mode"::: at runtime
ApiHandle<RenderingSession> session = ...;
session->GetGraphicsBinding()->SetPoseMode(PoseMode::Local); // set local pose mode ```
-In general, the mode can be changed anytime the graphics binding object is available. There is an important distinction for `GraphicsBindingSimD3D11`: the pose mode can only be changed to `PoseMode.Remote`, if it has been initialized with proxy textures. If this isn't the case, `PoseMode.Local` is forced until the graphics binding is reinitialized. See the two overloads of `GraphicsBindingSimD3d11.InitSimulation`, which take either native pointers to [ID3D11Texture2D](/windows/win32/api/d3d11/nn-d3d11-id3d11texture2d) objects (proxy path) or the `width` and `height` of the desired user viewport (non-proxy path).
+In general, the mode can be changed anytime the graphics binding object is available. There's an important distinction for `GraphicsBindingSimD3D11`: the pose mode can only be changed to `PoseMode.Remote`, if it has been initialized with proxy textures. If this isn't the case, the pose mode can only be toggled between `PoseMode.Local` and `PoseMode.Passthrough` until the graphics binding is reinitialized. See the two overloads of `GraphicsBindingSimD3d11.InitSimulation`, which take either native pointers to [ID3D11Texture2D](/windows/win32/api/d3d11/nn-d3d11-id3d11texture2d) objects (proxy path) or the `width` and `height` of the desired user viewport (non-proxy path).
### Desktop Unity runtime considerations
public static void InitRemoteManager(Camera camera)
} ```
-If `PoseMode.Remote` is specified, the graphics binding will be initialized with offscreen proxy textures and all rendering will be redirected from the Unity scene's main camera to a proxy camera. This code path is only recommended for usage if runtime pose mode changes are required.
+If `PoseMode.Remote` is specified, the graphics binding will be initialized with offscreen proxy textures and all rendering will be redirected from the Unity scene's main camera to a proxy camera. This code path is only recommended for usage if runtime pose mode changes to `PoseMode.Remote` are required. If no pose mode is specified, the ARR Unity runtime will select an appropriate default depending on the current platform.
> [!WARNING] > The proxy camera redirection might be incompatible with other Unity extensions, which expect scene rendering to take place with the main camera. The proxy camera can be retrieved via the `RemoteManagerUnity.ProxyCamera` property if it needs to be queried or registered elsewhere.
-If `PoseMode.Local` is used instead, the graphics binding will not be initialized with offscreen proxy textures and a fast path using the Unity scene's main camera to render will be used. This means that if the respective use case requires pose mode changes at runtime, `PoseMode.Remote` should be specified on `RemoteManagerUnity` initialization. It is strongly recommended to only use local pose mode and thus the non-proxy rendering path.
+If `PoseMode.Local` or `PoseMode.Passthrough` is used instead, the graphics binding won't be initialized with offscreen proxy textures and a fast path using the Unity scene's main camera to render will be used. If the respective use case requires remote pose mode at runtime, `PoseMode.Remote` should be specified on `RemoteManagerUnity` initialization. Directly rendering with Unity's main camera is more efficient and can prevent issues with other Unity extensions. Therefore, it's recommended to use the non-proxy rendering path.
## Next steps
role-based-access-control Custom Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles.md
Previously updated : 07/28/2022 Last updated : 08/19/2022
If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group (in preview only), subscription, and resource group scopes.
-Custom roles can be shared between subscriptions that trust the same Azure AD directory. There is a limit of **5,000** custom roles per directory. (For Azure Germany and Azure China 21Vianet, the limit is 2,000 custom roles.) Custom roles can be created using the Azure portal, Azure PowerShell, Azure CLI, or the REST API.
+Custom roles can be shared between subscriptions that trust the same Azure AD tenant. There is a limit of **5,000** custom roles per tenant. (For Azure Germany and Azure China 21Vianet, the limit is 2,000 custom roles.) Custom roles can be created using the Azure portal, Azure PowerShell, Azure CLI, or the REST API.
## Steps to create a custom role
The following table describes what the custom role properties mean.
| Property | Required | Type | Description | | | | | |
-| `Name`</br>`roleName` | Yes | String | The display name of the custom role. While a role definition is a management group or subscription-level resource, a role definition can be used in multiple subscriptions that share the same Azure AD directory. This display name must be unique at the scope of the Azure AD directory. Can include letters, numbers, spaces, and special characters. Maximum number of characters is 512. |
+| `Name`</br>`roleName` | Yes | String | The display name of the custom role. While a role definition is a management group or subscription-level resource, a role definition can be used in multiple subscriptions that share the same Azure AD tenant. This display name must be unique at the scope of the Azure AD tenant. Can include letters, numbers, spaces, and special characters. Maximum number of characters is 512. |
| `Id`</br>`name` | Yes | String | The unique ID of the custom role. For Azure PowerShell and Azure CLI, this ID is automatically generated when you create a new role. | | `IsCustom`</br>`roleType` | Yes | String | Indicates whether this is a custom role. Set to `true` or `CustomRole` for custom roles. Set to `false` or `BuiltInRole` for built-in roles. | | `Description`</br>`description` | Yes | String | The description of the custom role. Can include letters, numbers, spaces, and special characters. Maximum number of characters is 2048. |
The following table describes what the custom role properties mean.
| `NotActions`</br>`notActions` | No | String[] | An array of strings that specifies the control plane actions that are excluded from the allowed `Actions`. For more information, see [NotActions](role-definitions.md#notactions). | | `DataActions`</br>`dataActions` | No | String[] | An array of strings that specifies the data plane actions that the role allows to be performed to your data within that object. If you create a custom role with `DataActions`, that role cannot be assigned at the management group scope. For more information, see [DataActions](role-definitions.md#dataactions). | | `NotDataActions`</br>`notDataActions` | No | String[] | An array of strings that specifies the data plane actions that are excluded from the allowed `DataActions`. For more information, see [NotDataActions](role-definitions.md#notdataactions). |
-| `AssignableScopes`</br>`assignableScopes` | Yes | String[] | An array of strings that specifies the scopes that the custom role is available for assignment. Maximum number of `AssignableScopes` is 2,000. You can define only one management group in `AssignableScopes` of a custom role. Adding a management group to `AssignableScopes` is currently in preview. For more information, see [AssignableScopes](role-definitions.md#assignablescopes). |
+| `AssignableScopes`</br>`assignableScopes` | Yes | String[] | An array of strings that specifies the scopes that the custom role is available for assignment. Maximum number of `AssignableScopes` is 2,000. For more information, see [AssignableScopes](role-definitions.md#assignablescopes). |
Permission strings are case-insensitive. When you create your custom roles, the convention is to match the case that you see for permissions in [Azure resource provider operations](resource-provider-operations.md).
Before you can delete a custom role, you must remove any role assignments that u
Here are steps to help find the role assignments before deleting a custom role: - List the [custom role definition](role-definitions-list.md).-- In the [assignable scopes](role-definitions.md#assignablescopes) section, get the management groups, subscriptions, and resource groups.-- Iterate over the assignable scopes and [list the role assignments](role-assignments-list-portal.md).
+- In the [AssignableScopes](role-definitions.md#assignablescopes) section, get the management groups, subscriptions, and resource groups.
+- Iterate over the `AssignableScopes` and [list the role assignments](role-assignments-list-portal.md).
- [Remove the role assignments](role-assignments-remove.md) that use the custom role. - [Delete the custom role](custom-roles-portal.md#delete-a-custom-role).
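A rough Azure CLI sketch of these steps might look like the following (the role name and scope are placeholders; adjust them to your environment):

```azurecli
# Inspect the custom role definition, including its assignable scopes (placeholder role name).
az role definition list --name "My Custom Role"

# Find the role assignments that use the custom role.
az role assignment list --role "My Custom Role" --all

# Remove the assignments at each scope, then delete the custom role itself.
az role assignment delete --role "My Custom Role" --scope "/subscriptions/<subscription-id>"
az role definition delete --name "My Custom Role"
```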
Here are steps to help find the role assignments before deleting a custom role:
The following list describes the limits for custom roles. -- Each directory can have up to **5000** custom roles.-- Azure Germany and Azure China 21Vianet can have up to 2000 custom roles for each directory.
+- Each tenant can have up to **5000** custom roles.
+- Azure Germany and Azure China 21Vianet can have up to 2000 custom roles for each tenant.
- You cannot set `AssignableScopes` to the root scope (`"/"`). - You cannot use wildcards (`*`) in `AssignableScopes`. This wildcard restriction helps ensure a user can't potentially obtain access to a scope by updating the role definition. - You can only define one management group in `AssignableScopes` of a custom role. Adding a management group to `AssignableScopes` is currently in preview. - You can have only one wildcard in an action string. - Custom roles with `DataActions` cannot be assigned at the management group scope.-- Azure Resource Manager doesn't validate the management group's existence in the role definition's assignable scope.
+- Azure Resource Manager doesn't validate the management group's existence in the role definition's `AssignableScopes`.
For more information about custom roles and management groups, see [What are Azure management groups?](../governance/management-groups/overview.md#azure-custom-role-definition-and-assignment).
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/write | Create or update an voice endpoint. | > | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/delete | Delete the specified voice endpoint. | > | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/read | Get one or more voice endpoints |
-> | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/manifest/read | Returns an endpoint manifest which can be used in an on-premise container. |
+> | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/manifest/read | Returns an endpoint manifest which can be used in an on-premises container. |
> | Microsoft.CognitiveServices/accounts/CustomVoice/evaluations/delete | Deletes the specified evaluation. | > | Microsoft.CognitiveServices/accounts/CustomVoice/evaluations/read | Gets details of one or more evaluations | > | Microsoft.CognitiveServices/accounts/CustomVoice/features/read | Gets a list of allowed features. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/write | Create or update a model. | > | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/delete | Delete a model | > | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/read | Get one or more models |
-> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/base/manifest/read | Returns an manifest for this base model which can be used in an on-premise container. |
-> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/manifest/read | Returns an manifest for this model which can be used in an on-premise container. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/base/manifest/read | Returns an manifest for this base model which can be used in an on-premises container. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/base/manifest/read | Returns a manifest for this base model which can be used in an on-premises container. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/manifest/read | Returns a manifest for this model which can be used in an on-premises container. |
Azure service: [IoT security](../iot-fundamentals/iot-security-architecture.md)
> | Microsoft.IoTSecurity/locations/sites/sensors/triggerTiPackageUpdate/action | Triggers threat intelligence package update | > | Microsoft.IoTSecurity/locations/sites/sensors/downloadResetPassword/action | Downloads reset password file for IoT Sensors | > | Microsoft.IoTSecurity/locations/sites/sensors/updateSoftwareVersion/action | Trigger sensor update |
-> | Microsoft.IoTSecurity/onPremiseSensors/read | Gets on-premise IoT Sensors |
-> | Microsoft.IoTSecurity/onPremiseSensors/write | Creates or updates on-premise IoT Sensors |
-> | Microsoft.IoTSecurity/onPremiseSensors/delete | Deletes on-premise IoT Sensors |
-> | Microsoft.IoTSecurity/onPremiseSensors/downloadActivation/action | Gets on-premise IoT Sensor Activation File |
-> | Microsoft.IoTSecurity/onPremiseSensors/downloadResetPassword/action | Downloads file for reset password of the on-premise IoT Sensor |
+> | Microsoft.IoTSecurity/onPremiseSensors/read | Gets on-premises IoT Sensors |
+> | Microsoft.IoTSecurity/onPremiseSensors/write | Creates or updates on-premises IoT Sensors |
+> | Microsoft.IoTSecurity/onPremiseSensors/delete | Deletes on-premises IoT Sensors |
+> | Microsoft.IoTSecurity/onPremiseSensors/downloadActivation/action | Gets on-premises IoT Sensor Activation File |
+> | Microsoft.IoTSecurity/onPremiseSensors/downloadResetPassword/action | Downloads file for reset password of the on-premises IoT Sensor |
> | Microsoft.IoTSecurity/onPremiseSensors/listDiagnosticsUploadDetails/action | Get details required to upload sensor diagnostics data | > | Microsoft.IoTSecurity/sensors/read | Gets IoT Sensors | > | Microsoft.IoTSecurity/sensors/write | Creates or updates IoT Sensors |
role-based-access-control Role Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-definitions.md
Previously updated : 01/06/2022 Last updated : 08/19/2022
The following table shows two examples of the effective data plane permissions f
## AssignableScopes
-The `AssignableScopes` property specifies the scopes (management groups, subscriptions, or resource groups) where this role definition can be assigned. You can make the role available for assignment in only the management groups, subscriptions, or resource groups that require it. You must use at least one management group, subscription, or resource group.
+The `AssignableScopes` property specifies the scopes (root, management group, subscriptions, or resource groups) where a role definition can be assigned. You can make a custom role available for assignment in only the management group, subscriptions, or resource groups that require it. You must use at least one management group, subscription, or resource group.
-Built-in roles have `AssignableScopes` set to the root scope (`"/"`). The root scope indicates that the role is available for assignment in all scopes. Examples of valid assignable scopes include:
+For example, if `AssignableScopes` is set to a subscription, that means that the custom role is available for assignment at subscription scope for the specified subscription, resource group scope for any resource group in the subscription, or resource scope for any resource in the subscription.
+
+Built-in roles have `AssignableScopes` set to the root scope (`"/"`). The root scope indicates that the role is available for assignment in all scopes.
+
+Examples of valid assignable scopes include:
> [!div class="mx-tableFixed"] > | Role is available for assignment | Example |
Built-in roles have `AssignableScopes` set to the root scope (`"/"`). The root s
> | Management group and a subscription | `"/providers/Microsoft.Management/managementGroups/{groupId1}", "/subscriptions/{subscriptionId1}",` | > | All scopes (applies only to built-in roles) | `"/"` |
-For information about `AssignableScopes` for custom roles, see [Azure custom roles](custom-roles.md).
+You can define only one management group in `AssignableScopes` of a custom role. Adding a management group to `AssignableScopes` is currently in preview.
+
+Although it's possible to create a custom role with a resource instance in `AssignableScopes` using the command line, it's not recommended. Each tenant supports a maximum of 5000 custom roles. Using this strategy could potentially exhaust your available custom roles. Ultimately, the level of access is determined by the custom role assignment (scope + role permissions + security principal) and not the `AssignableScopes` listed in the custom role. So, create your custom roles with `AssignableScopes` of management group, subscription, or resource group, but assign the custom roles with narrow scope, such as resource or resource group.
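For example, a minimal sketch of that pattern with the Azure CLI (placeholder IDs, names, and permissions; the JSON uses the custom role format described in [Azure custom roles](custom-roles.md)):

```azurecli
# Create a custom role whose AssignableScopes is a subscription (placeholder values throughout).
az role definition create --role-definition '{
  "Name": "Example Support Operator",
  "IsCustom": true,
  "Description": "Example custom role used to illustrate AssignableScopes.",
  "Actions": [ "Microsoft.Support/*" ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}'

# Assign the custom role at a narrower scope, such as a single resource group.
az role assignment create --assignee "user@example.com" --role "Example Support Operator" --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```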
+
+For more information about `AssignableScopes` for custom roles, see [Azure custom roles](custom-roles.md).
## Next steps
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-gallery.md
The Confluence Connector is an enterprise grade indexing connector that enables
by [Raytion](https://www.raytion.com/contact)
-Secure enterprise search connector for reliably indexing content from Atlassian Confluence and intelligently searching it with Azure Cognitive Search. It robustly indexes pages, blog posts, attachments, comments, spaces, profiles, and hub sites for tags from on-premise Confluence instances in near real time. The connector fully supports Atlassian Confluence’s built-in user and group management, as well as Confluence installations based on Active Directory and other directory services.
+Secure enterprise search connector for reliably indexing content from Atlassian Confluence and intelligently searching it with Azure Cognitive Search. It robustly indexes pages, blog posts, attachments, comments, spaces, profiles, and hub sites for tags from on-premises Confluence instances in near real time. The connector fully supports Atlassian Confluence’s built-in user and group management, as well as Confluence installations based on Active Directory and other directory services.
[More details](https://www.raytion.com/connectors/raytion-confluence-connector)
The Jira Connector enables users to perform searches against all Jira objects, e
by [Raytion](https://www.raytion.com/contact)
-Secure enterprise search connector for reliably indexing content from Atlassian Jira and intelligently searching it with Azure Cognitive Search. It robustly indexes projects, issues, attachments, comments, work logs, issue histories, links, and profiles from on-premise Jira instances in near real time. The connector fully supports Atlassian Jira’s built-in user and group management, as well as Jira installations based on Active Directory and other directory services.
+Secure enterprise search connector for reliably indexing content from Atlassian Jira and intelligently searching it with Azure Cognitive Search. It robustly indexes projects, issues, attachments, comments, work logs, issue histories, links, and profiles from on-premises Jira instances in near real time. The connector fully supports Atlassian Jira’s built-in user and group management, as well as Jira installations based on Active Directory and other directory services.
[More details](https://www.raytion.com/connectors/raytion-jira-connector)
The Jive Connector was developed for Jive, establishing a secure connection to t
by [Raytion](https://www.raytion.com/contact)
-Secure enterprise search connector for reliably indexing content from Jive and intelligently searching it with Azure Cognitive Search. It robustly indexes discussions, polls, files, blogs, spaces, groups, projects, tasks, videos, messages, ideas, profiles, and status updates from on-premise and cloud-hosted Jive instances in near real time. The connector fully supports Jive’s built-in user and group management and supports Jive’s native authentication models, OAuth and Basic authentication.
+Secure enterprise search connector for reliably indexing content from Jive and intelligently searching it with Azure Cognitive Search. It robustly indexes discussions, polls, files, blogs, spaces, groups, projects, tasks, videos, messages, ideas, profiles, and status updates from on-premises and cloud-hosted Jive instances in near real time. The connector fully supports Jive’s built-in user and group management and supports Jive’s native authentication models, OAuth and Basic authentication.
[More details](https://www.raytion.com/connectors/raytion-jive-connector)
The HP TRIM Connector was developed for HP Records Manager, establishing a secur
by [BA Insight](https://www.bainsight.com/)
-Our Microsoft Dynamics 365 CRM connector supports both on-premise CRM installations and Dynamics CRM Online.
+Our Microsoft Dynamics 365 CRM connector supports both on-premises CRM installations and Dynamics CRM Online.
[More details](https://www.bainsight.com/connectors/microsoft-dynamics-crm-connector-sharepoint-azure-elasticsearch/)
security Key Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/key-management.md
Customer-managed keys (CMK), on the other hand, are those that can be read, crea
A specific kind of customer-managed key is the "key encryption key" (KEK). A KEK is a master key that controls access to one or more encryption keys that are themselves encrypted.
-Customer-managed keys can be stored on-premise or, more commonly, in a cloud key management service.
+Customer-managed keys can be stored on-premises or, more commonly, in a cloud key management service.
## Azure key management services
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
Review administrative rights in both your cloud and on-premises environments. Fo
|Environment |Description | ||| |**All cloud environments** | - Review any privileged access rights in the cloud and remove any unnecessary permissions<br> - Implement Privileged Identity Management (PIM)<br> - Set up Conditional Access policies to limit administrative access during hardening |
-|**All on-premises environments** | - Review privileged access on-premise and remove unnecessary permissions<br> - Reduce membership of built-in groups<br> - Verify Active Directory delegations<br> - Harden your Tier 0 environment, and limit who has access to Tier 0 assets |
+|**All on-premises environments** | - Review privileged access on-premises and remove unnecessary permissions<br> - Reduce membership of built-in groups<br> - Verify Active Directory delegations<br> - Harden your Tier 0 environment, and limit who has access to Tier 0 assets |
|**All Enterprise applications** | Review for delegated permissions and consent grants that allow any of the following actions: <br><br> - Modifying privileged users and roles <br>- Reading or accessing all mailboxes <br>- Sending or forwarding email on behalf of other users <br>- Accessing all OneDrive or SharePoint site content <br>- Adding service principals that can read/write to the directory | |**Microsoft 365 environments** |Review access and configuration settings for your Microsoft 365 environment, including: <br>- SharePoint Online Sharing <br>- Microsoft Teams <br>- Power Apps <br>- Microsoft OneDrive for Business | | **Review user accounts in your environments** |- Review and remove guest user accounts that are no longer needed. <br>- Review email configurations for delegates, mailbox folder permissions, ActiveSync mobile device registrations, Inbox rules, and Outlook on the Web options. <br>- Review ApplicationImpersonation rights and reduce any use of legacy authentication as much as possible. <br>- Validate that MFA is enforced and that both MFA and self-service password reset (SSPR) contact information for all users is correct. |
In addition to the recommendations listed earlier in this article, we also recom
|Activity |Description | ||| |**Reset passwords** | Reset passwords on any [break-glass accounts](../../active-directory/roles/security-emergency-access.md) and reduce the number of break-glass accounts to the absolute minimum required. |
-|**Restrict privileged access accounts** | Ensure that service and user accounts with privileged access are cloud-only accounts, and do not use on-premise accounts that are synced or federated to Azure Active Directory. |
+|**Restrict privileged access accounts** | Ensure that service and user accounts with privileged access are cloud-only accounts, and do not use on-premises accounts that are synced or federated to Azure Active Directory. |
|**Enforce MFA** | Enforce Multi-Factor Authentication (MFA) across all elevated users in the tenant. We recommend enforcing MFA across all users in the tenant. | |**Limit administrative access** | Implement [Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md) (PIM) and conditional access to limit administrative access. <br><br>For Microsoft 365 users, implement [Privileged Access Management](https://techcommunity.microsoft.com/t5/microsoft-security-and/privileged-access-management-in-office-365-is-now-generally/ba-p/261751) (PAM) to limit access to sensitive abilities, such as eDiscovery, Global Admin, Account Administration, and more. | |**Review / reduce delegated permissions and consent grants** | Review and reduce all Enterprise Applications delegated permissions or [consent grants](/graph/auth-limit-mailbox-access) that allow any of the following functionalities: <br><br>- Modification of privileged users and roles <br>- Reading, sending email, or accessing all mailboxes <br>- Accessing OneDrive, Teams, or SharePoint content <br>- Adding Service Principals that can read/write to the directory <br>- Application Permissions versus Delegated Access |
service-fabric Service Fabric Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-linux.md
sudo yum install servicefabricsdkcommon
-## Included packages
-The Service Fabric runtime that comes with the SDK installation includes the packages in the following table.
-
- | | DotNetCore | Java | Python | NodeJS |
- | | | |
-**Ubuntu** | 2.0.7 | AzulJDK 1.8 | Implicit from npm | latest |
-**RHEL** | - | OpenJDK 1.8 | Implicit from npm | latest |
- ## Set up a local cluster 1. Start a local Service Fabric cluster for development.
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
Support for Service Fabric on a specific OS ends when support for the OS version
| Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support | | | | | | | | |
-| 9.0 CU2<br>9.0.1056.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
-| 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
-| 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
-| 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
-| 8.2 CU3<br>8.2.1434.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
-| 8.2 CU2.1<br>8.2.1397.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
-| 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
-| 8.2 CU1<br>8.2.1204.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
-| 8.2 RTO<br>8.2.1124.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
+| 9.0 CU2.1<br>9.0.1086.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
+| 9.0 CU2<br>9.0.1056.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | December 1, 2022 |
+| 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 8.2 CU3<br>8.2.1434.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 8.2 CU2.1<br>8.2.1397.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 8.2 CU1<br>8.2.1204.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 8.2 RTO<br>8.2.1124.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
| 8.1 CU4<br>8.1.360.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3.1<br>8.1.340.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3<br>8.1.334.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 |
The following table lists the version names of Service Fabric and their correspo
| Version name | Windows version number | Linux version number | | | | |
+| 9.0 CU2.1 | Not applicable | 9.0.1086.1 |
| 9.0 CU2 | 9.0.1048.9590 | 9.0.1056.1 | | 9.0 CU1 | 9.0.1028.9590 | 9.0.1035.1 | | 9.0 RTO | 9.0.1017.9590 | 9.0.1018.1 |
static-web-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/overview.md
Azure Static Web Apps is a service that automatically builds and deploys full st
The workflow of Azure Static Web Apps is tailored to a developer's daily workflow. Apps are built and deployed based on code changes.
-When you create an Azure Static Web Apps resource, Azure interacts directly with GitHub or Azure DevOps to monitor a branch of your choice. Every time you push commits or accept pull requests into the watched branch, a build is automatically run and your app and API is deployed to Azure.
+When you create an Azure Static Web Apps resource, Azure interacts directly with GitHub or Azure DevOps to monitor a branch of your choice. Every time you push commits or accept pull requests into the watched branch, a build automatically runs and your app and API is deployed to Azure.
-Static web apps are commonly built using libraries and frameworks like Angular, React, Svelte, Vue, or Blazor where server side rendering is not required. These apps include HTML, CSS, JavaScript, and image assets that make up the application. With a traditional web server, these assets are served from a single server alongside any required API endpoints.
+Static web apps are commonly built using libraries and frameworks like Angular, React, Svelte, Vue, or Blazor where server side rendering isn't required. These apps include HTML, CSS, JavaScript, and image assets that make up the application. With a traditional web server, these assets are served from a single server alongside any required API endpoints.
With Static Web Apps, static assets are separated from a traditional web server and are instead served from points geographically distributed around the world. This distribution makes serving files much faster as files are physically closer to end users. In addition, API endpoints are hosted using a [serverless architecture](../azure-functions/functions-overview.md), which avoids the need for a full back-end server altogether.
With Static Web Apps, static assets are separated from a traditional web server
- **Customizable authorization role definition** and assignments. - **Back-end routing rules** enabling full control over the content and routes you serve. - **Generated staging versions** powered by pull requests enabling preview versions of your site before publishing.
+- **CLI support** through the [Azure CLI](/cli/azure/staticwebapp) to create cloud resources, and via the [Azure Static Web Apps CLI](https://github.com/Azure/static-web-apps-cli#azure-static-web-apps-cli) for local development.
## What you can do with Static Web Apps
storage Access Tiers Online Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md
description: Learn how to specify a blob's access tier when you upload it, or ho
Previously updated : 07/21/2022 Last updated : 08/18/2022
The copy operation is synchronous so when the command returns, that indicates th
## Next steps -- [How to manage the default account access tier of an Azure Storage account](../common/manage-account-default-access-tier.md)
+- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)
+- [Archive a blob](archive-blob.md)
- [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md)
storage Data Lake Storage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-best-practices.md
Consider pre-planning the structure of your data. File format, file size, and di
### File formats
-Data can be ingested in various formats. Data can be appear in human readable formats such as JSON, CSV, or XML or as compressed binary formats such as `.tar.gz`. Data can come in various sizes as well. Data can be composed of large files (a few terabytes) such as data from an export of a SQL table from your on-premise systems. Data can also come in the form of a large number of tiny files (a few kilobytes) such as data from real-time events from an Internet of things (IoT) solution. You can optimize efficiency and costs by choosing an appropriate file format and file size.
+Data can be ingested in various formats. Data can appear in human-readable formats such as JSON, CSV, or XML or as compressed binary formats such as `.tar.gz`. Data can come in various sizes as well. Data can be composed of large files (a few terabytes) such as data from an export of a SQL table from your on-premises systems. Data can also come in the form of a large number of tiny files (a few kilobytes) such as data from real-time events from an Internet of Things (IoT) solution. You can optimize efficiency and costs by choosing an appropriate file format and file size.
Hadoop supports a set of file formats that are optimized for storing and processing structured data. Some common formats are Avro, Parquet, and Optimized Row Columnar (ORC) format. All of these formats are machine-readable binary file formats. They are compressed to help you manage file size. They have a schema embedded in each file, which makes them self-describing. The difference between these formats is in how data is stored. Avro stores data in a row-based format and the Parquet and ORC formats store data in a columnar format.
storage Storage Blobs Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-introduction.md
Previously updated : 03/15/2022 Last updated : 08/18/2022
To learn how to create a storage account, see [Create a storage account](../comm
A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs.
-> [!NOTE]
-> The container name must be lowercase. For more information about naming containers, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata).
+A container name must be a valid DNS name, as it forms part of the unique URI used to address the container or its blobs. Follow these rules when naming a container:
+
+- Container names can be between 3 and 63 characters long.
+- Container names must start with a letter or number, and can contain only lowercase letters, numbers, and the dash (-) character.
+- Two or more consecutive dash characters aren't permitted in container names.
+
+The URI for a container is similar to:
+
+`https://myaccount.blob.core.windows.net/mycontainer`
+
+For more information about naming containers, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata).
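
As an illustration of these rules, here's a minimal PowerShell sketch that creates a container with a valid name. The account and container names are placeholders, and it assumes the Az.Storage module and an authenticated Azure session (`Connect-AzAccount`).

```powershell
# "mystorageaccount" and "my-container-01" are example names; replace them with your own.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount

# Container names must be 3-63 characters: lowercase letters, numbers, and single dashes only.
New-AzStorageContainer -Name "my-container-01" -Context $ctx
```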
### Blobs
Azure Storage supports three types of blobs:
For more information about the different types of blobs, see [Understanding Block Blobs, Append Blobs, and Page Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs).
+The URI for a blob is similar to:
+
+`https://myaccount.blob.core.windows.net/mycontainer/myblob`
+
+or
+
+`https://myaccount.blob.core.windows.net/mycontainer/myvirtualdirectory/myblob`
+
+Follow these rules when naming a blob:
+
+- A blob name can contain any combination of characters.
+- For blobs in Azure Storage, a blob name must be at least one character long and cannot be more than 1,024 characters long.
+- Blob names are case-sensitive.
+- Reserved URL characters must be properly escaped.
+- The number of path segments comprising the blob name cannot exceed 254. A path segment is the string between consecutive delimiter characters (*e.g.*, the forward slash '/') that corresponds to the name of a virtual directory.
+
+> [!NOTE]
+> Avoid blob names that end with a dot (.), a forward slash (/), or a sequence or combination of the two. No path segments should end with a dot (.).
+
+For more information about naming blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata).
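
As an example of how a blob name maps to the URI format above, here's a hedged PowerShell sketch that uploads a local file as a block blob under a virtual directory. The account, container, and file paths are placeholders, and it assumes the Az.Storage module and an authenticated session.

```powershell
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount

# The forward slash in the blob name creates the virtual directory "myvirtualdirectory".
Set-AzStorageBlobContent -File "C:\data\report.pdf" `
    -Container "my-container-01" `
    -Blob "myvirtualdirectory/report.pdf" `
    -Context $ctx
```

The resulting blob is addressable at `https://mystorageaccount.blob.core.windows.net/my-container-01/myvirtualdirectory/report.pdf`.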
+ ## Move data to Blob storage A number of solutions exist for migrating existing data to Blob storage:
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Previously updated : 05/24/2022 Last updated : 08/18/2022
The following diagram shows how your data is replicated across availability zone
ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself may not protect your data against a regional disaster where multiple zones are permanently affected. For protection against regional disasters, Microsoft recommends using [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a secondary region.
-The Archive tier for Blob Storage isn't currently supported for ZRS accounts. Unmanaged disks don't support ZRS or GZRS.
+The Archive tier for Blob Storage isn't currently supported for ZRS, GZRS, or RA-GZRS accounts. Unmanaged disks don't support ZRS or GZRS.
For more information about which regions support ZRS, see [Azure regions with availability zones](../../availability-zones/az-overview.md#azure-regions-with-availability-zones).
For more information about which regions support ZRS, see [Azure regions with av
ZRS is supported for all Azure Storage services through standard general-purpose v2 storage accounts, including: -- Azure Blob storage (hot and cool block blobs, non-disk page blobs)
+- Azure Blob storage (hot and cool block blobs and append blobs, non-disk page blobs)
- Azure Files (all standard tiers: transaction optimized, hot, and cool) - Azure Table storage - Azure Queue storage
The following table shows which redundancy options are supported for each type o
<sup>1</sup> Accounts of this type with a hierarchical namespace enabled also support the specified redundancy option.
-All data for all storage accounts is copied according to the redundancy option for the storage account. Objects including block blobs, append blobs, page blobs, queues, tables, and files are copied.
+All data for all storage accounts is copied from the primary to the secondary according to the redundancy option for the storage account. Objects including block blobs, append blobs, page blobs, queues, tables, and files are copied.
+
+Data in all tiers, including the Archive tier, is always copied from the primary to the secondary during geo-replication. The Archive tier for Blob Storage is currently supported for LRS, GRS, and RA-GRS accounts, but not for ZRS, GZRS, or RA-GZRS accounts. For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md).
-Data in all tiers, including the Archive tier, is copied. For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md).
+Unmanaged disks don't support ZRS or GZRS.
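
To check which redundancy option a given account uses, a quick sketch with the Az PowerShell module (the resource group and account names are placeholders):

```powershell
# The SKU name encodes the redundancy option, for example Standard_LRS, Standard_ZRS, or Standard_GZRS.
$account = Get-AzStorageAccount -ResourceGroupName "my-resource-group" -Name "mystorageaccount"
$account.Sku.Name
```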
For pricing information for each redundancy option, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
storage File Sync Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-monitoring.md
To view the **registered server health** in the portal, navigate to the **Regist
![Screenshot of registered servers health](media/storage-sync-files-troubleshoot/file-sync-registered-servers.png) - If the **Registered server** state is **Online**, the server is successfully communicating with the service.-- If the **Registered server** state is **Appears Offline**, the Storage Sync Monitor process (AzureStorageSyncMonitor.exe) is not running or the server is unable to access the Azure File Sync service. See the [troubleshooting documentation](file-sync-troubleshoot.md?tabs=portal1%252cazure-portal#server-endpoint-noactivity) for guidance.
+- If the **Registered server** state is **Appears Offline**, the Storage Sync Monitor process (AzureStorageSyncMonitor.exe) is not running or the server is unable to access the Azure File Sync service. See the [troubleshooting documentation](file-sync-troubleshoot-sync-group-management.md?tabs=portal1%252cazure-portal#server-endpoint-noactivity) for guidance.
### Server endpoint health
To view the health of a **server endpoint** in the portal, navigate to the **Syn
![Screenshot of server endpoint health](media/storage-sync-files-troubleshoot/file-sync-server-endpoint-health.png) -- The **server endpoint health** and **sync activity** in the portal is based on the sync events that are logged in the Telemetry event log on the server (ID 9102 and 9302). If a sync session fails because of a transient error, such as error canceled, the server endpoint will still show as **healthy** in the portal as long as the current sync session is making progress (files are applied). Event ID 9302 is the sync progress event and Event ID 9102 is logged once a sync session completes. For more information, see [sync health](file-sync-troubleshoot.md?tabs=server%252cazure-portal#broken-sync) and [sync progress](file-sync-troubleshoot.md?tabs=server%252cazure-portal#how-do-i-monitor-the-progress-of-a-current-sync-session). If the server endpoint health shows an **Error** or **No Activity**, see the [troubleshooting documentation](file-sync-troubleshoot.md?tabs=portal1%252cazure-portal#common-sync-errors) for guidance.-- The **files not syncing** count in the portal is based on the Event ID 9121 that is logged in the Telemetry event log on the server. This event is logged for each per-item error once the sync session completes. To resolve per-item errors, see [How do I see if there are specific files or folders that are not syncing?](file-sync-troubleshoot.md?tabs=server%252cazure-portal#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing).
+- The **server endpoint health** and **sync activity** in the portal is based on the sync events that are logged in the Telemetry event log on the server (ID 9102 and 9302). If a sync session fails because of a transient error, such as error canceled, the server endpoint will still show as **healthy** in the portal as long as the current sync session is making progress (files are applied). Event ID 9302 is the sync progress event and Event ID 9102 is logged once a sync session completes. For more information, see [sync health](file-sync-troubleshoot-sync-errors.md?tabs=server%252cazure-portal#broken-sync) and [sync progress](file-sync-troubleshoot-sync-errors.md?tabs=server%252cazure-portal#how-do-i-monitor-the-progress-of-a-current-sync-session). If the server endpoint health shows an **Error** or **No Activity**, see the [troubleshooting documentation](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#common-sync-errors) for guidance.
+- The **files not syncing** count in the portal is based on the Event ID 9121 that is logged in the Telemetry event log on the server. This event is logged for each per-item error once the sync session completes. To resolve per-item errors, see [How do I see if there are specific files or folders that are not syncing?](file-sync-troubleshoot-sync-errors.md?tabs=server%252cazure-portal#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing).
- To view the **cloud tiering efficiency** in the portal, go to the **Server Endpoint Properties** and navigate to the **Cloud Tiering** section. The data provided for cloud tiering efficiency is based on Event ID 9071 that is logged in the Telemetry event log on the server. To learn more, see [Monitor cloud tiering](file-sync-monitor-cloud-tiering.md).-- To view **files not tiering** and **recall errors** in the portal, go to the **Server Endpoint Properties** and navigate to the **Cloud Tiering** section. **Files not tiering** is based on Event ID 9003 that is logged in the Telemetry event log on the server and **recall errors** is based on Event ID 9006. To investigate files that are failing to tier or recall, see [How to troubleshoot files that fail to tier](file-sync-troubleshoot.md?tabs=portal1%252cazure-portal#how-to-troubleshoot-files-that-fail-to-tier) and [How to troubleshoot files that fail to be recalled](file-sync-troubleshoot.md?tabs=portal1%252cazure-portal#how-to-troubleshoot-files-that-fail-to-be-recalled).
+- To view **files not tiering** and **recall errors** in the portal, go to the **Server Endpoint Properties** and navigate to the **Cloud Tiering** section. **Files not tiering** is based on Event ID 9003 that is logged in the Telemetry event log on the server and **recall errors** is based on Event ID 9006. To investigate files that are failing to tier or recall, see [How to troubleshoot files that fail to tier](file-sync-troubleshoot-cloud-tiering.md?tabs=portal1%252cazure-portal#how-to-troubleshoot-files-that-fail-to-tier) and [How to troubleshoot files that fail to be recalled](file-sync-troubleshoot-cloud-tiering.md?tabs=portal1%252cazure-portal#how-to-troubleshoot-files-that-fail-to-be-recalled).
### Metric charts
Use the Telemetry event log on the server to monitor registered server, sync, an
Sync health -- Event ID 9102 is logged once a sync session completes. Use this event to determine if sync sessions are successful (**HResult = 0**) and if there are per-item sync errors (**PerItemErrorCount**). For more information, see the [sync health](file-sync-troubleshoot.md?tabs=server%252cazure-portal#broken-sync) and [per-item errors](file-sync-troubleshoot.md?tabs=server%252cazure-portal#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing) documentation.
+- Event ID 9102 is logged once a sync session completes. Use this event to determine if sync sessions are successful (**HResult = 0**) and if there are per-item sync errors (**PerItemErrorCount**). For more information, see the [sync health](file-sync-troubleshoot-sync-errors.md?tabs=server%252cazure-portal#broken-sync) and [per-item errors](file-sync-troubleshoot-sync-errors.md?tabs=server%252cazure-portal#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing) documentation.
> [!Note] > Sometimes sync sessions fail overall or have a non-zero PerItemErrorCount. However, they still make forward progress, and some files sync successfully. You can see this in the Applied fields such as AppliedFileCount, AppliedDirCount, AppliedTombstoneCount, and AppliedSizeBytes. These fields tell you how much of the session succeeded. If you see multiple sync sessions fail in a row, and they have an increasing Applied count, give sync time to try again before you open a support ticket. -- Event ID 9121 is logged for each per-item error once the sync session completes. Use this event to determine the number of files that are failing to sync with this error (**PersistentCount** and **TransientCount**). Persistent per-item errors should be investigated, see [How do I see if there are specific files or folders that are not syncing?](file-sync-troubleshoot.md?tabs=server%252cazure-portal#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing).
+- Event ID 9121 is logged for each per-item error once the sync session completes. Use this event to determine the number of files that are failing to sync with this error (**PersistentCount** and **TransientCount**). Persistent per-item errors should be investigated, see [How do I see if there are specific files or folders that are not syncing?](file-sync-troubleshoot-sync-errors.md?tabs=server%252cazure-portal#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing).
-- Event ID 9302 is logged every 5 to 10 minutes if there's an active sync session. Use this event to determine how many items are to be synced (**TotalItemCount**), number of items that have synced so far (**AppliedItemCount**) and number of items that have failed to sync due to a per-item error (**PerItemErrorCount**). If sync is not making progress (**AppliedItemCount=0**), the sync session will eventually fail and an Event ID 9102 will be logged with the error. For more information, see the [sync progress documentation](file-sync-troubleshoot.md?tabs=server%252cazure-portal#how-do-i-monitor-the-progress-of-a-current-sync-session).
+- Event ID 9302 is logged every 5 to 10 minutes if there's an active sync session. Use this event to determine how many items are to be synced (**TotalItemCount**), number of items that have synced so far (**AppliedItemCount**) and number of items that have failed to sync due to a per-item error (**PerItemErrorCount**). If sync is not making progress (**AppliedItemCount=0**), the sync session will eventually fail and an Event ID 9102 will be logged with the error. For more information, see the [sync progress documentation](file-sync-troubleshoot-sync-errors.md?tabs=server%252cazure-portal#how-do-i-monitor-the-progress-of-a-current-sync-session).
Registered server health -- Event ID 9301 is logged every 30 seconds when a server queries the service for jobs. If GetNextJob finishes with **status = 0**, the server is able to communicate with the service. If GetNextJob finishes with an error, check the [troubleshooting documentation](file-sync-troubleshoot.md?tabs=portal1%252cazure-portal#server-endpoint-noactivity) for guidance.
+- Event ID 9301 is logged every 30 seconds when a server queries the service for jobs. If GetNextJob finishes with **status = 0**, the server is able to communicate with the service. If GetNextJob finishes with an error, check the [troubleshooting documentation](file-sync-troubleshoot-sync-group-management.md?tabs=portal1%252cazure-portal#server-endpoint-noactivity) for guidance.
Cloud tiering health
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
Azure File Sync does not interoperate with NTFS Encrypted File System (NTFS EFS)
### Encryption in transit > [!NOTE]
-> Azure File Sync service will remove support for TLS1.0 and 1.1 on August 1st, 2020. All supported Azure File Sync agent versions already use TLS1.2 by default. Using an earlier version of TLS could occur if TLS1.2 was disabled on your server or a proxy is used. If you are using a proxy, we recommend you check the proxy configuration. Azure File Sync service regions added after 5/1/2020 will only support TLS1.2 and support for TLS1.0 and 1.1 will be removed from existing regions on August 1st, 2020. For more information, see the [troubleshooting guide](file-sync-troubleshoot.md#tls-12-required-for-azure-file-sync).
+> Azure File Sync service will remove support for TLS1.0 and 1.1 on August 1st, 2020. All supported Azure File Sync agent versions already use TLS1.2 by default. Using an earlier version of TLS could occur if TLS1.2 was disabled on your server or a proxy is used. If you are using a proxy, we recommend you check the proxy configuration. Azure File Sync service regions added after 5/1/2020 will only support TLS1.2 and support for TLS1.0 and 1.1 will be removed from existing regions on August 1st, 2020. For more information, see the [troubleshooting guide](file-sync-troubleshoot-cloud-tiering.md#tls-12-required-for-azure-file-sync).
Azure File Sync agent communicates with your Storage Sync Service and Azure file share using the Azure File Sync REST protocol and the FileREST protocol, both of which always use HTTPS over port 443. Azure File Sync does not send unencrypted requests over HTTP.
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
For more information on how to install and configure the Azure File Sync agent w
### Sync limitations The following items don't sync, but the rest of the system continues to operate normally:-- Files with unsupported characters. See [Troubleshooting guide](file-sync-troubleshoot.md#handling-unsupported-characters) for list of unsupported characters.
+- Files with unsupported characters. See [Troubleshooting guide](file-sync-troubleshoot-sync-errors.md#handling-unsupported-characters) for a list of unsupported characters.
- Files or directories that end with a period. - Paths that are longer than 2,048 characters. - The system access control list (SACL) portion of a security descriptor that's used for auditing.
The following items don't sync, but the rest of the system continues to operate
### Cloud endpoint - Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share. In addition, changes made to an Azure file share over the REST protocol will not update the SMB last modified time and will not be seen as a change by sync.-- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](file-sync-troubleshoot.md?tabs=portal1%252cportal#troubleshoot-rbac)).
+- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cportal#troubleshoot-rbac)).
> [!Note] > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
For more information on how to install and configure the Azure File Sync agent w
### Sync limitations The following items don't sync, but the rest of the system continues to operate normally:-- Files with unsupported characters. See [Troubleshooting guide](file-sync-troubleshoot.md#handling-unsupported-characters) for list of unsupported characters.
+- Files with unsupported characters. See [Troubleshooting guide](file-sync-troubleshoot-sync-errors.md#handling-unsupported-characters) for a list of unsupported characters.
- Files or directories that end with a period. - Paths that are longer than 2,048 characters. - The system access control list (SACL) portion of a security descriptor that's used for auditing.
The following items don't sync, but the rest of the system continues to operate
### Cloud endpoint - Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share. In addition, changes made to an Azure file share over the REST protocol will not update the SMB last modified time and will not be seen as a change by sync.-- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](file-sync-troubleshoot.md?tabs=portal1%252cportal#troubleshoot-rbac)).
+- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cportal#troubleshoot-rbac)).
> [!Note] > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
storage File Sync Troubleshoot Cloud Tiering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-cloud-tiering.md
+
+ Title: Troubleshoot Azure File Sync cloud tiering | Microsoft Docs
+description: Troubleshoot common issues with cloud tiering in an Azure File Sync deployment.
+ Last updated : 7/28/2022
+# Troubleshoot Azure File Sync cloud tiering
+
+Cloud tiering, an optional feature of Azure File Sync, decreases the amount of local storage required while keeping the performance of an on-premises file server. When enabled, this feature stores only frequently accessed (hot) files on your local server. Infrequently accessed (cool) files are split into namespace (file and folder structure) and file content.
+
+There are two paths for failures in cloud tiering:
+
+- Files can fail to tier, which means that Azure File Sync unsuccessfully attempts to tier a file to Azure Files.
+- Files can fail to recall, which means that the Azure File Sync file system filter (StorageSync.sys) fails to download data when a user attempts to access a file that has been tiered.
+
+There are two main classes of failures that can happen via either failure path:
+
+- Cloud storage failures
+ - *Transient storage service availability issues*. For more information, see the [Service Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
+ - *Inaccessible Azure file share*. This failure typically happens when you delete the Azure file share while it is still a cloud endpoint in a sync group.
+ - *Inaccessible storage account*. This failure typically happens when you delete the storage account while it still has an Azure file share that is a cloud endpoint in a sync group.
+- Server failures
+ - *Azure File Sync file system filter (StorageSync.sys) is not loaded*. In order to respond to tiering/recall requests, the Azure File Sync file system filter must be loaded. The filter not being loaded can happen for several reasons, but the most common reason is that an administrator unloaded it manually. The Azure File Sync file system filter must be loaded at all times for Azure File Sync to properly function.
+ - *Missing, corrupt, or otherwise broken reparse point*. A reparse point is a special data structure on a file that consists of two parts:
+ 1. A reparse tag, which indicates to the operating system that the Azure File Sync file system filter (StorageSync.sys) may need to do some action on IO to the file.
+ 2. Reparse data, which indicates to the file system filter the URI of the file on the associated cloud endpoint (the Azure file share).
+
+ The most common way a reparse point could become corrupted is if an administrator attempts to modify either the tag or its data.
+ - *Network connectivity issues*. In order to tier or recall a file, the server must have internet connectivity.
+
+The following sections indicate how to troubleshoot cloud tiering issues and determine if an issue is a cloud storage issue or a server issue.
+
+## How to monitor tiering activity on a server
+To monitor tiering activity on a server, use Event ID 9003, 9016 and 9029 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer).
+
+- Event ID 9003 provides error distribution for a server endpoint. For example, Total Error Count, ErrorCode, etc. Note that one event is logged per error code.
+- Event ID 9016 provides ghosting results for a volume. For example, Free space percent, Number of files ghosted in session, Number of files failed to ghost, etc.
+- Event ID 9029 provides ghosting session information for a server endpoint. For example, Number of files attempted in the session, Number of files tiered in the session, Number of files already tiered, etc.
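
As a sketch, you can pull these events with `Get-WinEvent`. The channel name below is an assumption based on the Event Viewer path above; adjust it if the log name on your server differs.

```powershell
# Query the Azure File Sync Telemetry channel for tiering-related events.
# The log name is assumed from the path Applications and Services\Microsoft\FileSync\Agent\Telemetry.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-FileSync-Agent/Telemetry'
    Id      = 9003, 9016, 9029
} -MaxEvents 50 | Format-List TimeCreated, Id, Message
```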
+
+## How to monitor recall activity on a server
+To monitor recall activity on a server, use Event ID 9005, 9006, 9009 and 9059 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer).
+
+- Event ID 9005 provides recall reliability for a server endpoint. For example, Total unique files accessed, Total unique files with failed access, etc.
+- Event ID 9006 provides recall error distribution for a server endpoint. For example, Total Failed Requests, ErrorCode, etc. Note that one event is logged per error code.
+- Event ID 9009 provides recall session information for a server endpoint. For example, DurationSeconds, CountFilesRecallSucceeded, CountFilesRecallFailed, etc.
+- Event ID 9059 provides application recall distribution for a server endpoint. For example, ShareId, Application Name, and TotalEgressNetworkBytes.
+
+## How to troubleshoot files that fail to tier
+If files fail to tier to Azure Files:
+
+1. In Event Viewer, review the telemetry, operational and diagnostic event logs, located under Applications and Services\Microsoft\FileSync\Agent.
+ 1. Verify the files exist in the Azure file share.
+
+ > [!NOTE]
+ > A file must be synced to an Azure file share before it can be tiered.
+
+ 2. Verify the server has internet connectivity.
+ 3. Verify the Azure File Sync filter drivers (StorageSync.sys and StorageSyncGuard.sys) are running:
+ - At an elevated command prompt, run `fltmc`. Verify that the StorageSync.sys and StorageSyncGuard.sys file system filter drivers are listed.
+
+> [!NOTE]
+> An Event ID 9003 is logged once an hour in the Telemetry event log if a file fails to tier (one event is logged per error code). Check the [Tiering errors and remediation](#tiering-errors-and-remediation) section to see if remediation steps are listed for the error code.
+
+## Tiering errors and remediation
+
+| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation |
+||-|--|-|-|
+| 0x80c86045 | -2134351803 | ECS_E_INITIAL_UPLOAD_PENDING | The file failed to tier because the initial upload is in progress. | No action required. The file will be tiered once the initial upload completes. |
+| 0x80c86043 | -2134351805 | ECS_E_GHOSTING_FILE_IN_USE | The file failed to tier because it's in use. | No action required. The file will be tiered when it's no longer in use. |
+| 0x80c80241 | -2134375871 | ECS_E_GHOSTING_EXCLUDED_BY_SYNC | The file failed to tier because it's excluded by sync. | No action required. Files in the sync exclusion list cannot be tiered. |
+| 0x80c86042 | -2134351806 | ECS_E_GHOSTING_FILE_NOT_FOUND | The file failed to tier because it was not found on the server. | No action required. If the error persists, check if the file exists on the server. |
+| 0x80c83053 | -2134364077 | ECS_E_CREATE_SV_FILE_DELETED | The file failed to tier because it was deleted in the Azure file share. | No action required. The file should be deleted on the server when the next download sync session runs. |
+| 0x80c8600e | -2134351858 | ECS_E_AZURE_SERVER_BUSY | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
+| 0x80072ee7 | -2147012889 | WININET_E_NAME_NOT_RESOLVED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
+| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to tier due to an access denied error. This error can occur if the file is located on a DFS-R read-only replication folder. | Azure File Sync does not support server endpoints on DFS-R read-only replication folders. See [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
+| 0x80072efe | -2147012866 | WININET_E_CONNECTION_ABORTED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
+| 0x80c80261 | -2134375839 | ECS_E_GHOSTING_MIN_FILE_SIZE | The file failed to tier because the file size is less than the supported size. | The minimum supported file size is based on the file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. |
+| 0x80c83007 | -2134364153 | ECS_E_STORAGE_ERROR | The file failed to tier due to an Azure storage issue. | If the error persists, open a support request. |
+| 0x800703e3 | -2147023901 | ERROR_OPERATION_ABORTED | The file failed to tier because it was recalled at the same time. | No action required. The file will be tiered when the recall completes and the file is no longer in use. |
+| 0x80c80264 | -2134375836 | ECS_E_GHOSTING_FILE_NOT_SYNCED | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
+| 0x80070001 | -2147942401 | ERROR_INVALID_FUNCTION | The file failed to tier because the cloud tiering filter driver (storagesync.sys) is not running. | To resolve this issue, open an elevated command prompt and run the following command: `fltmc load storagesync`<br>If the Azure File Sync filter driver fails to load when running the fltmc command, uninstall the Azure File Sync agent, restart the server and reinstall the Azure File Sync agent. |
+| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to tier due to insufficient disk space on the volume where the server endpoint is located. | To resolve this issue, free at least 100 MiB of disk space on the volume where the server endpoint is located. |
+| 0x80070490 | -2147023728 | ERROR_NOT_FOUND | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
+| 0x80c80262 | -2134375838 | ECS_E_GHOSTING_UNSUPPORTED_RP | The file failed to tier because it's an unsupported reparse point. | If the file is a Data Deduplication reparse point, follow the steps in the [planning guide](file-sync-planning.md#data-deduplication) to enable Data Deduplication support. Files with reparse points other than Data Deduplication are not supported and will not be tiered. |
+| 0x80c83052 | -2134364078 | ECS_E_CREATE_SV_STREAM_ID_MISMATCH | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. |
+| 0x80c80269 | -2134375831 | ECS_E_GHOSTING_REPLICA_NOT_FOUND | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
+| 0x80072ee2 | -2147012894 | WININET_E_TIMEOUT | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
+| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. |
+| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to tier due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. |
+| 0x8e5e03fe | -1906441218 | JET_errDiskIO | The file failed to tier due to an I/O error when writing to the cloud tiering database. | If the error persists, run chkdsk on the volume and check the storage hardware. |
+| 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database is not running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. |
+| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST | The file cannot be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting, which is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync (see the registry sketch after this table). |
+| 0x80C86050 | -2134351792 | ECS_E_REPLICA_NOT_READY_FOR_TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. |
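
For the ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST error above, here's a hedged sketch that only reads the current exclusion list; the exact value format for modifying the list should follow Microsoft's guidance for your agent version.

```powershell
# Read the current cloud tiering exclusion list, if one is configured.
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Azure\StorageSync' `
    -Name 'GhostingExclusionList' -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty GhostingExclusionList
```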
+
+## How to troubleshoot files that fail to be recalled
+If files fail to be recalled:
+1. In Event Viewer, review the telemetry, operational and diagnostic event logs, located under Applications and Services\Microsoft\FileSync\Agent.
+ 1. Verify the files exist in the Azure file share.
+ 2. Verify the server has internet connectivity.
+ 3. Open the Services MMC snap-in and verify the Storage Sync Agent service (FileSyncSvc) is running.
+ 4. Verify the Azure File Sync filter drivers (StorageSync.sys and StorageSyncGuard.sys) are running:
+ - At an elevated command prompt, run `fltmc`. Verify that the StorageSync.sys and StorageSyncGuard.sys file system filter drivers are listed.
+
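
Steps 3 and 4 can also be checked from PowerShell; a minimal sketch, using the service and filter driver names shown above:

```powershell
# Check that the Storage Sync Agent service is running.
Get-Service -Name FileSyncSvc | Select-Object Name, Status

# List loaded file system filter drivers and look for the Azure File Sync filters.
fltmc filters | Select-String -Pattern 'StorageSync'
```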
+> [!NOTE]
+> An Event ID 9006 is logged once per hour in the Telemetry event log if a file fails to recall (one event is logged per error code). Check the [Recall errors and remediation](#recall-errors-and-remediation) section to see if remediation steps are listed for the error code.
+
+## Recall errors and remediation
+
+| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation |
+||-|--|-|-|
+| 0x80070079 | -2147942521 | ERROR_SEM_TIMEOUT | The file failed to recall due to an I/O timeout. This issue can occur for several reasons: server resource constraints, poor network connectivity or an Azure storage issue (for example, throttling). | No action required. If the error persists for several hours, please open a support case. |
+| 0x80070036 | -2147024842 | ERROR_NETWORK_BUSY | The file failed to recall due to a network issue. | If the error persists, check network connectivity to the Azure file share. |
+| 0x80c80037 | -2134376393 | ECS_E_SYNC_SHARE_NOT_FOUND | The file failed to recall because the server endpoint was deleted. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
+| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to recall due to an access denied error. This issue can occur if the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | To resolve this issue, add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
+| 0x80c86002 | -2134351870 | ECS_E_AZURE_RESOURCE_NOT_FOUND | The file failed to recall because it's not accessible in the Azure file share. | To resolve this issue, verify the file exists in the Azure file share. If the file exists in the Azure file share, upgrade to the latest Azure File Sync [agent version](file-sync-release-notes.md#supported-versions). |
+| 0x80c8305f | -2134364065 | ECS_E_EXTERNAL_STORAGE_ACCOUNT_AUTHORIZATION_FAILED | The file failed to recall due to authorization failure to the storage account. | To resolve this issue, verify [Azure File Sync has access to the storage account](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#troubleshoot-rbac). |
+| 0x80c86030 | -2134351824 | ECS_E_AZURE_FILE_SHARE_NOT_FOUND | The file failed to recall because the Azure file share is not accessible. | Verify the file share exists and is accessible. If the file share was deleted and recreated, perform the steps documented in the [Sync failed because the Azure file share was deleted and recreated](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#-2134375810) section to delete and recreate the sync group. |
+| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to recall due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. |
+| 0x8007000e | -2147024882 | ERROR_OUTOFMEMORY | The file failed to recall due to insufficient memory. | If the error persists, investigate which application or kernel-mode driver is causing the low memory condition. |
+| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to recall due to insufficient disk space. | To resolve this issue, free up space on the volume by moving files to a different volume, increase the size of the volume, or force files to tier by using the Invoke-StorageSyncCloudTiering cmdlet. |
+| 0x80072f8f | -2147012721 | WININET_E_DECODING_FAILED | The file failed to recall because the server was unable to decode the response from the Azure File Sync service. | This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration. |
+| 0x80090352 | -2146892974 | SEC_E_ISSUING_CA_UNTRUSTED | The file failed to recall because your organization is using a TLS terminating proxy or a malicious entity is intercepting the traffic between your server and the Azure File Sync service. | If you are certain this is expected (because your organization is using a TLS terminating proxy), follow the steps documented for error [CERT_E_UNTRUSTEDROOT](file-sync-troubleshoot-sync-errors.md#-2146762487) to resolve this issue. |
+| 0x80c86047 | -2134351801 | ECS_E_AZURE_SHARE_SNAPSHOT_NOT_FOUND | The file failed to recall because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
+
+## Tiered files are not accessible on the server after deleting a server endpoint
+Tiered files on a server will become inaccessible if the files are not recalled prior to deleting a server endpoint.
+
+Errors logged if tiered files are not accessible
+- When syncing a file, error code -2147942467 (0x80070043 - ERROR_BAD_NET_NAME) is logged in the ItemResults event log
+- When recalling a file, error code -2134376393 (0x80c80037 - ECS_E_SYNC_SHARE_NOT_FOUND) is logged in the RecallResults event log
+
+Restoring access to your tiered files is possible if the following conditions are met:
+- Server endpoint was deleted within past 30 days
+- Cloud endpoint was not deleted
+- File share was not deleted
+- Sync group was not deleted
+
+If the above conditions are met, you can restore access to the files on the server by recreating the server endpoint at the same path on the server within the same sync group within 30 days.
+
+If the above conditions are not met, restoring access is not possible as these tiered files on the server are now orphaned. Follow the instructions below to remove the orphaned tiered files.
+
+**Notes**
+- When tiered files are not accessible on the server, the full file should still be accessible if you access the Azure file share directly.
+- To prevent orphaned tiered files in the future, follow the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md) when deleting a server endpoint.
+
+<a id="get-orphaned"></a>**How to get the list of orphaned tiered files**
+
+1. Run the following PowerShell commands to list orphaned tiered files:
+```powershell
+Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
+$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
+$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
+```
+2. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
+
+<a id="remove-orphaned"></a>**How to remove orphaned tiered files**
+
+*Option 1: Delete the orphaned tiered files*
+
+This option deletes the orphaned tiered files on the Windows Server, but it requires removing the server endpoint if one still exists (for example, because it was recreated after 30 days or is connected to a different sync group). File conflicts will occur if files are updated on the Windows Server or Azure file share before the server endpoint is recreated.
+
+1. Back up the Azure file share and server endpoint location.
+2. Remove the server endpoint in the sync group (if exists) by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
+
+> [!Warning]
+> If the server endpoint is not removed prior to using the Remove-StorageSyncOrphanedTieredFiles cmdlet, deleting the orphaned tiered file on the server will delete the full file in the Azure file share.
+
+3. Run the following PowerShell commands to list orphaned tiered files:
+
+```powershell
+Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
+$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
+$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
+```
+4. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
+5. Run the following PowerShell commands to delete orphaned tiered files:
+
+```powershell
+Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
+$orphanFilesRemoved = Remove-StorageSyncOrphanedTieredFiles -Path <folder path containing orphaned tiered files> -Verbose
+$orphanFilesRemoved.OrphanedTieredFiles > DeletedOrphanFiles.txt
+```
+**Notes**
+- Tiered files modified on the server that are not synced to the Azure file share will be deleted.
+- Tiered files that are accessible (not orphan) will not be deleted.
+- Non-tiered files will remain on the server.
+
+6. Optional: Recreate the server endpoint if it was deleted in step 2.
+
+*Option 2: Mount the Azure file share and copy the files locally that are orphaned on the server*
+
+This option doesn't require removing the server endpoint but requires sufficient disk space to copy the full files locally.
+
+1. [Mount](../files/storage-how-to-use-files-windows.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) the Azure file share on the Windows Server that has orphaned tiered files.
+2. Run the following PowerShell commands to list orphaned tiered files:
+```powershell
+Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
+$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
+$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
+```
+3. Use the OrphanTieredFiles.txt output file to identify orphaned tiered files on the server.
+4. Overwrite the orphaned tiered files by copying the full file from the Azure file share to the Windows Server.
+
+## How to troubleshoot files unexpectedly recalled on a server
+Antivirus, backup, and other applications that read large numbers of files cause unintended recalls unless they respect the skip offline attribute and skip reading the content of those files. Skipping offline files for products that support this option helps avoid unintended recalls during operations like antivirus scans or backup jobs.
+
+Consult with your software vendor to learn how to configure their solution to skip reading offline files.
+
+Unintended recalls also might occur in other scenarios, like when you are browsing cloud-tiered files in File Explorer. This is likely to occur on Windows Server 2016 if the folder contains executable files. File Explorer was improved for Windows Server 2019 and later to better handle offline files.
+
+> [!NOTE]
+>Use Event ID 9059 in the Telemetry event log to determine which application(s) is causing recalls. This event provides application recall distribution for a server endpoint and is logged once an hour.
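
A sketch for pulling recent 9059 events, using the same assumed channel name as in the monitoring sections above:

```powershell
# List application recall distribution events from the last day (one event is logged per hour).
Get-WinEvent -FilterHashtable @{
    LogName   = 'Microsoft-FileSync-Agent/Telemetry'
    Id        = 9059
    StartTime = (Get-Date).AddDays(-1)
} | Format-List TimeCreated, Message
```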
+
+## Process exclusions for Azure File Sync
+
+If you want to configure your antivirus or other applications to skip scanning for files accessed by Azure File Sync, configure the following process exclusions:
+
+- C:\Program Files\Azure\StorageSyncAgent\AfsAutoUpdater.exe
+- C:\Program Files\Azure\StorageSyncAgent\FileSyncSvc.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentLauncher.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentHost.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentManager.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentCore.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\Extensions\XSyncMonitoringExtension\AzureStorageSyncMonitor.exe
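
If you're running Microsoft Defender Antivirus, the following is a minimal sketch for adding these process exclusions; other antivirus products have their own configuration mechanisms, so consult their documentation.

```powershell
# Add the Azure File Sync process exclusions to Microsoft Defender Antivirus.
$afsProcesses = @(
    'C:\Program Files\Azure\StorageSyncAgent\AfsAutoUpdater.exe',
    'C:\Program Files\Azure\StorageSyncAgent\FileSyncSvc.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentLauncher.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentHost.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentManager.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentCore.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\Extensions\XSyncMonitoringExtension\AzureStorageSyncMonitor.exe'
)
foreach ($process in $afsProcesses) {
    Add-MpPreference -ExclusionProcess $process
}
```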
+
+## TLS 1.2 required for Azure File Sync
+
+You can view the TLS settings at your server by looking at the [registry settings](/windows-server/security/tls/tls-registry-settings).
+
+If you're using a proxy, consult your proxy's documentation and ensure it is configured to use TLS 1.2.
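
As a sketch, you can inspect the client-side TLS 1.2 registry values with PowerShell; if the keys don't exist, the operating system defaults apply.

```powershell
# Inspect the SCHANNEL client settings for TLS 1.2. Missing keys mean OS defaults are in effect.
$tls12Client = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client'
Get-ItemProperty -Path $tls12Client -ErrorAction SilentlyContinue |
    Select-Object Enabled, DisabledByDefault
```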
+
+## See also
+- [Troubleshoot Azure File Sync agent installation and server registration](file-sync-troubleshoot-installation.md)
+- [Troubleshoot Azure File Sync sync group management](file-sync-troubleshoot-sync-group-management.md)
+- [Troubleshoot Azure File Sync sync errors](file-sync-troubleshoot-sync-errors.md)
+- [Monitor Azure File Sync](file-sync-monitoring.md)
+- [Troubleshoot Azure Files problems in Windows](../files/storage-troubleshoot-windows-file-connection-problems.md)
+- [Troubleshoot Azure Files problems in Linux](../files/storage-troubleshoot-linux-file-connection-problems.md)
storage File Sync Troubleshoot Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-installation.md
+
+ Title: Troubleshoot Azure File Sync agent installation and server registration | Microsoft Docs
+description: Troubleshoot common issues with installing the Azure File Sync agent and registering Windows Server with the Storage Sync Service.
+ Last updated : 7/28/2022
+# Troubleshoot Azure File Sync agent installation and server registration
+
+After deploying the Storage Sync Service, the next steps in deploying Azure File Sync are installing the Azure File Sync agent and registering Windows Server with the Storage Sync Service. This article is designed to help you troubleshoot and resolve issues that you might encounter during these steps.
+
+## Agent installation
+<a id="agent-installation-failures"></a>**Troubleshoot agent installation failures**
+If the Azure File Sync agent installation fails, at an elevated command prompt, run the following command to turn on logging during agent installation:
+
+```
+StorageSyncAgent.msi /l*v AFSInstaller.log
+```
+
+Review AFSInstaller.log to determine the cause of the installation failure.
+
+<a id="agent-installation-gpo"></a>**Agent installation fails with error: Storage Sync Agent Setup Wizard ended prematurely because of an error**
+
+In the agent installation log, the following error is logged:
+
+```
+CAQuietExec64: + CategoryInfo : SecurityError: (:) , PSSecurityException
+CAQuietExec64: + FullyQualifiedErrorId : UnauthorizedAccess
+CAQuietExec64: Error 0x80070001: Command line returned an error.
+```
+
+This issue occurs if the [PowerShell execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) is configured using group policy and the policy setting is "Allow only signed scripts." All scripts included with the Azure File Sync agent are signed. The Azure File Sync agent installation fails because the installer is performing the script execution using the Bypass execution policy setting.
+
+To resolve this issue, temporarily disable the [Turn on Script Execution](/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) group policy setting on the server. Once the agent installation completes, the group policy setting can be re-enabled.
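+
+To confirm that group policy is the source of the execution policy before you change the setting, you can run this read-only check:
+
+```powershell
+# List the effective execution policy at each scope. A value set at the
+# MachinePolicy or UserPolicy scope indicates it comes from group policy.
+Get-ExecutionPolicy -List
+```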
+
+<a id="agent-installation-on-DC"></a>**Agent installation fails on Active Directory Domain Controller**
+If you try to install the sync agent on an Active Directory domain controller and the PDC role owner is running Windows Server 2008 R2 or an earlier OS version, the sync agent installation may fail.
+
+To resolve this issue, transfer the PDC role to a domain controller running Windows Server 2012 R2 or later, and then install the sync agent. A minimal example of the role transfer follows.
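+
+A minimal sketch, assuming the ActiveDirectory PowerShell module is available; "DC02" is a placeholder for the Windows Server 2012 R2 (or later) domain controller that should own the PDC emulator role.
+
+```powershell
+Import-Module ActiveDirectory
+Move-ADDirectoryServerOperationMasterRole -Identity "DC02" -OperationMasterRole PDCEmulator
+```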
+
+<a id="parameter-is-incorrect"></a>**Accessing a volume on Windows Server 2012 R2 fails with error: The parameter is incorrect**
+After creating a server endpoint on Windows Server 2012 R2, the following error occurs when accessing the volume:
+
+drive letter:\ is not accessible.
+The parameter is incorrect.
+
+To resolve this issue, install [KB2919355](https://support.microsoft.com/help/2919355/windows-rt-8-1-windows-8-1-windows-server-2012-r2-update-april-2014) and restart the server. If this update will not install because a later update is already installed, go to Windows Update, install the latest updates for Windows Server 2012 R2 and restart the server.
+
+## Server registration
+
+<a id="server-registration-missing-subscriptions"></a>**Server Registration does not list all Azure Subscriptions**
+When registering a server using ServerRegistration.exe, subscriptions are missing when you click the Azure Subscription drop-down.
+
+This issue occurs because ServerRegistration.exe will only retrieve subscriptions from the first five Azure AD tenants.
+
+To increase the Server Registration tenant limit on the server, create a DWORD value called ServerRegistrationTenantLimit under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync with a value greater than 5.
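+
+A minimal sketch of creating the value with PowerShell; the value 20 is only an example, so set it to a number greater than 5 that covers the tenants you need.
+
+```powershell
+# The StorageSync key is created by the agent installation, so only the value
+# needs to be added here.
+New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Azure\StorageSync" `
+    -Name ServerRegistrationTenantLimit -PropertyType DWORD -Value 20 -Force
+```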
+
+You can also work around this issue by using the following PowerShell commands to register the server:
+
+```powershell
+Connect-AzAccount -Subscription "<guid>" -Tenant "<guid>"
+Register-AzStorageSyncServer -ResourceGroupName "<your-resource-group-name>" -StorageSyncServiceName "<your-storage-sync-service-name>"
+```
+
+<a id="server-registration-prerequisites"></a>**Server Registration displays the following message: "Pre-requisites are missing"**
+This message appears if the Az or AzureRM PowerShell module is not installed for PowerShell 5.1.
+
+> [!Note]
+> ServerRegistration.exe does not support PowerShell 6.x. You can use the Register-AzStorageSyncServer cmdlet on PowerShell 6.x to register the server.
+
+To install the Az or AzureRM module on PowerShell 5.1, perform the following steps:
+
+1. From an elevated command prompt, type **powershell** and press Enter.
+2. Install the latest Az or AzureRM module by following the documentation (a minimal Az example follows these steps):
+ - [Az module (requires .NET 4.7.2)](/powershell/azure/install-az-ps)
+ - [AzureRM module](https://go.microsoft.com/fwlink/?linkid=856959)
+3. Run ServerRegistration.exe, and complete the wizard to register the server with a Storage Sync Service.
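+
+A minimal sketch of step 2 for the Az module, assuming the PowerShell Gallery is reachable from the server:
+
+```powershell
+# Install the Az module for the current user (requires .NET 4.7.2, as noted above).
+Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
+```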
+
+<a id="server-already-registered"></a>**Server Registration displays the following message: "This server is already registered"**
+
+![A screenshot of the Server Registration dialog with the "server is already registered" error message](media/storage-sync-files-troubleshoot/server-registration-1.png)
+
+This message appears if the server was previously registered with a Storage Sync Service. To unregister the server from the current Storage Sync Service and then register with a new Storage Sync Service, complete the steps that are described in [Unregister a server with Azure File Sync](file-sync-server-registration.md#unregister-the-server-with-storage-sync-service).
+
+If the server is not listed under **Registered servers** in the Storage Sync Service, on the server that you want to unregister, run the following PowerShell commands:
+
+```powershell
+Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
+Reset-StorageSyncServer
+```
+
+> [!Note]
+> If the server is part of a cluster, use the Reset-StorageSyncServer -CleanClusterRegistration parameter to remove the server from the Azure File Sync cluster registration detail.
+
+<a id="web-site-not-trusted"></a>**When I register a server, I see numerous "web site not trusted" responses. Why?**
+This issue occurs when the **Enhanced Internet Explorer Security** policy is enabled during server registration. For more information about how to correctly disable the **Enhanced Internet Explorer Security** policy, see [Prepare Windows Server to use with Azure File Sync](file-sync-deployment-guide.md#prepare-windows-server-to-use-with-azure-file-sync) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
+
+<a id="server-registration-missing"></a>**Server is not listed under registered servers in the Azure portal**
+If a server is not listed under **Registered servers** for a Storage Sync Service:
+1. Sign in to the server that you want to register.
+2. Open File Explorer, and then go to the Storage Sync Agent installation directory (the default location is C:\Program Files\Azure\StorageSyncAgent).
+3. Run ServerRegistration.exe, and complete the wizard to register the server with a Storage Sync Service.
+
+## See also
+- [Troubleshoot Azure File Sync sync group management](file-sync-troubleshoot-sync-group-management.md)
+- [Troubleshoot Azure File Sync sync errors](file-sync-troubleshoot-sync-errors.md)
+- [Troubleshoot Azure File Sync cloud tiering](file-sync-troubleshoot-cloud-tiering.md)
+- [Monitor Azure File Sync](file-sync-monitoring.md)
+- [Troubleshoot Azure Files problems in Windows](../files/storage-troubleshoot-windows-file-connection-problems.md)
+- [Troubleshoot Azure Files problems in Linux](../files/storage-troubleshoot-linux-file-connection-problems.md)
storage File Sync Troubleshoot Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md
+
+ Title: Troubleshoot sync health and errors in Azure File Sync | Microsoft Docs
+description: Troubleshoot common issues with monitoring sync health and resolving sync errors in an Azure File Sync deployment.
+ Last updated : 6/2/2022
+# Troubleshoot Azure File Sync sync health and errors
+
+This article is designed to help you troubleshoot and resolve common sync issues that you might encounter with your Azure File Sync deployment.
+
+## Sync health
+
+<a id="afs-change-detection"></a>**If I created a file directly in my Azure file share over SMB or through the portal, how long does it take for the file to sync to servers in the sync group?**
+
+Changes made directly in the Azure file share aren't detected until the change enumeration job runs against the share, which happens once every 24 hours. As a result, it can take up to 24 hours, plus the time needed for the next sync session, for the file to sync to the servers in the sync group.
+
+<a id="serverendpoint-pending"></a>**Server endpoint health is in a pending state for several hours**
+This issue is expected if you create a cloud endpoint and use an Azure file share that contains data. The change enumeration job that scans for changes in the Azure file share must complete before files can sync between the cloud and server endpoints. The time to complete the job is dependent on the size of the namespace in the Azure file share. The server endpoint health should update once the change enumeration job completes.
+
+### <a id="broken-sync"></a>How do I monitor sync health?
+# [Portal](#tab/portal1)
+Within each sync group, you can drill down into its individual server endpoints to see the status of the last completed sync sessions. A green Health column and a Files Not Syncing value of 0 indicate that sync is working as expected. If not, see below for a list of common sync errors and how to handle files that are not syncing.
+
+![A screenshot of the Azure portal](media/storage-sync-files-troubleshoot/portal-sync-health.png)
+
+# [Server](#tab/server)
+Go to the server's telemetry logs, which can be found in the Event Viewer at `Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry`. Event 9102 corresponds to a completed sync session; for the latest status of sync, look for the most recent event with ID 9102. SyncDirection tells you if this session was an upload or download. If the `HResult` is 0, then the sync session was successful. A non-zero `HResult` means that there was an error during sync; see below for a list of common errors. If the PerItemErrorCount is greater than 0, then some files or folders did not sync properly. It is possible to have an `HResult` of 0 but a PerItemErrorCount that is greater than 0.
+
+Below is an example of a successful upload. For the sake of brevity, only some of the values contained in each 9102 event are listed below.
+
+```
+Replica Sync session completed.
+SyncDirection: Upload,
+HResult: 0,
+SyncFileCount: 2, SyncDirectoryCount: 0,
+AppliedFileCount: 2, AppliedDirCount: 0, AppliedTombstoneCount 0, AppliedSizeBytes: 0.
+PerItemErrorCount: 0,
+TransferredFiles: 2, TransferredBytes: 0, FailedToTransferFiles: 0, FailedToTransferBytes: 0.
+```
+
+Conversely, an unsuccessful upload might look like this:
+
+```
+Replica Sync session completed.
+SyncDirection: Upload,
+HResult: -2134364065,
+SyncFileCount: 0, SyncDirectoryCount: 0,
+AppliedFileCount: 0, AppliedDirCount: 0, AppliedTombstoneCount 0, AppliedSizeBytes: 0.
+PerItemErrorCount: 0,
+TransferredFiles: 0, TransferredBytes: 0, FailedToTransferFiles: 0, FailedToTransferBytes: 0.
+```
+
+Sometimes sync sessions fail overall or have a non-zero PerItemErrorCount but still make forward progress, with some files syncing successfully. Progress can be determined by looking into the *Applied* fields (AppliedFileCount, AppliedDirCount, AppliedTombstoneCount, and AppliedSizeBytes). These fields describe how much of the session is succeeding. If you see multiple sync sessions in a row that are failing but have an increasing *Applied* count, then you should give sync time to try again before opening a support ticket.
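+
+If you prefer to query the telemetry log from PowerShell instead of Event Viewer, here's a minimal sketch; the channel name is discovered at run time because it can differ between agent versions.
+
+```powershell
+$log = (Get-WinEvent -ListLog "*FileSync*Telemetry*" -ErrorAction SilentlyContinue | Select-Object -First 1).LogName
+# Pull the most recent completed-sync event (ID 9102) and show its message.
+Get-WinEvent -FilterHashtable @{ LogName = $log; Id = 9102 } -MaxEvents 1 |
+    Select-Object TimeCreated, Message | Format-List
+```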
+++
+### How do I monitor the progress of a current sync session?
+# [Portal](#tab/portal1)
+Within your sync group, go to the server endpoint in question and look at the Sync Activity section to see the count of files uploaded or downloaded in the current sync session. Keep in mind that this status will be delayed by about 5 minutes, and if your sync session is small enough to be completed within this period, it may not be reported in the portal.
+
+# [Server](#tab/server)
+Look at the most recent 9302 event in the telemetry log on the server (in the Event Viewer, go to Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry). This event indicates the state of the current sync session. TotalItemCount denotes how many files are to be synced, AppliedItemCount the number of files that have been synced so far, and PerItemErrorCount the number of files that are failing to sync (see below for how to deal with this).
+
+```
+Replica Sync Progress.
+ServerEndpointName: <CI>sename</CI>, SyncGroupName: <CI>sgname</CI>, ReplicaName: <CI>rname</CI>,
+SyncDirection: Upload, CorrelationId: {AB4BA07D-5B5C-461D-AAE6-4ED724762B65}.
+AppliedItemCount: 172473, TotalItemCount: 624196. AppliedBytes: 51473711577,
+TotalBytes: 293363829906.
+AreTotalCountsFinal: true.
+PerItemErrorCount: 1006.
+```
++
+### How do I know if my servers are in sync with each other?
+# [Portal](#tab/portal1)
+For each server in a given sync group, make sure:
+- The timestamps for the Last Attempted Sync for both upload and download are recent.
+- The status is green for both upload and download.
+- The Sync Activity field shows very few or no files remaining to sync.
+- The Files Not Syncing field is 0 for both upload and download.
+
+# [Server](#tab/server)
+Look at the completed sync sessions, which are marked by 9102 events in the telemetry event log for each server (in the Event Viewer, go to `Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry`).
+
+1. On any given server, you want to make sure the latest upload and download sessions completed successfully. To do this, check that the `HResult` and PerItemErrorCount are 0 for both upload and download (the SyncDirection field indicates if a given session is an upload or download session). Note that if you do not see a recently completed sync session, it is likely a sync session is currently in progress, which is to be expected if you just added or modified a large amount of data.
+2. When a server is fully up to date with the cloud and has no changes to sync in either direction, you will see empty sync sessions. These are indicated by upload and download events in which all the Sync* fields (SyncFileCount, SyncDirCount, SyncTombstoneCount, and SyncSizeBytes) are zero, meaning there was nothing to sync. Note that these empty sync sessions may not occur on high-churn servers as there is always something new to sync. If there is no sync activity, they should occur every 30 minutes.
+3. If all servers are up to date with the cloud, meaning their recent upload and download sessions are empty sync sessions, you can say with reasonable certainty that the system as a whole is in sync.
+
+If you made changes directly in your Azure file share, Azure File Sync will not detect these changes until change enumeration runs, which happens once every 24 hours. It is possible that a server will say it is up to date with the cloud when it is in fact missing recent changes made directly in the Azure file share.
+++
+### How do I see if there are specific files or folders that are not syncing?
+If your PerItemErrorCount on the server or Files Not Syncing count in the portal are greater than 0 for any given sync session, that means some items are failing to sync. Files and folders can have characteristics that prevent them from syncing. These characteristics can be persistent and require explicit action to resume sync, for example removing unsupported characters from the file or folder name. They can also be transient, meaning the file or folder will automatically resume sync; for example, files with open handles will automatically resume sync when the file is closed. When the Azure File Sync engine detects such a problem, an error log is produced that can be parsed to list the items currently not syncing properly.
+
+To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (located in the agent installation directory of the Azure File Sync agent) to identify files that failed to sync because of open handles, unsupported characters, or other issues. The ItemPath field tells you the location of the file in relation to the root sync directory. See the list of common sync errors below for remediation steps.
+
+> [!Note]
+> If the FileSyncErrorsReport.ps1 script returns "There were no file errors found" or does not list per-item errors for the sync group, the cause is either:
+>
+>- Cause 1: The last completed sync session did not have per-item errors. The portal should be updated soon to show 0 Files Not Syncing. By default, the FileSyncErrorsReport.ps1 script will only show per-item errors for the last completed sync session. To view per-item errors for all sync sessions, use the -ReportAllErrors parameter.
+> - Check the most recent [Event ID 9102](?tabs=server%252cazure-portal#broken-sync) in the Telemetry event log to confirm the PerItemErrorCount is 0.
+>
+>- Cause 2: The ItemResults event log on the server wrapped due to too many per-item errors and the event log no longer contains errors for this sync group.
+> - To prevent this issue, increase the ItemResults event log size. The ItemResults event log can be found under "Applications and Services Logs\Microsoft\FileSync\Agent" in Event Viewer.
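+
+A minimal example of running the script from the default agent installation directory, including per-item errors from all sync sessions rather than only the last completed one:
+
+```powershell
+& "C:\Program Files\Azure\StorageSyncAgent\FileSyncErrorsReport.ps1" -ReportAllErrors
+```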
+
+## Sync errors
+
+### Troubleshooting per file/directory sync errors
+**ItemResults log - per-item sync errors**
+
+| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation |
+||-|--|-|-|
+| 0x80070043 | -2147024829 | ERROR_BAD_NET_NAME | The tiered file on the server is not accessible. This issue occurs if the tiered file was not recalled prior to deleting a server endpoint. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
+| 0x80c80207 | -2134375929 | ECS_E_SYNC_CONSTRAINT_CONFLICT | The file or directory change cannot be synced yet because a dependent folder is not yet synced. This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder is not yet synced. |
+| 0x80C8028A | -2134375798 | ECS_E_SYNC_CONSTRAINT_CONFLICT_ON_FAILED_DEPENDEE | The file or directory change cannot be synced yet because a dependent folder is not yet synced. This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder is not yet synced. |
+| 0x80c80284 | -2134375804 | ECS_E_SYNC_CONSTRAINT_CONFLICT_SESSION_FAILED | The file or directory change cannot be synced yet because a dependent folder is not yet synced and the sync session failed. This item will sync after the dependent changes are synced. | No action required. If the error persists, investigate the sync session failure. |
+| 0x8007007b | -2147024773 | ERROR_INVALID_NAME | The file or directory name is invalid. | Rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
+| 0x80c80255 | -2134375851 | ECS_E_XSMB_REST_INCOMPATIBILITY | The file or directory name is invalid. | Rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
+| 0x80c80018 | -2134376424 | ECS_E_SYNC_FILE_IN_USE | The file cannot be synced because it's in use. The file will be synced when it's no longer in use. | No action required. Azure File Sync creates a temporary VSS snapshot once a day on the server to sync files that have open handles. |
+| 0x80c8031d | -2134375651 | ECS_E_CONCURRENCY_CHECK_FAILED | The file has changed, but the change has not yet been detected by sync. Sync will recover after this change is detected. | No action required. |
+| 0x80070002 | -2147024894 | ERROR_FILE_NOT_FOUND | The file was deleted and sync is not aware of the change. | No action required. Sync will stop logging this error once change detection detects the file was deleted. |
+| 0x80070003 | -2147024893 | ERROR_PATH_NOT_FOUND | Deletion of a file or directory cannot be synced because the item was already deleted in the destination and sync is not aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync detects the item was deleted. |
+| 0x80c80205 | -2134375931 | ECS_E_SYNC_ITEM_SKIP | The file or directory was skipped but will be synced during the next sync session. If this error is reported when downloading the item, the file or directory name is more than likely invalid. | No action required if this error is reported when uploading the file. If the error is reported when downloading the file, rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
+| 0x800700B7 | -2147024713 | ERROR_ALREADY_EXISTS | Creation of a file or directory cannot be synced because the item already exists in the destination and sync is not aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync is aware of this new item. |
+| 0x80c8603e | -2134351810 | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED | The file cannot be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. |
+| 0x80c83008 | -2134364152 | ECS_E_CANNOT_CREATE_AZURE_STAGED_FILE | The file cannot be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. |
+| 0x80c8027C | -2134375812 | ECS_E_ACCESS_DENIED_EFS | The file is encrypted by an unsupported solution (like NTFS EFS). | Decrypt the file and use a supported encryption solution. For a list of supported solutions, see the [Encryption](file-sync-planning.md#encryption) section of the planning guide. |
+| 0x80c80283 | -2134375805 | ECS_E_ACCESS_DENIED_DFSRRO | The file is located on a DFS-R read-only replication folder. | Azure File Sync does not support server endpoints on DFS-R read-only replication folders. See the [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
+| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file has a delete pending state. | No action required. File will be deleted once all open file handles are closed. |
+| 0x80c86044 | -2134351804 | ECS_E_AZURE_AUTHORIZATION_FAILED | The file cannot be synced because the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | Add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
+| 0x80c80243 | -2134375869 | ECS_E_SECURITY_DESCRIPTOR_SIZE_TOO_LARGE | The file cannot be synced because the security descriptor size exceeds the 64 KiB limit. | To resolve this issue, remove access control entries (ACE) on the file to reduce the security descriptor size. |
+| 0x8000ffff | -2147418113 | E_UNEXPECTED | The file cannot be synced due to an unexpected error. | If the error persists for several days, please open a support case. |
+| 0x80070020 | -2147024864 | ERROR_SHARING_VIOLATION | The file cannot be synced because it's in use. The file will be synced when it's no longer in use. | No action required. |
+| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file was changed during sync, so it needs to be synced again. | No action required. |
+| 0x80070017 | -2147024873 | ERROR_CRC | The file cannot be synced due to CRC error. This error can occur if a tiered file was not recalled prior to deleting a server endpoint or if the file is corrupt. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) to remove tiered files that are orphaned. If the error continues to occur after removing orphaned tiered files, run [chkdsk](/windows-server/administration/windows-commands/chkdsk) on the volume. |
+| 0x80c80200 | -2134375936 | ECS_E_SYNC_CONFLICT_NAME_EXISTS | The file cannot be synced because the maximum number of conflict files has been reached. Azure File Sync supports 100 conflict files per file. To learn more about file conflicts, see Azure File Sync [FAQ](../files/storage-files-faq.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#afs-conflict-resolution). | To resolve this issue, reduce the number of conflict files. The file will sync once the number of conflict files is less than 100. |
+| 0x80c8027d | -2134375811 | ECS_E_DIRECTORY_RENAME_FAILED | Rename of a directory cannot be synced because files or folders within the directory have open handles. | No action required. The rename of the directory will be synced once all open file handles within the directory are closed. |
+| 0x800700de | -2147024674 | ERROR_BAD_FILE_TYPE | The tiered file on the server is not accessible because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
+
+### Handling unsupported characters
+If the **FileSyncErrorsReport.ps1** PowerShell script shows per-item sync errors due to unsupported characters (error code 0x8007007b or 0x80c80255), you should remove or rename the characters at fault from the respective file names. PowerShell will likely print these characters as question marks or empty rectangles since most of these characters have no standard visual encoding.
+> [!Note]
+> The [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) can be used to identify characters that are not supported. If your dataset has several files with invalid characters, use the [ScanUnsupportedChars](https://github.com/Azure-Samples/azure-files-samples/tree/master/ScanUnsupportedChars) script to rename files which contain unsupported characters.
+
+The table below contains all of the Unicode characters that Azure File Sync does not yet support.
+
+| Character set | Character count |
+||--|
+| 0x00000000 - 0x0000001F (control characters) | 32 |
+| 0x0000FDD0 - 0x0000FDDD (Arabic Presentation Forms-A) | 14 |
+| <ul><li>0x00000022 (quotation mark)</li><li>0x0000002A (asterisk)</li><li>0x0000002F (forward slash)</li><li>0x0000003A (colon)</li><li>0x0000003C (less than)</li><li>0x0000003E (greater than)</li><li>0x0000003F (question mark)</li><li>0x0000005C (backslash)</li><li>0x0000007C (pipe or bar)</li></ul> | 9 |
+| <ul><li>0x0004FFFE - 0x0004FFFF = 2 (noncharacter)</li><li>0x0008FFFE - 0x0008FFFF = 2 (noncharacter)</li><li>0x000CFFFE - 0x000CFFFF = 2 (noncharacter)</li><li>0x0010FFFE - 0x0010FFFF = 2 (noncharacter)</li></ul> | 8 |
+| <ul><li>0x0000009D (`osc` operating system command)</li><li>0x00000090 (dcs device control string)</li><li>0x0000008F (ss3 single shift three)</li><li>0x00000081 (high octet preset)</li><li>0x0000007F (del delete)</li><li>0x0000008D (ri reverse line feed)</li></ul> | 6 |
+| 0x0000FFF0, 0x0000FFFD, 0x0000FFFE, 0x0000FFFF (specials) | 4 |
+| Files or directories that end with a period | 1 |
+
+### Common sync errors
+<a id="-2147023673"></a>**The sync session was canceled.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x800704c7 |
+| **HRESULT (decimal)** | -2147023673 |
+| **Error string** | ERROR_CANCELLED |
+| **Remediation required** | No |
+
+Sync sessions may fail for various reasons including the server being restarted or updated, VSS snapshots, etc. Although this error looks like it requires follow-up, it is safe to ignore this error unless it persists over a period of several hours.
+
+<a id="-2147012889"></a>**A connection with the service could not be established.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80072ee7 |
+| **HRESULT (decimal)** | -2147012889 |
+| **Error string** | WININET_E_NAME_NOT_RESOLVED |
+| **Remediation required** | Yes |
++
+> [!Note]
+> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
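+
+If you don't want to wait for the next scheduled session after connectivity is restored, the service restart mentioned in the note can be done from PowerShell:
+
+```powershell
+Restart-Service -Name FileSyncSvc -Force
+```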
+
+<a id="-2134376372"></a>**The user request was throttled by the service.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8004c |
+| **HRESULT (decimal)** | -2134376372 |
+| **Error string** | ECS_E_USER_REQUEST_THROTTLED |
+| **Remediation required** | No |
+
+No action is required; the server will try again. If this error persists for several hours, create a support request.
+
+<a id="-2134364160"></a>**Sync failed because the operation was aborted**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83000 |
+| **HRESULT (decimal)** | -2134364160 |
+| **Error string** | ECS_E_OPERATION_ABORTED |
+| **Remediation required** | No |
+
+No action is required. If this error persists for several hours, create a support request.
+
+<a id="-2134364043"></a>**Sync is blocked until change detection completes post restore**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83075 |
+| **HRESULT (decimal)** | -2134364043 |
+| **Error string** | ECS_E_SYNC_BLOCKED_ON_CHANGE_DETECTION_POST_RESTORE |
+| **Remediation required** | No |
+
+No action is required. When a file or file share (cloud endpoint) is restored using Azure Backup, sync is blocked until change detection completes on the Azure file share. Change detection runs immediately once the restore is complete and the duration is based on the number of files in the file share.
+
+<a id="-2147216747"></a>**Sync failed because the sync database was unloaded.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80041295 |
+| **HRESULT (decimal)** | -2147216747 |
+| **Error string** | SYNC_E_METADATA_INVALID_OPERATION |
+| **Remediation required** | No |
+
+This error typically occurs when a backup application creates a VSS snapshot and the sync database is unloaded. If this error persists for several hours, create a support request.
+
+<a id="-2134364065"></a>**Sync can't access the Azure file share specified in the cloud endpoint.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8305f |
+| **HRESULT (decimal)** | -2134364065 |
+| **Error string** | ECS_E_EXTERNAL_STORAGE_ACCOUNT_AUTHORIZATION_FAILED |
+| **Remediation required** | Yes |
+
+This error occurs because the Azure File Sync agent cannot access the Azure file share, which may be because the Azure file share or the storage account hosting it no longer exists. You can troubleshoot this error by working through the following steps:
+
+1. [Verify the storage account exists.](#troubleshoot-storage-account)
+2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
+3. [Ensure Azure File Sync has access to the storage account.](#troubleshoot-rbac)
+4. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
+
+<a id="-2134351804"></a>**Sync failed because the request is not authorized to perform this operation.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c86044 |
+| **HRESULT (decimal)** | -2134351804 |
+| **Error string** | ECS_E_AZURE_AUTHORIZATION_FAILED |
+| **Remediation required** | Yes |
+
+This error occurs because the Azure File Sync agent is not authorized to access the Azure file share. You can troubleshoot this error by working through the following steps:
+
+1. [Verify the storage account exists.](#troubleshoot-storage-account)
+2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
+3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
+4. [Ensure Azure File Sync has access to the storage account.](#troubleshoot-rbac)
+
+<a id="-2134364064"></a><a id="cannot-resolve-storage"></a>**The storage account name used could not be resolved.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80C83060 |
+| **HRESULT (decimal)** | -2134364064 |
+| **Error string** | ECS_E_STORAGE_ACCOUNT_NAME_UNRESOLVED |
+| **Remediation required** | Yes |
+
+1. Check that you can resolve the storage DNS name from the server.
+
+ ```powershell
+ Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 443
+ ```
+2. [Verify the storage account exists.](#troubleshoot-storage-account)
+3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
+
+> [!Note]
+> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+
+<a id="-2134364022"></a><a id="storage-unknown-error"></a>**An unknown error occurred while accessing the storage account.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8308a |
+| **HRESULT (decimal)** | -2134364022 |
+| **Error string** | ECS_E_STORAGE_ACCOUNT_UNKNOWN_ERROR |
+| **Remediation required** | Yes |
+
+1. [Verify the storage account exists.](#troubleshoot-storage-account)
+2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
+
+<a id="-2134364014"></a>**Sync failed due to storage account locked.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83092 |
+| **HRESULT (decimal)** | -2134364014 |
+| **Error string** | ECS_E_STORAGE_ACCOUNT_LOCKED |
+| **Remediation required** | Yes |
+
+This error occurs because the storage account has a read-only [resource lock](../../azure-resource-manager/management/lock-resources.md). To resolve this issue, remove the read-only resource lock on the storage account.
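+
+A minimal sketch of removing the lock with Az PowerShell; the resource group and storage account names are placeholders, and the `Properties.level` property name is an assumption based on the current Az module output.
+
+```powershell
+$locks = Get-AzResourceLock -ResourceGroupName "<resource-group>" `
+    -ResourceName "<storage-account-name>" -ResourceType "Microsoft.Storage/storageAccounts"
+# Remove only read-only locks; leave delete locks in place.
+$locks | Where-Object { $_.Properties.level -eq "ReadOnly" } | ForEach-Object {
+    Remove-AzResourceLock -LockId $_.LockId -Force
+}
+```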
+
+<a id="-1906441138"></a>**Sync failed due to a problem with the sync database.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x8e5e044e |
+| **HRESULT (decimal)** | -1906441138 |
+| **Error string** | JET_errWriteConflict |
+| **Remediation required** | Yes |
+
+This error occurs when there is a problem with the internal database used by Azure File Sync. When this issue occurs, create a support request and we will contact you to help you resolve this issue.
+
+<a id="-2134364053"></a>**The Azure File Sync agent version installed on the server is not supported.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80C8306B |
+| **HRESULT (decimal)** | -2134364053 |
+| **Error string** | ECS_E_AGENT_VERSION_BLOCKED |
+| **Remediation required** | Yes |
+
+This error occurs if the Azure File Sync agent version installed on the server is not supported. To resolve this issue, [upgrade](file-sync-release-notes.md#azure-file-sync-agent-update-policy) to a [supported agent version](file-sync-release-notes.md#supported-versions).
+
+<a id="-2134351810"></a>**You reached the Azure file share storage limit.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8603e |
+| **HRESULT (decimal)** | -2134351810 |
+| **Error string** | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED |
+| **Remediation required** | Yes |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80249 |
+| **HRESULT (decimal)** | -2134375863 |
+| **Error string** | ECS_E_NOT_ENOUGH_REMOTE_STORAGE |
+| **Remediation required** | Yes |
+
+Sync sessions fail with either of these errors when the Azure file share storage limit has been reached, which can happen if a quota is applied for an Azure file share or if the usage exceeds the limits for an Azure file share. For more information, see the [current limits for an Azure file share](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+
+1. Navigate to the sync group within the Storage Sync Service.
+2. Select the cloud endpoint within the sync group.
+3. Note the Azure file share name in the opened pane.
+4. Select the linked storage account. If this link fails, the referenced storage account has been removed.
+
+ ![A screenshot showing the cloud endpoint detail pane with a link to the storage account.](media/storage-sync-files-troubleshoot/file-share-inaccessible-1.png)
+
+5. Select **Files** to view the list of file shares.
+6. Click the three dots at the end of the row for the Azure file share referenced by the cloud endpoint.
+7. Verify that the **Usage** is below the **Quota**. Note that unless an alternate quota has been specified, the quota will match the [maximum size of the Azure file share](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json). For a PowerShell alternative to this check, see the sketch after this procedure.
+
+ ![A screenshot of the Azure file share properties.](media/storage-sync-files-troubleshoot/file-share-limit-reached-1.png)
+
+If the share is full and a quota is not set, one possible way to fix this issue is to make each subfolder of the current server endpoint into its own server endpoint in a separate sync group. This way, each subfolder syncs to an individual Azure file share.
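+
+As an alternative to the portal steps above, here's a minimal sketch for comparing usage with quota from PowerShell; the names are placeholders, and the property names reflect the Az.Storage module at the time of writing.
+
+```powershell
+$share = Get-AzRmStorageShare -ResourceGroupName "<resource-group>" `
+    -StorageAccountName "<storage-account-name>" -Name "<file-share-name>" -GetShareUsage
+"{0:N0} GiB used of a {1} GiB quota" -f ($share.ShareUsageBytes / 1GB), $share.QuotaGiB
+```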
+
+<a id="-2134351824"></a>**The Azure file share cannot be found.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c86030 |
+| **HRESULT (decimal)** | -2134351824 |
+| **Error string** | ECS_E_AZURE_FILE_SHARE_NOT_FOUND |
+| **Remediation required** | Yes |
+
+This error occurs when the Azure file share is not accessible. To troubleshoot:
+
+1. [Verify the storage account exists.](#troubleshoot-storage-account)
+2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
+
+If the Azure file share was deleted, you need to create a new file share and then recreate the sync group.
+
+<a id="-2134364042"></a>**Sync is paused while this Azure subscription is suspended.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80C83076 |
+| **HRESULT (decimal)** | -2134364042 |
+| **Error string** | ECS_E_SYNC_BLOCKED_ON_SUSPENDED_SUBSCRIPTION |
+| **Remediation required** | Yes |
+
+This error occurs when the Azure subscription is suspended. Sync will be reenabled when the Azure subscription is restored. See [Why is my Azure subscription disabled and how do I reactivate it?](../../cost-management-billing/manage/subscription-disabled.md) for more information.
+
+<a id="-2134375618"></a>**The storage account has a firewall or virtual networks configured.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8033e |
+| **HRESULT (decimal)** | -2134375618 |
+| **Error string** | ECS_E_SERVER_BLOCKED_BY_NETWORK_ACL |
+| **Remediation required** | Yes |
+
+This error occurs when the Azure file share is inaccessible because of a storage account firewall or because the storage account belongs to a virtual network. Verify the firewall and virtual network settings on the storage account are configured properly. For more information, see [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings).
+
+<a id="-2134375911"></a>**Sync failed due to a problem with the sync database.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80219 |
+| **HRESULT (decimal)** | -2134375911 |
+| **Error string** | ECS_E_SYNC_METADATA_WRITE_LOCK_TIMEOUT |
+| **Remediation required** | No |
+
+This error usually resolves itself, and can occur if there are:
+
+* A high number of file changes across the servers in the sync group.
+* A large number of errors on individual files and directories.
+
+If this error persists for longer than a few hours, create a support request and we will contact you to help you resolve this issue.
+
+<a id="-2146762487"></a>**The server failed to establish a secure connection. The cloud service received an unexpected certificate.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x800b0109 |
+| **HRESULT (decimal)** | -2146762487 |
+| **Error string** | CERT_E_UNTRUSTEDROOT |
+| **Remediation required** | Yes |
+
+This error can happen if your organization is using a TLS terminating proxy or if a malicious entity is intercepting the traffic between your server and the Azure File Sync service. If you are certain that this is expected (because your organization is using a TLS terminating proxy), you can skip certificate verification with a registry override.
+
+1. Create the SkipVerifyingPinnedRootCertificate registry value.
+
+ ```powershell
+ New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Azure\StorageSync -Name SkipVerifyingPinnedRootCertificate -PropertyType DWORD -Value 1
+ ```
+
+2. Restart the sync service on the registered server.
+
+ ```powershell
+ Restart-Service -Name FileSyncSvc -Force
+ ```
+
+By setting this registry value, the Azure File Sync agent will accept any locally trusted TLS/SSL certificate when transferring data between the server and the cloud service.
+
+<a id="-2147012894"></a>**A connection with the service could not be established.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80072ee2 |
+| **HRESULT (decimal)** | -2147012894 |
+| **Error string** | WININET_E_TIMEOUT |
+| **Remediation required** | Yes |
++
+> [!Note]
+> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+
+<a id="-2147012721"></a>**Sync failed because the server was unable to decode the response from the Azure File Sync service**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80072f8f |
+| **HRESULT (decimal)** | -2147012721 |
+| **Error string** | WININET_E_DECODING_FAILED |
+| **Remediation required** | Yes |
+
+This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration.
+
+<a id="-2134375680"></a>**Sync failed due to a problem with authentication.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80300 |
+| **HRESULT (decimal)** | -2134375680 |
+| **Error string** | ECS_E_SERVER_CREDENTIAL_NEEDED |
+| **Remediation required** | Yes |
+
+This error typically occurs because the server time is incorrect. If the server is running in a virtual machine, verify the time on the host is correct.
+
+<a id="-2134364040"></a>**Sync failed due to certificate expiration.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83078 |
+| **HRESULT (decimal)** | -2134364040 |
+| **Error string** | ECS_E_AUTH_SRV_CERT_EXPIRED |
+| **Remediation required** | Yes |
+
+This error occurs because the certificate used for authentication is expired.
+
+To confirm the certificate is expired, perform the following steps:
+1. Open the Certificates MMC snap-in, select Computer Account and navigate to Certificates (Local Computer)\Personal\Certificates.
+2. Check if the client authentication certificate is expired.
+
+If the client authentication certificate is expired, run the following PowerShell command on the server:
+
+```powershell
+Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
+```
+<a id="-2134375896"></a>**Sync failed due to authentication certificate not found.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80228 |
+| **HRESULT (decimal)** | -2134375896 |
+| **Error string** | ECS_E_AUTH_SRV_CERT_NOT_FOUND |
+| **Remediation required** | Yes |
+
+This error occurs because the certificate used for authentication is not found.
+
+To resolve this issue, run the following PowerShell command on the server:
+
+```powershell
+Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
+```
+<a id="-2134364039"></a>**Sync failed due to authentication identity not found.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83079 |
+| **HRESULT (decimal)** | -2134364039 |
+| **Error string** | ECS_E_AUTH_IDENTITY_NOT_FOUND |
+| **Remediation required** | Yes |
+
+This error occurs because the server endpoint deletion failed and the endpoint is now in a partially deleted state. To resolve this issue, retry deleting the server endpoint.
+
+<a id="-1906441711"></a><a id="-2134375654"></a><a id="doesnt-have-enough-free-space"></a>**The volume where the server endpoint is located is low on disk space.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x8e5e0211 |
+| **HRESULT (decimal)** | -1906441711 |
+| **Error string** | JET_errLogDiskFull |
+| **Remediation required** | Yes |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8031a |
+| **HRESULT (decimal)** | -2134375654 |
+| **Error string** | ECS_E_NOT_ENOUGH_LOCAL_STORAGE |
+| **Remediation required** | Yes |
+
+Sync sessions fail with one of these errors because either the volume has insufficient disk space or disk quota limit is reached. This error commonly occurs because files outside the server endpoint are using up space on the volume. Free up space on the volume by adding additional server endpoints, moving files to a different volume, or increasing the size of the volume the server endpoint is on. If a disk quota is configured on the volume using [File Server Resource Manager](/windows-server/storage/fsrm/fsrm-overview) or [NTFS quota](/windows-server/administration/windows-commands/fsutil-quota), increase the quota limit.
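+
+A quick way to check the free space on the volume that hosts the server endpoint (replace `D` with the relevant drive letter):
+
+```powershell
+Get-Volume -DriveLetter D | Select-Object DriveLetter, FileSystemLabel, SizeRemaining, Size
+```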
+
+<a id="-2134364145"></a><a id="replica-not-ready"></a>**The service is not yet ready to sync with this server endpoint.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8300f |
+| **HRESULT (decimal)** | -2134364145 |
+| **Error string** | ECS_E_REPLICA_NOT_READY |
+| **Remediation required** | No |
+
+This error occurs because the cloud endpoint was created with content already existing on the Azure file share. Azure File Sync must scan the Azure file share for all content before allowing the server endpoint to proceed with its initial synchronization.
+
+<a id="-2134375877"></a><a id="-2134375908"></a><a id="-2134375853"></a>**Sync failed due to problems with many individual files.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8023b |
+| **HRESULT (decimal)** | -2134375877 |
+| **Error string** | ECS_E_SYNC_METADATA_KNOWLEDGE_SOFT_LIMIT_REACHED |
+| **Remediation required** | Yes |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8021c |
+| **HRESULT (decimal)** | -2134375908 |
+| **Error string** | ECS_E_SYNC_METADATA_KNOWLEDGE_LIMIT_REACHED |
+| **Remediation required** | Yes |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80253 |
+| **HRESULT (decimal)** | -2134375853 |
+| **Error string** | ECS_E_TOO_MANY_PER_ITEM_ERRORS |
+| **Remediation required** | Yes |
+
+Sync sessions fail with one of these errors when there are many files that are failing to sync with per-item errors. Perform the steps documented in the [How do I see if there are specific files or folders that are not syncing?](?tabs=portal1%252cazure-portal#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing) section to resolve the per-item errors. For sync error ECS_E_SYNC_METADATA_KNOWLEDGE_LIMIT_REACHED, please open a support case.
+
+> [!NOTE]
+> Azure File Sync creates a temporary VSS snapshot once a day on the server to sync files that have open handles.
+
+<a id="-2134376423"></a>**Sync failed due to a problem with the server endpoint path.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80019 |
+| **HRESULT (decimal)** | -2134376423 |
+| **Error string** | ECS_E_SYNC_INVALID_PATH |
+| **Remediation required** | Yes |
+
+Ensure the path exists, is on a local NTFS volume, and is not a reparse point or existing server endpoint.
+
+<a id="-2134375817"></a>**Sync failed because the filter driver version is not compatible with the agent version**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80C80277 |
+| **HRESULT (decimal)** | -2134375817 |
+| **Error string** | ECS_E_INCOMPATIBLE_FILTER_VERSION |
+| **Remediation required** | Yes |
+
+This error occurs because the Cloud Tiering filter driver (StorageSync.sys) version loaded is not compatible with the Storage Sync Agent (FileSyncSvc) service. If the Azure File Sync agent was upgraded, restart the server to complete the installation. If the error continues to occur, uninstall the agent, restart the server and reinstall the Azure File Sync agent.
+
+<a id="-2134376373"></a>**The service is currently unavailable.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8004b |
+| **HRESULT (decimal)** | -2134376373 |
+| **Error string** | ECS_E_SERVICE_UNAVAILABLE |
+| **Remediation required** | No |
+
+This error occurs because the Azure File Sync service is unavailable. This error will auto-resolve when the Azure File Sync service is available again.
+
+> [!Note]
+> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+
+<a id="-2146233088"></a>**Sync failed due to an exception.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80131500 |
+| **HRESULT (decimal)** | -2146233088 |
+| **Error string** | COR_E_EXCEPTION |
+| **Remediation required** | No |
+
+This error occurs when an exception interrupts sync. If the error persists for several hours, please create a support request.
+
+<a id="-2134364045"></a>**Sync failed because the storage account has failed over to another region.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83073 |
+| **HRESULT (decimal)** | -2134364045 |
+| **Error string** | ECS_E_STORAGE_ACCOUNT_FAILED_OVER |
+| **Remediation required** | Yes |
+
+This error occurs because the storage account has failed over to another region. Azure File Sync does not support the storage account failover feature. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files. To resolve this issue, move the storage account to the primary region.
+
+<a id="-2134375922"></a>**Sync failed due to a transient problem with the sync database.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8020e |
+| **HRESULT (decimal)** | -2134375922 |
+| **Error string** | ECS_E_SYNC_METADATA_WRITE_LEASE_LOST |
+| **Remediation required** | No |
+
+This error occurs because of an internal problem with the sync database. This error will auto-resolve when sync retries. If this error continues for an extended period of time, create a support request and we will contact you to help you resolve this issue.
+
+<a id="-2134364024"></a>**Sync failed due to change in Azure Active Directory tenant**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83088 |
+| **HRESULT (decimal)** | -2134364024 |
+| **Error string** | ECS_E_INVALID_AAD_TENANT |
+| **Remediation required** | Yes |
+
+Verify you have the latest Azure File Sync agent version installed and give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](#troubleshoot-rbac)).
+
+<a id="-2134364010"></a>**Sync failed due to firewall and virtual network exception not configured**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83096 |
+| **HRESULT (decimal)** | -2134364010 |
+| **Error string** | ECS_E_MGMT_STORAGEACLSBYPASSNOTSET |
+| **Remediation required** | Yes |
+
+This error occurs if the firewall and virtual network settings are enabled on the storage account and the "Allow trusted Microsoft services to access this storage account" exception is not checked. To resolve this issue, follow the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide.
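+
+If you manage the storage account with Az PowerShell, here's a minimal sketch of enabling the exception; the names are placeholders, and note that this sets the Bypass list as a whole (include Logging or Metrics as well if you rely on them).
+
+```powershell
+Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "<resource-group>" `
+    -Name "<storage-account-name>" -Bypass AzureServices
+```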
+
+<a id="-2147024891"></a>**Sync failed with access denied due to security settings on the storage account or NTFS permissions on the server.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80070005 |
+| **HRESULT (decimal)** | -2147024891 |
+| **Error string** | ERROR_ACCESS_DENIED |
+| **Remediation required** | Yes |
+
+This error can occur if Azure File Sync cannot access the storage account due to security settings or if the NT AUTHORITY\SYSTEM account does not have permissions to the System Volume Information folder on the volume where the server endpoint is located. Note, if individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section.
+
+1. Verify the **SMB security settings** on the storage account are allowing **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
+3. Verify the **NT AUTHORITY\SYSTEM** account has permissions to the System Volume Information folder on the volume where the server endpoint is located by performing the following steps:
+
+ a. Download [Psexec](/sysinternals/downloads/psexec) tool.
+ b. Run the following command from an elevated command prompt to launch a command prompt using the system account: `PsExec.exe -i -s -d cmd`
+ c. From the command prompt running under the system account, run the following command to confirm the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder: `cacls "drive letter:\system volume information" /T /C`
+ d. If the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder, run the following command: `cacls "drive letter:\system volume information" /T /E /G "NT AUTHORITY\SYSTEM:F"`
+ - If step #d fails with access denied, run the following command to take ownership of the System Volume Information folder and then repeat step #d: `takeown /A /R /F "drive letter:\System Volume Information"`
+
+<a id="-2134375810"></a>**Sync failed because the Azure file share was deleted and recreated.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8027e |
+| **HRESULT (decimal)** | -2134375810 |
+| **Error string** | ECS_E_SYNC_REPLICA_ROOT_CHANGED |
+| **Remediation required** | Yes |
+
+This error occurs because Azure File Sync does not support deleting and recreating an Azure file share in the same sync group.
+
+To resolve this issue, delete and recreate the sync group by performing the following steps:
+
+1. Delete all server endpoints in the sync group.
+2. Delete the cloud endpoint.
+3. Delete the sync group.
+4. If cloud tiering was enabled on a server endpoint, delete the orphaned tiered files on the server by performing the steps documented in the [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) section.
+5. Recreate the sync group.
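+
+The same steps can also be scripted with the Az.StorageSync module. The following is an untested sketch with placeholder names; step 4 (cleaning up orphaned tiered files) still has to be performed on each server as described above.
+
+```powershell
+# Placeholder names - replace with your own values
+$rg  = "<resource-group>"
+$sss = "<storage-sync-service>"
+$sg  = "<sync-group>"
+
+# 1. Delete all server endpoints in the sync group
+Get-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss -SyncGroupName $sg |
+    Remove-AzStorageSyncServerEndpoint -Force
+
+# 2. Delete the cloud endpoint
+Get-AzStorageSyncCloudEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss -SyncGroupName $sg |
+    Remove-AzStorageSyncCloudEndpoint -Force
+
+# 3. Delete the sync group
+Remove-AzStorageSyncGroup -ResourceGroupName $rg -StorageSyncServiceName $sss -Name $sg -Force
+
+# 5. Recreate the sync group, then add a cloud endpoint and server endpoints again
+New-AzStorageSyncGroup -ResourceGroupName $rg -StorageSyncServiceName $sss -Name $sg
+```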
+
+<a id="-2134375852"></a>**Sync detected the replica has been restored to an older state**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80254 |
+| **HRESULT (decimal)** | -2134375852 |
+| **Error string** | ECS_E_SYNC_REPLICA_BACK_IN_TIME |
+| **Remediation required** | No |
+
+No action is required. This error occurs because sync detected the replica has been restored to an older state. Sync will now enter a reconciliation mode, where it recreates the sync relationship by merging the contents of the Azure file share and the data on the server endpoint. When reconciliation mode is triggered, the process can be very time consuming depending upon the namespace size. Regular synchronization does not happen until the reconciliation finishes, and files that are different (last modified time or size) between the Azure file share and server endpoint will result in file conflicts.
+
+<a id="-2145844941"></a>**Sync failed because the HTTP request was redirected**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80190133 |
+| **HRESULT (decimal)** | -2145844941 |
+| **Error string** | HTTP_E_STATUS_REDIRECT_KEEP_VERB |
+| **Remediation required** | Yes |
+
+This error occurs because Azure File Sync does not support HTTP redirection (3xx status code). To resolve this issue, disable HTTP redirect on your proxy server or network device.
+
+<a id="-2134364027"></a>**A timeout occurred during offline data transfer, but it is still in progress.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83085 |
+| **HRESULT (decimal)** | -2134364027 |
+| **Error string** | ECS_E_DATA_INGESTION_WAIT_TIMEOUT |
+| **Remediation required** | No |
+
+This error occurs when a data ingestion operation exceeds the timeout. This error can be ignored if sync is making progress (AppliedItemCount is greater than 0). See [How do I monitor the progress of a current sync session?](#how-do-i-monitor-the-progress-of-a-current-sync-session).
+
+<a id="-2134375814"></a>**Sync failed because the server endpoint path cannot be found on the server.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8027a |
+| **HRESULT (decimal)** | -2134375814 |
+| **Error string** | ECS_E_SYNC_ROOT_DIRECTORY_NOT_FOUND |
+| **Remediation required** | Yes |
+
+This error occurs if the directory used as the server endpoint path was renamed or deleted. If the directory was renamed, rename the directory back to the original name and restart the Storage Sync Agent service (FileSyncSvc).
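+
+For example, the service can be restarted from an elevated PowerShell session:
+
+```powershell
+# Restart the Storage Sync Agent service after renaming the directory back to its original name
+Restart-Service -Name FileSyncSvc
+```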
+
+If the directory was deleted, perform the following steps to remove the existing server endpoint and create a new server endpoint using a new path:
+
+1. Remove the server endpoint in the sync group by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
+1. Create a new server endpoint in the sync group by following the steps documented in [Add a server endpoint](file-sync-server-endpoint-create.md).
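+
+If you prefer PowerShell to the portal, the two linked steps roughly correspond to the sketch below. All names and the local path are placeholders; check the cmdlet parameter sets for your Az.StorageSync version.
+
+```powershell
+$rg  = "<resource-group>"
+$sss = "<storage-sync-service>"
+$sg  = "<sync-group>"
+
+# Remove the old server endpoint
+Remove-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss `
+    -SyncGroupName $sg -Name "<old-server-endpoint>" -Force
+
+# Create a new server endpoint that points at the new path on the registered server
+$server = Get-AzStorageSyncServer -ResourceGroupName $rg -StorageSyncServiceName $sss |
+    Where-Object { $_.FriendlyName -eq "<server-name>" }
+
+New-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss `
+    -SyncGroupName $sg -Name "<new-server-endpoint>" `
+    -ServerResourceId $server.ResourceId -ServerLocalPath "D:\NewPath"
+```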
+
+<a id="-2134375783"></a>**Server endpoint provisioning failed due to an empty server path.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80C80299 |
+| **HRESULT (decimal)** | -2134375783 |
+| **Error string** | ECS_E_SYNC_AUTHORITATIVE_UPLOAD_EMPTY_SET |
+| **Remediation required** | Yes |
+
+Server endpoint provisioning fails with this error code if these conditions are met:
+* This server endpoint was provisioned with the initial sync mode: [server authoritative](file-sync-server-endpoint-create.md#initial-sync-section)
+* Local server path is empty or contains no items recognized as able to sync.
+
+This provisioning error protects you from deleting all content that might be available in an Azure file share. Server authoritative upload is a special mode that catches up a cloud location that was already seeded with the updates from the server location. Review this [migration guide](../files/storage-files-migration-server-hybrid-databox.md) to understand the scenario this mode was built for.
+
+To resolve this issue, recreate the server endpoint:
+
+1. Remove the server endpoint in the sync group by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
+1. Create a new server endpoint in the sync group by following the steps documented in [Add a server endpoint](file-sync-server-endpoint-create.md).
+
+### Common troubleshooting steps
+<a id="troubleshoot-storage-account"></a>**Verify the storage account exists.**
+# [Portal](#tab/azure-portal)
+1. Navigate to the sync group within the Storage Sync Service.
+2. Select the cloud endpoint within the sync group.
+3. Note the Azure file share name in the opened pane.
+4. Select the linked storage account. If this link fails, the referenced storage account has been removed.
+ ![A screenshot showing the cloud endpoint detail pane with a link to the storage account.](media/storage-sync-files-troubleshoot/file-share-inaccessible-1.png)
+
+# [PowerShell](#tab/azure-powershell)
+```powershell
+# Variables for you to populate based on your configuration
+$region = "<Az_Region>"
+$resourceGroup = "<RG_Name>"
+$syncService = "<storage-sync-service>"
+$syncGroup = "<sync-group>"
+
+# Log into the Azure account
+Connect-AzAccount
+
+# Check to ensure Azure File Sync is available in the selected Azure
+# region.
+$regions = [System.String[]]@()
+Get-AzLocation | ForEach-Object {
+ if ($_.Providers -contains "Microsoft.StorageSync") {
+ $regions += $_.Location
+ }
+}
+
+if ($regions -notcontains $region) {
+ throw [System.Exception]::new("Azure File Sync is either not available in the " + `
+ " selected Azure Region or the region is mistyped.")
+}
+
+# Check to ensure resource group exists
+$resourceGroups = [System.String[]]@()
+Get-AzResourceGroup | ForEach-Object {
+ $resourceGroups += $_.ResourceGroupName
+}
+
+if ($resourceGroups -notcontains $resourceGroup) {
+ throw [System.Exception]::new("The provided resource group $resourceGroup does not exist.")
+}
+
+# Check to make sure the provided Storage Sync Service
+# exists.
+$syncServices = [System.String[]]@()
+
+Get-AzStorageSyncService -ResourceGroupName $resourceGroup | ForEach-Object {
+ $syncServices += $_.StorageSyncServiceName
+}
+
+if ($syncServices -notcontains $syncService) {
+ throw [System.Exception]::new("The provided Storage Sync Service $syncService does not exist.")
+}
+
+# Check to make sure the provided Sync Group exists
+$syncGroups = [System.String[]]@()
+
+Get-AzStorageSyncGroup -ResourceGroupName $resourceGroup -StorageSyncServiceName $syncService | ForEach-Object {
+ $syncGroups += $_.SyncGroupName
+}
+
+if ($syncGroups -notcontains $syncGroup) {
+ throw [System.Exception]::new("The provided sync group $syncGroup does not exist.")
+}
+
+# Get reference to cloud endpoint
+$cloudEndpoint = Get-AzStorageSyncCloudEndpoint `
+ -ResourceGroupName $resourceGroup `
+ -StorageSyncServiceName $syncService `
+ -SyncGroupName $syncGroup
+
+# Get reference to storage account
+$storageAccount = Get-AzStorageAccount | Where-Object {
+ $_.Id -eq $cloudEndpoint.StorageAccountResourceId
+}
+
+if ($storageAccount -eq $null) {
+ throw [System.Exception]::new("The storage account referenced in the cloud endpoint does not exist.")
+}
+```
++
+<a id="troubleshoot-azure-file-share"></a>**Ensure the Azure file share exists.**
+# [Portal](#tab/azure-portal)
+1. Click **Overview** on the left-hand table of contents to return to the main storage account page.
+2. Select **Files** to view the list of file shares.
+3. Verify the file share referenced by the cloud endpoint appears in the list of file shares (you noted the share name earlier when verifying the storage account).
+
+# [PowerShell](#tab/azure-powershell)
+```powershell
+$fileShare = Get-AzStorageShare -Context $storageAccount.Context | Where-Object {
+ $_.Name -eq $cloudEndpoint.AzureFileShareName -and
+ $_.IsSnapshot -eq $false
+}
+
+if ($fileShare -eq $null) {
+ throw [System.Exception]::new("The Azure file share referenced by the cloud endpoint does not exist")
+}
+```
++
+<a id="troubleshoot-rbac"></a>**Ensure Azure File Sync has access to the storage account.**
+# [Portal](#tab/azure-portal)
+1. Click **Access control (IAM)** on the left-hand table of contents.
+1. Click the **Role assignments** tab to list the users and applications (*service principals*) that have access to your storage account.
+1. Verify **Microsoft.StorageSync** or **Hybrid File Sync Service** (old application name) appears in the list with the **Reader and Data Access** role.
+
+ ![A screenshot of the Hybrid File Sync Service service principal in the access control tab of the storage account](media/storage-sync-files-troubleshoot/file-share-inaccessible-3.png)
+
+ If **Microsoft.StorageSync** or **Hybrid File Sync Service** does not appear in the list, perform the following steps:
+
+ - Click **Add**.
+ - In the **Role** field, select **Reader and Data Access**.
+   - In the **Select** field, type **Microsoft.StorageSync**, select the service principal, and then click **Save**.
+
+# [PowerShell](#tab/azure-powershell)
+```powershell
+$role = Get-AzRoleAssignment -Scope $storageAccount.Id | Where-Object { $_.DisplayName -eq "Microsoft.StorageSync" }
+
+if ($role -eq $null) {
+ throw [System.Exception]::new("The storage account does not have the Azure File Sync " + `
+ "service principal authorized to access the data within the " + `
+ "referenced Azure file share.")
+}
+```
++
+## See also
+- [Troubleshoot Azure File Sync sync group management](file-sync-troubleshoot-sync-group-management.md)
+- [Troubleshoot Azure File Sync agent installation and server registration](file-sync-troubleshoot-installation.md)
+- [Troubleshoot Azure File Sync cloud tiering](file-sync-troubleshoot-cloud-tiering.md)
+- [Monitor Azure File Sync](file-sync-monitoring.md)
+- [Troubleshoot Azure Files problems in Windows](../files/storage-troubleshoot-windows-file-connection-problems.md)
+- [Troubleshoot Azure Files problems in Linux](../files/storage-troubleshoot-linux-file-connection-problems.md)
storage File Sync Troubleshoot Sync Group Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-group-management.md
+
+ Title: Troubleshoot Azure File Sync sync group management | Microsoft Docs
+description: Troubleshoot common issues in managing Azure File Sync sync groups, including cloud endpoint creation and server endpoint creation, deletion, and health.
+Last updated : 7/28/2022
+# Troubleshoot Azure File Sync sync group management
+A sync group defines the sync topology for a set of files. Endpoints within a sync group are kept in sync with each other. A sync group must contain one cloud endpoint, which represents an Azure file share, and one or more server endpoints, each of which represents a path on a registered server. This article is designed to help you troubleshoot and resolve issues that you might encounter when managing sync groups.
+
+## Cloud endpoint creation errors
+
+<a id="cloud-endpoint-mgmtinternalerror"></a>**Cloud endpoint creation fails, with this error: "MgmtInternalError"**
+This error can occur if the Azure File Sync service cannot access the storage account due to SMB security settings. To enable Azure File Sync to access the storage account, the SMB security settings on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
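+
+As a sketch, the same settings can be checked and restored with the Az.Storage module. The resource group and storage account names are placeholders, and the SMB security settings article linked above documents the exact values; trim the lists to match your own hardening baseline.
+
+```powershell
+# Inspect the current file service properties; review the SMB protocol settings in the output
+Get-AzStorageFileServiceProperty -ResourceGroupName "<resource-group>" -StorageAccountName "<storage-account>"
+
+# Re-enable the protocol settings Azure File Sync requires
+Update-AzStorageFileServiceProperty -ResourceGroupName "<resource-group>" -StorageAccountName "<storage-account>" `
+    -SmbProtocolVersion "SMB2.1","SMB3.0","SMB3.1.1" `
+    -SmbAuthenticationMethod "NTLMv2","Kerberos" `
+    -SmbChannelEncryption "AES-128-CCM","AES-128-GCM","AES-256-GCM"
+```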
+
+<a id="cloud-endpoint-authfailed"></a>**Cloud endpoint creation fails, with this error: "AuthorizationFailed"**
+This error occurs if your user account doesn't have sufficient rights to create a cloud endpoint.
+
+To create a cloud endpoint, your user account must have the following Microsoft Authorization permissions:
+* Read: Get role definition
+* Write: Create or update custom role definition
+* Read: Get role assignment
+* Write: Create role assignment
+
+The following built-in roles have the required Microsoft Authorization permissions:
+* Owner
+* User Access Administrator
+
+To determine whether your user account role has the required permissions:
+1. In the Azure portal, select **Resource groups**.
+2. Select the resource group where the storage account is located, and then select **Access control (IAM)**.
+3. Select the **Role assignments** tab.
+4. Select the **Role** (for example, Owner or Contributor) for your user account.
+5. In the **Resource Provider** list, select **Microsoft Authorization**.
+ * **Role assignment** should have **Read** and **Write** permissions.
+ * **Role definition** should have **Read** and **Write** permissions.
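+
+As an alternative to checking in the portal, a quick sketch with Az PowerShell lists the signed-in user's role assignments on the resource group. This assumes you are signed in as a user account rather than a service principal.
+
+```powershell
+# List the caller's role assignments on the resource group that contains the storage account
+$signInName = (Get-AzContext).Account.Id
+Get-AzRoleAssignment -ResourceGroupName "<resource-group>" -SignInName $signInName |
+    Select-Object RoleDefinitionName, Scope
+```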
+
+<a id="cloud-endpoint-using-share"></a>**Cloud endpoint creation fails, with this error: "The specified Azure FileShare is already in use by a different CloudEndpoint"**
+This error occurs if the Azure file share is already in use by another cloud endpoint.
+
+If you see this message and the Azure file share currently is not in use by a cloud endpoint, complete the following steps to clear the Azure File Sync metadata on the Azure file share:
+
+> [!Warning]
+> Deleting the metadata on an Azure file share that is currently in use by a cloud endpoint causes Azure File Sync operations to fail. If you then use this file share for sync in a different sync group, data loss for files in the old sync group is almost certain.
+
+1. In the Azure portal, go to your Azure file share.  
+2. Right-click the Azure file share, and then select **Edit metadata**.
+3. Right-click **SyncService**, and then select **Delete**.
+
+## Server endpoint creation and deletion errors
+
+<a id="-2134375898"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134375898 or 0x80c80226)**
+This error occurs if the server endpoint path is on the system volume and cloud tiering is enabled. Cloud tiering is not supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.
+
+<a id="-2147024894"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2147024894 or 0x80070002)**
+This error occurs if the server endpoint path specified is not valid. Verify the server endpoint path specified is a locally attached NTFS volume. Note, Azure File Sync does not support mapped drives as a server endpoint path.
+
+<a id="-2134375640"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134375640 or 0x80c80328)**
+This error occurs if the server endpoint path specified is not an NTFS volume. Verify the server endpoint path specified is a locally attached NTFS volume. Note, Azure File Sync does not support mapped drives as a server endpoint path.
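+
+For either of the two errors above, a quick way to confirm the path sits on a locally attached NTFS volume is to check the volume from PowerShell (the drive letter is a placeholder):
+
+```powershell
+# A locally attached NTFS volume should report FileSystem "NTFS" and DriveType "Fixed"
+Get-Volume -DriveLetter D | Select-Object DriveLetter, FileSystem, DriveType
+```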
+
+<a id="-2134347507"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134347507 or 0x80c8710d)**
+This error occurs because Azure File Sync does not support server endpoints on volumes that have a compressed System Volume Information folder. To resolve this issue, decompress the System Volume Information folder. If the System Volume Information folder is the only folder compressed on the volume, perform the following steps:
+
+1. Download the [PsExec](/sysinternals/downloads/psexec) tool.
+2. Run the following command from an elevated command prompt to launch a command prompt running under the system account: **PsExec.exe -i -s -d cmd**
+3. From the command prompt running under the system account, type the following commands and hit enter:
+ **cd /d "drive letter:\System Volume Information"**
+ **compact /u /s**
+
+<a id="-2134376345"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134376345 or 0x80C80067)**
+This error occurs if the limit of server endpoints per server is reached. Azure File Sync currently supports up to 30 server endpoints per server. For more information, see
+[Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#azure-file-sync-scale-targets).
+
+<a id="-2134376427"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134376427 or 0x80c80015)**
+This error occurs if another server endpoint is already syncing the server endpoint path specified. Azure File Sync does not support multiple server endpoints syncing the same directory or volume.
+
+<a id="-2160590967"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2160590967 or 0x80c80077)**
+This error occurs if the server endpoint path contains orphaned tiered files. If a server endpoint was recently removed, wait until the orphaned tiered files cleanup has completed. An Event ID 6662 is logged to the Telemetry event log once the orphaned tiered files cleanup has started. An Event ID 6661 is logged once the orphaned tiered files cleanup has completed and a server endpoint can be recreated using the path. If the server endpoint creation fails after the tiered files cleanup has completed or if Event ID 6661 cannot be found in the Telemetry event log due to event log rollover, remove the orphaned tiered files by performing the steps documented in [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint).
+
+<a id="-2134347757"></a>**Server endpoint deletion fails, with this error: "MgmtServerJobExpired" (Error code: -2134347757 or 0x80c87013)**
+This error occurs if the server is offline or doesn't have network connectivity. If the server is no longer available, unregister the server in the portal, which will delete the server endpoints. To delete the server endpoints, follow the steps that are described in [Unregister a server with Azure File Sync](file-sync-server-registration.md#unregister-the-server-with-storage-sync-service).
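+
+If the portal is not an option, unregistration can also be sketched with the Az.StorageSync module. The names below are placeholders; verify the parameter set for your module version before running it.
+
+```powershell
+# Find the registered server by its friendly name, then unregister it.
+# Unregistering the server deletes its server endpoints.
+Get-AzStorageSyncServer -ResourceGroupName "<resource-group>" -StorageSyncServiceName "<storage-sync-service>" |
+    Where-Object { $_.FriendlyName -eq "<server-name>" } |
+    Unregister-AzStorageSyncServer -Force
+```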
+
+## Server endpoint health
+
+<a id="server-endpoint-provisioningfailed"></a>**Unable to open server endpoint properties page or update cloud tiering policy**
+This issue can occur if a management operation on the server endpoint fails. If the server endpoint properties page does not open in the Azure portal, updating the server endpoint by using PowerShell commands from the server may fix this issue.
+
+```powershell
+# Get the server endpoint id based on the server endpoint DisplayName property
+Get-AzStorageSyncServerEndpoint `
+ -ResourceGroupName myrgname `
+ -StorageSyncServiceName storagesvcname `
+ -SyncGroupName mysyncgroup | `
+Tee-Object -Variable serverEndpoint
+
+# Update the free space percent policy for the server endpoint
+Set-AzStorageSyncServerEndpoint `
+    -InputObject $serverEndpoint `
+ -CloudTiering `
+ -VolumeFreeSpacePercent 60
+```
+<a id="server-endpoint-noactivity"></a>**Server endpoint has a health status of "No Activity" or "Pending" and the server state on the registered servers blade is "Appears offline"**
+
+This issue can occur if the Storage Sync Monitor process (AzureStorageSyncMonitor.exe) is not running or the server is unable to access the Azure File Sync service.
+
+On the server that is showing as "Appears offline" in the portal, look at Event ID 9301 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer) to determine why the server is unable to access the Azure File Sync service.
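+
+To pull the most recent 9301 events without opening Event Viewer, a sketch like the following may help. The channel name `Microsoft-FileSync-Agent/Telemetry` is an assumption, so list the FileSync channels first to confirm it; the bullets below explain what the logged status values mean.
+
+```powershell
+# Confirm the name of the Azure File Sync telemetry channel on this server
+Get-WinEvent -ListLog "*FileSync*" | Select-Object LogName
+
+# Show the most recent Event ID 9301 entries (adjust the log name if it differs on your server)
+Get-WinEvent -LogName "Microsoft-FileSync-Agent/Telemetry" -MaxEvents 500 |
+    Where-Object { $_.Id -eq 9301 } |
+    Select-Object TimeCreated, Message -First 5
+```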
+
+- If **GetNextJob completed with status: 0** is logged, the server can communicate with the Azure File Sync service.
+ - Open Task Manager on the server and verify the Storage Sync Monitor (AzureStorageSyncMonitor.exe) process is running. If the process is not running, first try restarting the server. If restarting the server does not resolve the issue, upgrade to the latest Azure File Sync [agent version](file-sync-release-notes.md).
+
+- If **GetNextJob completed with status: -2134347756** is logged, the server is unable to communicate with the Azure File Sync service due to a firewall, proxy, or TLS cipher suite order configuration.
+ - If the server is behind a firewall, verify port 443 outbound is allowed. If the firewall restricts traffic to specific domains, confirm the domains listed in the Firewall [documentation](file-sync-firewall-and-proxy.md#firewall) are accessible.
+ - If the server is behind a proxy, configure the machine-wide or app-specific proxy settings by following the steps in the Proxy [documentation](file-sync-firewall-and-proxy.md#proxy).
+ - Use the Test-StorageSyncNetworkConnectivity cmdlet to check network connectivity to the service endpoints. To learn more, see [Test network connectivity to service endpoints](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints).
+ - If the TLS cipher suite order is configured on the server, you can use group policy or TLS cmdlets to add cipher suites:
+ - To use group policy, see [Configuring TLS Cipher Suite Order by using Group Policy](/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order-by-using-group-policy).
+ - To use TLS cmdlets, see [Configuring TLS Cipher Suite Order by using TLS PowerShell Cmdlets](/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order-by-using-tls-powershell-cmdlets).
+
+ Azure File Sync currently supports the following cipher suites for TLS 1.2 protocol:
+ - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+
+- If **GetNextJob completed with status: -2134347764** is logged, the server is unable to communicate with the Azure File Sync service due to an expired or deleted certificate.
+ - Run the following PowerShell command on the server to reset the certificate used for authentication:
+ ```powershell
+ Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
+ ```
+<a id="endpoint-noactivity-sync"></a>**Server endpoint has a health status of "No Activity" and the server state on the registered servers blade is "Online"**
+
+A server endpoint health status of "No Activity" means the server endpoint has not logged sync activity in the past two hours.
+
+To check current sync activity on a server, see [How do I monitor the progress of a current sync session?](file-sync-troubleshoot-sync-errors.md#how-do-i-monitor-the-progress-of-a-current-sync-session)
+
+A server endpoint may not log sync activity for several hours due to a bug or insufficient system resources. Verify the latest Azure File Sync [agent version](file-sync-release-notes.md) is installed. If the issue persists, open a support request.
+
+> [!Note]
+> If the server state on the registered servers blade is "Appears Offline," perform the steps documented in the [Server endpoint has a health status of "No Activity" or "Pending" and the server state on the registered servers blade is "Appears offline"](#server-endpoint-noactivity) section.
+
+## See also
+- [Troubleshoot Azure File Sync sync errors](file-sync-troubleshoot-sync-errors.md)
+- [Troubleshoot Azure File Sync agent installation and server registration](file-sync-troubleshoot-installation.md)
+- [Troubleshoot Azure File Sync cloud tiering](file-sync-troubleshoot-cloud-tiering.md)
+- [Monitor Azure File Sync](file-sync-monitoring.md)
+- [Troubleshoot Azure Files problems in Windows](../files/storage-troubleshoot-windows-file-connection-problems.md)
+- [Troubleshoot Azure Files problems in Linux](../files/storage-troubleshoot-linux-file-connection-problems.md)
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot.md
Title: Troubleshoot Azure File Sync | Microsoft Docs
-description: Troubleshoot common issues in a deployment on Azure File Sync, which you can use to transform Windows Server into a quick cache of your Azure file share.
+description: Troubleshoot common issues that you might encounter with Azure File Sync, which you can use to transform Windows Server into a quick cache of your Azure file share.
Previously updated : 6/2/2022 Last updated : 8/08/2022 # Troubleshoot Azure File Sync
-Use Azure File Sync to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms Windows Server into a quick cache of your Azure file share. You can use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.
-
-This article is designed to help you troubleshoot and resolve issues that you might encounter with your Azure File Sync deployment. We also describe how to collect important logs from the system if a deeper investigation of the issue is required. If you don't see the answer to your question, you can contact us through the following channels (in escalating order):
+You can use Azure File Sync to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. This article is designed to help you troubleshoot and resolve issues that you might encounter with your Azure File Sync deployment. We also describe how to collect important logs from the system if a deeper investigation of the issue is required. If you don't see the answer to your question, you can contact us through the following channels (in escalating order):
- [Microsoft Q&A question page for Azure Files](/answers/products/azure?product=storage). - [Azure Community Feedback](https://feedback.azure.com/d365community/forum/a8bb4a47-3525-ec11-b6e6-000d3a4f0f84?c=c860fa6b-3525-ec11-b6e6-000d3a4f0f84).
This article is designed to help you troubleshoot and resolve issues that you mi
## I'm having an issue with Azure File Sync on my server (sync, cloud tiering, etc.). Should I remove and recreate my server endpoint? [!INCLUDE [storage-sync-files-remove-server-endpoint](../../../includes/storage-sync-files-remove-server-endpoint.md)]
-## Agent installation and server registration
-<a id="agent-installation-failures"></a>**Troubleshoot agent installation failures**
-If the Azure File Sync agent installation fails, at an elevated command prompt, run the following command to turn on logging during agent installation:
-
-```
-StorageSyncAgent.msi /l*v AFSInstaller.log
-```
-
-Review installer.log to determine the cause of the installation failure.
-
-<a id="agent-installation-gpo"></a>**Agent installation fails with error: Storage Sync Agent Setup Wizard ended prematurely because of an error**
-
-In the agent installation log, the following error is logged:
-
-```
-CAQuietExec64: + CategoryInfo : SecurityError: (:) , PSSecurityException
-CAQuietExec64: + FullyQualifiedErrorId : UnauthorizedAccess
-CAQuietExec64: Error 0x80070001: Command line returned an error.
-```
-
-This issue occurs if the [PowerShell execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) is configured using group policy and the policy setting is "Allow only signed scripts." All scripts included with the Azure File Sync agent are signed. The Azure File Sync agent installation fails because the installer is performing the script execution using the Bypass execution policy setting.
-
-To resolve this issue, temporarily disable the [Turn on Script Execution](/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) group policy setting on the server. Once the agent installation completes, the group policy setting can be re-enabled.
-
-<a id="agent-installation-on-DC"></a>**Agent installation fails on Active Directory Domain Controller**
-If you try to install the sync agent on an Active Directory domain controller where the PDC role owner is on a Windows Server 2008 R2 or below OS version, you may hit the issue where the sync agent will fail to install.
-
-To resolve, transfer the PDC role to another domain controller running Windows Server 2012 R2 or more recent, then install sync.
-
-<a id="parameter-is-incorrect"></a>**Accessing a volume on Windows Server 2012 R2 fails with error: The parameter is incorrect**
-After creating a server endpoint on Windows Server 2012 R2, the following error occurs when accessing the volume:
-
-drive letter:\ is not accessible.
-The parameter is incorrect.
-
-To resolve this issue, install [KB2919355](https://support.microsoft.com/help/2919355/windows-rt-8-1-windows-8-1-windows-server-2012-r2-update-april-2014) and restart the server. If this update will not install because a later update is already installed, go to Windows Update, install the latest updates for Windows Server 2012 R2 and restart the server.
-
-<a id="server-registration-missing-subscriptions"></a>**Server Registration does not list all Azure Subscriptions**
-When registering a server using ServerRegistration.exe, subscriptions are missing when you click the Azure Subscription drop-down.
-
-This issue occurs because ServerRegistration.exe will only retrieve subscriptions from the first five Azure AD tenants.
-
-To increase the Server Registration tenant limit on the server, create a DWORD value called ServerRegistrationTenantLimit under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync with a value greater than 5.
-
-You can also work around this issue by using the following PowerShell commands to register the server:
-
-```powershell
-Connect-AzAccount -Subscription "<guid>" -Tenant "<guid>"
-Register-AzStorageSyncServer -ResourceGroupName "<your-resource-group-name>" -StorageSyncServiceName "<your-storage-sync-service-name>"
-```
-
-<a id="server-registration-prerequisites"></a>**Server Registration displays the following message: "Pre-requisites are missing"**
-This message appears if Az or AzureRM PowerShell module is not installed on PowerShell 5.1.
-
-> [!Note]
-> ServerRegistration.exe does not support PowerShell 6.x. You can use the Register-AzStorageSyncServer cmdlet on PowerShell 6.x to register the server.
-
-To install the Az or AzureRM module on PowerShell 5.1, perform the following steps:
-
-1. Type **powershell** from an elevated command prompt and hit enter.
-2. Install the latest Az or AzureRM module by following the documentation:
- - [Az module (requires .NET 4.7.2)](/powershell/azure/install-az-ps)
- - [AzureRM module](https://go.microsoft.com/fwlink/?linkid=856959)
-3. Run ServerRegistration.exe, and complete the wizard to register the server with a Storage Sync Service.
-
-<a id="server-already-registered"></a>**Server Registration displays the following message: "This server is already registered"**
-
-![A screenshot of the Server Registration dialog with the "server is already registered" error message](media/storage-sync-files-troubleshoot/server-registration-1.png)
-
-This message appears if the server was previously registered with a Storage Sync Service. To unregister the server from the current Storage Sync Service and then register with a new Storage Sync Service, complete the steps that are described in [Unregister a server with Azure File Sync](file-sync-server-registration.md#unregister-the-server-with-storage-sync-service).
-
-If the server is not listed under **Registered servers** in the Storage Sync Service, on the server that you want to unregister, run the following PowerShell commands:
-
-```powershell
-Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
-Reset-StorageSyncServer
-```
-
-> [!Note]
-> If the server is part of a cluster, you can use the optional *Reset-StorageSyncServer -CleanClusterRegistration* parameter to also remove the cluster registration.
-
-<a id="web-site-not-trusted"></a>**When I register a server, I see numerous "web site not trusted" responses. Why?**
-This issue occurs when the **Enhanced Internet Explorer Security** policy is enabled during server registration. For more information about how to correctly disable the **Enhanced Internet Explorer Security** policy, see [Prepare Windows Server to use with Azure File Sync](file-sync-deployment-guide.md#prepare-windows-server-to-use-with-azure-file-sync) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
-
-<a id="server-registration-missing"></a>**Server is not listed under registered servers in the Azure portal**
-If a server is not listed under **Registered servers** for a Storage Sync Service:
-1. Sign in to the server that you want to register.
-2. Open File Explorer, and then go to the Storage Sync Agent installation directory (the default location is C:\Program Files\Azure\StorageSyncAgent).
-3. Run ServerRegistration.exe, and complete the wizard to register the server with a Storage Sync Service.
-
-## Sync group management
-
-### Cloud endpoint creation errors
-
-<a id="cloud-endpoint-mgmtinternalerror"></a>**Cloud endpoint creation fails, with this error: "MgmtInternalError"**
-This error can occur if the Azure File Sync service cannot access the storage account due to SMB security settings. To enable Azure File Sync to access the storage account, the SMB security settings on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
-
-<a id="cloud-endpoint-authfailed"></a>**Cloud endpoint creation fails, with this error: "AuthorizationFailed"**
-This error occurs if your user account doesn't have sufficient rights to create a cloud endpoint.
-
-To create a cloud endpoint, your user account must have the following Microsoft Authorization permissions:
-* Read: Get role definition
-* Write: Create or update custom role definition
-* Read: Get role assignment
-* Write: Create role assignment
-
-The following built-in roles have the required Microsoft Authorization permissions:
-* Owner
-* User Access Administrator
-
-To determine whether your user account role has the required permissions:
-1. In the Azure portal, select **Resource groups**.
-2. Select the resource group where the storage account is located, and then select **Access control (IAM)**.
-3. Select the **Role assignments** tab.
-4. Select the **Role** (for example, Owner or Contributor) for your user account.
-5. In the **Resource Provider** list, select **Microsoft Authorization**.
- * **Role assignment** should have **Read** and **Write** permissions.
- * **Role definition** should have **Read** and **Write** permissions.
-
-<a id="cloud-endpoint-using-share"></a>**Cloud endpoint creation fails, with this error: "The specified Azure FileShare is already in use by a different CloudEndpoint"**
-This error occurs if the Azure file share is already in use by another cloud endpoint.
-
-If you see this message and the Azure file share currently is not in use by a cloud endpoint, complete the following steps to clear the Azure File Sync metadata on the Azure file share:
-
-> [!Warning]
-> Deleting the metadata on an Azure file share that is currently in use by a cloud endpoint causes Azure File Sync operations to fail. If you then use this file share for sync in a different sync group, data loss for files in the old sync group is almost certain.
-
-1. In the Azure portal, go to your Azure file share.  
-2. Right-click the Azure file share, and then select **Edit metadata**.
-3. Right-click **SyncService**, and then select **Delete**.
-
-### Server endpoint creation and deletion errors
-
-<a id="-2134375898"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134375898 or 0x80c80226)**
-This error occurs if the server endpoint path is on the system volume and cloud tiering is enabled. Cloud tiering is not supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.
-
-<a id="-2147024894"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2147024894 or 0x80070002)**
-This error occurs if the server endpoint path specified is not valid. Verify the server endpoint path specified is a locally attached NTFS volume. Note, Azure File Sync does not support mapped drives as a server endpoint path.
-
-<a id="-2134375640"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134375640 or 0x80c80328)**
-This error occurs if the server endpoint path specified is not an NTFS volume. Verify the server endpoint path specified is a locally attached NTFS volume. Note, Azure File Sync does not support mapped drives as a server endpoint path.
-
-<a id="-2134347507"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134347507 or 0x80c8710d)**
-This error occurs because Azure File Sync does not support server endpoints on volumes, which have a compressed System Volume Information folder. To resolve this issue, decompress the System Volume Information folder. If the System Volume Information folder is the only folder compressed on the volume, perform the following steps:
-
-1. Download [PsExec](/sysinternals/downloads/psexec) tool.
-2. Run the following command from an elevated command prompt to launch a command prompt running under the system account: **PsExec.exe -i -s -d cmd**
-3. From the command prompt running under the system account, type the following commands and hit enter:
- **cd /d "drive letter:\System Volume Information"**
- **compact /u /s**
-
-<a id="-2134376345"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134376345 or 0x80C80067)**
-This error occurs if the limit of server endpoints per server is reached. Azure File Sync currently supports up to 30 server endpoints per server. For more information, see
-[Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#azure-file-sync-scale-targets).
-
-<a id="-2134376427"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134376427 or 0x80c80015)**
-This error occurs if another server endpoint is already syncing the server endpoint path specified. Azure File Sync does not support multiple server endpoints syncing the same directory or volume.
-
-<a id="-2160590967"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2160590967 or 0x80c80077)**
-This error occurs if the server endpoint path contains orphaned tiered files. If a server endpoint was recently removed, wait until the orphaned tiered files cleanup has completed. An Event ID 6662 is logged to the Telemetry event log once the orphaned tiered files cleanup has started. An Event ID 6661 is logged once the orphaned tiered files cleanup has completed and a server endpoint can be recreated using the path. If the server endpoint creation fails after the tiered files cleanup has completed or if Event ID 6661 cannot be found in the Telemetry event log due to event log rollover, remove the orphaned tiered files by performing the steps documented in the [Tiered files are not accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) section.
-
-<a id="-2134347757"></a>**Server endpoint deletion fails, with this error: "MgmtServerJobExpired" (Error code: -2134347757 or 0x80c87013)**
-This error occurs if the server is offline or doesn't have network connectivity. If the server is no longer available, unregister the server in the portal, which will delete the server endpoints. To delete the server endpoints, follow the steps that are described in [Unregister a server with Azure File Sync](file-sync-server-registration.md#unregister-the-server-with-storage-sync-service).
-
-### Server endpoint health
-
-<a id="server-endpoint-provisioningfailed"></a>**Unable to open server endpoint properties page or update cloud tiering policy**
-This issue can occur if a management operation on the server endpoint fails. If the server endpoint properties page does not open in the Azure portal, updating server endpoint using PowerShell commands from the server may fix this issue.
-
-```powershell
-# Get the server endpoint id based on the server endpoint DisplayName property
-Get-AzStorageSyncServerEndpoint `
- -ResourceGroupName myrgname `
- -StorageSyncServiceName storagesvcname `
- -SyncGroupName mysyncgroup | `
-Tee-Object -Variable serverEndpoint
-
-# Update the free space percent policy for the server endpoint
-Set-AzStorageSyncServerEndpoint `
- -InputObject $serverEndpoint
- -CloudTiering `
- -VolumeFreeSpacePercent 60
-```
-<a id="server-endpoint-noactivity"></a>**Server endpoint has a health status of "No Activity" or "Pending" and the server state on the registered servers blade is "Appears offline"**
-
-This issue can occur if the Storage Sync Monitor process (AzureStorageSyncMonitor.exe) is not running or the server is unable to access the Azure File Sync service.
-
-On the server that is showing as "Appears offline" in the portal, look at Event ID 9301 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer) to determine why the server is unable to access the Azure File Sync service.
--- If **GetNextJob completed with status: 0** is logged, the server can communicate with the Azure File Sync service.
- - Open Task Manager on the server and verify the Storage Sync Monitor (AzureStorageSyncMonitor.exe) process is running. If the process is not running, first try restarting the server. If restarting the server does not resolve the issue, upgrade to the latest Azure File Sync [agent version](file-sync-release-notes.md).
--- If **GetNextJob completed with status: -2134347756** is logged, the server is unable to communicate with the Azure File Sync service due to a firewall, proxy, or TLS cipher suite order configuration.
- - If the server is behind a firewall, verify port 443 outbound is allowed. If the firewall restricts traffic to specific domains, confirm the domains listed in the Firewall [documentation](file-sync-firewall-and-proxy.md#firewall) are accessible.
- - If the server is behind a proxy, configure the machine-wide or app-specific proxy settings by following the steps in the Proxy [documentation](file-sync-firewall-and-proxy.md#proxy).
- - Use the Test-StorageSyncNetworkConnectivity cmdlet to check network connectivity to the service endpoints. To learn more, see [Test network connectivity to service endpoints](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints).
- - If the TLS cipher suite order is configured on the server, you can use group policy or TLS cmdlets to add cipher suites:
- - To use group policy, see [Configuring TLS Cipher Suite Order by using Group Policy](/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order-by-using-group-policy).
- - To use TLS cmdlets, see [Configuring TLS Cipher Suite Order by using TLS PowerShell Cmdlets](/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order-by-using-tls-powershell-cmdlets).
-
- Azure File Sync currently supports the following cipher suites for TLS 1.2 protocol:
- - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
- - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
--- If **GetNextJob completed with status: -2134347764** is logged, the server is unable to communicate with the Azure File Sync service due to an expired or deleted certificate.
- - Run the following PowerShell command on the server to reset the certificate used for authentication:
- ```powershell
- Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
- ```
-<a id="endpoint-noactivity-sync"></a>**Server endpoint has a health status of "No Activity" and the server state on the registered servers blade is "Online"**
-
-A server endpoint health status of "No Activity" means the server endpoint has not logged sync activity in the past two hours.
-
-To check current sync activity on a server, see [How do I monitor the progress of a current sync session?](#how-do-i-monitor-the-progress-of-a-current-sync-session).
-
-A server endpoint may not log sync activity for several hours due to a bug or insufficient system resources. Verify the latest Azure File Sync [agent version](file-sync-release-notes.md) is installed. If the issue persists, open a support request.
-
-> [!Note]
-> If the server state on the registered servers blade is "Appears Offline," perform the steps documented in the [Server endpoint has a health status of "No Activity" or "Pending" and the server state on the registered servers blade is "Appears offline"](#server-endpoint-noactivity) section.
-
-## Sync
-<a id="afs-change-detection"></a>**If I created a file directly in my Azure file share over SMB or through the portal, how long does it take for the file to sync to servers in the sync group?**
-
-<a id="serverendpoint-pending"></a>**Server endpoint health is in a pending state for several hours**
-This issue is expected if you create a cloud endpoint and use an Azure file share that contains data. The change enumeration job that scans for changes in the Azure file share must complete before files can sync between the cloud and server endpoints. The time to complete the job is dependent on the size of the namespace in the Azure file share. The server endpoint health should update once the change enumeration job completes.
-
-### <a id="broken-sync"></a>How do I monitor sync health?
-# [Portal](#tab/portal1)
-Within each sync group, you can drill down into its individual server endpoints to see the status of the last completed sync sessions. A green Health column and a Files Not Syncing value of 0 indicate that sync is working as expected. If not, see below for a list of common sync errors and how to handle files that are not syncing.
-
-![A screenshot of the Azure portal](media/storage-sync-files-troubleshoot/portal-sync-health.png)
-
-# [Server](#tab/server)
-Go to the server's telemetry logs, which can be found in the Event Viewer at `Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry`. Event 9102 corresponds to a completed sync session; for the latest status of sync, look for the most recent event with ID 9102. SyncDirection tells you if this session was an upload or download. If the `HResult` is 0, then the sync session was successful. A non-zero `HResult` means that there was an error during sync; see below for a list of common errors. If the PerItemErrorCount is greater than 0, then some files or folders did not sync properly. It is possible to have an `HResult` of 0 but a PerItemErrorCount that is greater than 0.
-
-Below is an example of a successful upload. For the sake of brevity, only some of the values contained in each 9102 event are listed below.
-
-```
-Replica Sync session completed.
-SyncDirection: Upload,
-HResult: 0,
-SyncFileCount: 2, SyncDirectoryCount: 0,
-AppliedFileCount: 2, AppliedDirCount: 0, AppliedTombstoneCount 0, AppliedSizeBytes: 0.
-PerItemErrorCount: 0,
-TransferredFiles: 2, TransferredBytes: 0, FailedToTransferFiles: 0, FailedToTransferBytes: 0.
-```
-
-Conversely, an unsuccessful upload might look like this:
-
-```
-Replica Sync session completed.
-SyncDirection: Upload,
-HResult: -2134364065,
-SyncFileCount: 0, SyncDirectoryCount: 0,
-AppliedFileCount: 0, AppliedDirCount: 0, AppliedTombstoneCount 0, AppliedSizeBytes: 0.
-PerItemErrorCount: 0,
-TransferredFiles: 0, TransferredBytes: 0, FailedToTransferFiles: 0, FailedToTransferBytes: 0.
-```
-
-Sometimes sync sessions fail overall or have a non-zero PerItemErrorCount but still make forward progress, with some files syncing successfully. Progress can be determined by looking into the *Applied* fields (AppliedFileCount, AppliedDirCount, AppliedTombstoneCount, and AppliedSizeBytes). These fields describe how much of the session is succeeding. If you see multiple sync sessions in a row that are failing but have an increasing *Applied* count, then you should give sync time to try again before opening a support ticket.
---
-### How do I monitor the progress of a current sync session?
-# [Portal](#tab/portal1)
-Within your sync group, go to the server endpoint in question and look at the Sync Activity section to see the count of files uploaded or downloaded in the current sync session. Keep in mind that this status will be delayed by about 5 minutes, and if your sync session is small enough to be completed within this period, it may not be reported in the portal.
-
-# [Server](#tab/server)
-Look at the most recent 9302 event in the telemetry log on the server (in the Event Viewer, go to Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry). This event indicates the state of the current sync session. TotalItemCount denotes how many files are to be synced, AppliedItemCount the number of files that have been synced so far, and PerItemErrorCount the number of files that are failing to sync (see below for how to deal with this).
-
-```
-Replica Sync Progress.
-ServerEndpointName: <CI>sename</CI>, SyncGroupName: <CI>sgname</CI>, ReplicaName: <CI>rname</CI>,
-SyncDirection: Upload, CorrelationId: {AB4BA07D-5B5C-461D-AAE6-4ED724762B65}.
-AppliedItemCount: 172473, TotalItemCount: 624196. AppliedBytes: 51473711577,
-TotalBytes: 293363829906.
-AreTotalCountsFinal: true.
-PerItemErrorCount: 1006.
-```
--
-### How do I know if my servers are in sync with each other?
-# [Portal](#tab/portal1)
-For each server in a given sync group, make sure:
-- The timestamps for the Last Attempted Sync for both upload and download are recent.-- The status is green for both upload and download.-- The Sync Activity field shows very few or no files remaining to sync.-- The Files Not Syncing field is 0 for both upload and download.-
-# [Server](#tab/server)
-Look at the completed sync sessions, which are marked by 9102 events in the telemetry event log for each server (in the Event Viewer, go to `Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry`).
-
-1. On any given server, you want to make sure the latest upload and download sessions completed successfully. To do this, check that the `HResult` and PerItemErrorCount are 0 for both upload and download (the SyncDirection field indicates if a given session is an upload or download session). Note that if you do not see a recently completed sync session, it is likely a sync session is currently in progress, which is to be expected if you just added or modified a large amount of data.
-2. When a server is fully up to date with the cloud and has no changes to sync in either direction, you will see empty sync sessions. These are indicated by upload and download events in which all the Sync* fields (SyncFileCount, SyncDirCount, SyncTombstoneCount, and SyncSizeBytes) are zero, meaning there was nothing to sync. Note that these empty sync sessions may not occur on high-churn servers as there is always something new to sync. If there is no sync activity, they should occur every 30 minutes.
-3. If all servers are up to date with the cloud, meaning their recent upload and download sessions are empty sync sessions, you can say with reasonable certainty that the system as a whole is in sync.
-
-If you made changes directly in your Azure file share, Azure File Sync will not detect these changes until change enumeration runs, which happens once every 24 hours. It is possible that a server will say it is up to date with the cloud when it is in fact missing recent changes made directly in the Azure file share.
---
-### How do I see if there are specific files or folders that are not syncing?
-If your PerItemErrorCount on the server or Files Not Syncing count in the portal are greater than 0 for any given sync session, that means some items are failing to sync. Files and folders can have characteristics that prevent them from syncing. These characteristics can be persistent and require explicit action to resume sync, for example removing unsupported characters from the file or folder name. They can also be transient, meaning the file or folder will automatically resume sync; for example, files with open handles will automatically resume sync when the file is closed. When the Azure File Sync engine detects such a problem, an error log is produced that can be parsed to list the items currently not syncing properly.
-
-To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (located in the agent installation directory of the Azure File Sync agent) to identify files that failed to sync because of open handles, unsupported characters, or other issues. The ItemPath field tells you the location of the file in relation to the root sync directory. See the list of common sync errors below for remediation steps.
-
-> [!Note]
-> If the FileSyncErrorsReport.ps1 script returns "There were no file errors found" or does not list per-item errors for the sync group, the cause is either:
->
->- Cause 1: The last completed sync session did not have per-item errors. The portal should be updated soon to show 0 Files Not Syncing. By default, the FileSyncErrorsReport.ps1 script will only show per-item errors for the last completed sync session. To view per-item errors for all sync sessions, use the -ReportAllErrors parameter.
-> - Check the most recent [Event ID 9102](?tabs=server%252cazure-portal#broken-sync) in the Telemetry event log to confirm the PerItemErrorCount is 0.
->
->- Cause 2: The ItemResults event log on the server wrapped due to too many per-item errors and the event log no longer contains errors for this sync group.
-> - To prevent this issue, increase the ItemResults event log size. The ItemResults event log can be found under "Applications and Services Logs\Microsoft\FileSync\Agent" in Event Viewer.
-
-#### Troubleshooting per file/directory sync errors
-**ItemResults log - per-item sync errors**
-
-| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation |
-||-|--|-|-|
-| 0x80070043 | -2147942467 | ERROR_BAD_NET_NAME | The tiered file on the server is not accessible. This issue occurs if the tiered file was not recalled prior to deleting a server endpoint. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
-| 0x80c80207 | -2134375929 | ECS_E_SYNC_CONSTRAINT_CONFLICT | The file or directory change cannot be synced yet because a dependent folder is not yet synced. This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder is not yet synced. |
-| 0x80C8028A | -2134375798 | ECS_E_SYNC_CONSTRAINT_CONFLICT_ON_FAILED_DEPENDEE | The file or directory change cannot be synced yet because a dependent folder is not yet synced. This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder is not yet synced. |
-| 0x80c80284 | -2134375804 | ECS_E_SYNC_CONSTRAINT_CONFLICT_SESSION_FAILED | The file or directory change cannot be synced yet because a dependent folder is not yet synced and the sync session failed. This item will sync after the dependent changes are synced. | No action required. If the error persists, investigate the sync session failure. |
-| 0x8007007b | -2147024773 | ERROR_INVALID_NAME | The file or directory name is invalid. | Rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
-| 0x80c80255 | -2134375851 | ECS_E_XSMB_REST_INCOMPATIBILITY | The file or directory name is invalid. | Rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
-| 0x80c80018 | -2134376424 | ECS_E_SYNC_FILE_IN_USE | The file cannot be synced because it's in use. The file will be synced when it's no longer in use. | No action required. Azure File Sync creates a temporary VSS snapshot once a day on the server to sync files that have open handles. |
-| 0x80c8031d | -2134375651 | ECS_E_CONCURRENCY_CHECK_FAILED | The file has changed, but the change has not yet been detected by sync. Sync will recover after this change is detected. | No action required. |
-| 0x80070002 | -2147024894 | ERROR_FILE_NOT_FOUND | The file was deleted and sync is not aware of the change. | No action required. Sync will stop logging this error once change detection detects the file was deleted. |
-| 0x80070003 | -2147024893 | ERROR_PATH_NOT_FOUND | Deletion of a file or directory cannot be synced because the item was already deleted in the destination and sync is not aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync detects the item was deleted. |
-| 0x80c80205 | -2134375931 | ECS_E_SYNC_ITEM_SKIP | The file or directory was skipped but will be synced during the next sync session. If this error is reported when downloading the item, the file or directory name is more than likely invalid. | No action required if this error is reported when uploading the file. If the error is reported when downloading the file, rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
-| 0x800700B7 | -2147024713 | ERROR_ALREADY_EXISTS | Creation of a file or directory cannot be synced because the item already exists in the destination and sync is not aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync is aware of this new item. |
-| 0x80c8603e | -2134351810 | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED | The file cannot be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. |
-| 0x80c83008 | -2134364152 | ECS_E_CANNOT_CREATE_AZURE_STAGED_FILE | The file cannot be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. |
-| 0x80c8027C | -2134375812 | ECS_E_ACCESS_DENIED_EFS | The file is encrypted by an unsupported solution (like NTFS EFS). | Decrypt the file and use a supported encryption solution. For a list of support solutions, see the [Encryption](file-sync-planning.md#encryption) section of the planning guide. |
-| 0x80c80283 | -2134375805 | ECS_E_ACCESS_DENIED_DFSRRO | The file is located on a DFS-R read-only replication folder. | Azure File Sync does not support server endpoints on DFS-R read-only replication folders. See the [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
-| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file has a delete pending state. | No action required. File will be deleted once all open file handles are closed. |
-| 0x80c86044 | -2134351804 | ECS_E_AZURE_AUTHORIZATION_FAILED | The file cannot be synced because the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | Add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
-| 0x80c80243 | -2134375869 | ECS_E_SECURITY_DESCRIPTOR_SIZE_TOO_LARGE | The file cannot be synced because the security descriptor size exceeds the 64 KiB limit. | To resolve this issue, remove access control entries (ACE) on the file to reduce the security descriptor size. |
-| 0x8000ffff | -2147418113 | E_UNEXPECTED | The file cannot be synced due to an unexpected error. | If the error persists for several days, please open a support case. |
-| 0x80070020 | -2147024864 | ERROR_SHARING_VIOLATION | The file cannot be synced because it's in use. The file will be synced when it's no longer in use. | No action required. |
-| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file was changed during sync, so it needs to be synced again. | No action required. |
-| 0x80070017 | -2147024873 | ERROR_CRC | The file cannot be synced due to CRC error. This error can occur if a tiered file was not recalled prior to deleting a server endpoint or if the file is corrupt. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) to remove tiered files that are orphaned. If the error continues to occur after removing orphaned tiered files, run [chkdsk](/windows-server/administration/windows-commands/chkdsk) on the volume. |
-| 0x80c80200 | -2134375936 | ECS_E_SYNC_CONFLICT_NAME_EXISTS | The file cannot be synced because the maximum number of conflict files has been reached. Azure File Sync supports 100 conflict files per file. To learn more about file conflicts, see Azure File Sync [FAQ](../files/storage-files-faq.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json#afs-conflict-resolution). | To resolve this issue, reduce the number of conflict files. The file will sync once the number of conflict files is less than 100. |
-| 0x80c8027d | -2134375811 | ECS_E_DIRECTORY_RENAME_FAILED | Rename of a directory cannot be synced because files or folders within the directory have open handles. | No action required. The rename of the directory will be synced once all open file handles within the directory are closed. |
-| 0x800700de | -2147024674 | ERROR_BAD_FILE_TYPE | The tiered file on the server is not accessible because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
-
-#### Handling unsupported characters
-If the **FileSyncErrorsReport.ps1** PowerShell script shows per-item sync errors due to unsupported characters (error code 0x8007007b or 0x80c80255), remove the unsupported characters from the affected file or directory names, or rename the items. PowerShell will likely print these characters as question marks or empty rectangles because most of them have no standard visual representation.
-> [!Note]
-> The [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) can be used to identify characters that are not supported. If your dataset has several files with invalid characters, use the [ScanUnsupportedChars](https://github.com/Azure-Samples/azure-files-samples/tree/master/ScanUnsupportedChars) script to rename files which contain unsupported characters.
-
-The table below contains all of the Unicode characters Azure File Sync does not yet support.
-
-| Character set | Character count |
-||--|
-| 0x00000000 - 0x0000001F (control characters) | 32 |
-| 0x0000FDD0 - 0x0000FDDD (arabic presentation forms-a) | 14 |
-| <ul><li>0x00000022 (quotation mark)</li><li>0x0000002A (asterisk)</li><li>0x0000002F (forward slash)</li><li>0x0000003A (colon)</li><li>0x0000003C (less than)</li><li>0x0000003E (greater than)</li><li>0x0000003F (question mark)</li><li>0x0000005C (backslash)</li><li>0x0000007C (pipe or bar)</li></ul> | 9 |
-| <ul><li>0x0004FFFE - 0x0004FFFF = 2 (noncharacter)</li><li>0x0008FFFE - 0x0008FFFF = 2 (noncharacter)</li><li>0x000CFFFE - 0x000CFFFF = 2 (noncharacter)</li><li>0x0010FFFE - 0x0010FFFF = 2 (noncharacter)</li></ul> | 8 |
-| <ul><li>0x0000009D (`osc` operating system command)</li><li>0x00000090 (dcs device control string)</li><li>0x0000008F (ss3 single shift three)</li><li>0x00000081 (high octet preset)</li><li>0x0000007F (del delete)</li><li>0x0000008D (ri reverse line feed)</li></ul> | 6 |
-| 0x0000FFF0, 0x0000FFFD, 0x0000FFFE, 0x0000FFFF (specials) | 4 |
-| Files or directories that end with a period | 1 |
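-
-If you want a quick way to spot affected names before running the ScanUnsupportedChars script mentioned above, the following rough sketch (not the official script, limited to the characters in this table that fit in a single UTF-16 code unit, and using a placeholder path) lists items whose names contain an unsupported character:
-
-```powershell
-# Placeholder server endpoint path; adjust to your environment.
-$path = "D:\ServerEndpoint"
-
-# Unsupported characters from the table above (BMP code points only) plus names ending with a period.
-$pattern = '[\u0000-\u001F\u007F\u0081\u008D\u008F\u0090\u009D"\*/:<>?\\|\uFFF0\uFFFD\uFFFE\uFFFF]|[\uFDD0-\uFDDD]|\.$'
-
-Get-ChildItem -Path $path -Recurse -Force |
-    Where-Object { $_.Name -match $pattern } |
-    Select-Object -ExpandProperty FullName
-```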
-
-### Common sync errors
-<a id="-2147023673"></a>**The sync session was canceled.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x800704c7 |
-| **HRESULT (decimal)** | -2147023673 |
-| **Error string** | ERROR_CANCELLED |
-| **Remediation required** | No |
-
-Sync sessions may fail for various reasons including the server being restarted or updated, VSS snapshots, etc. Although this error looks like it requires follow-up, it is safe to ignore this error unless it persists over a period of several hours.
-
-<a id="-2147012889"></a>**A connection with the service could not be established.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80072ee7 |
-| **HRESULT (decimal)** | -2147012889 |
-| **Error string** | WININET_E_NAME_NOT_RESOLVED |
-| **Remediation required** | Yes |
--
-> [!Note]
-> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
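-
-For example, to force a sync session after connectivity is restored, you can restart the agent service from an elevated PowerShell session:
-
-```powershell
-Restart-Service -Name FileSyncSvc -Force
-```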
-
-<a id="-2134376372"></a>**The user request was throttled by the service.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8004c |
-| **HRESULT (decimal)** | -2134376372 |
-| **Error string** | ECS_E_USER_REQUEST_THROTTLED |
-| **Remediation required** | No |
-
-No action is required; the server will try again. If this error persists for several hours, create a support request.
-
-<a id="-2134364160"></a>**Sync failed because the operation was aborted**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c83000 |
-| **HRESULT (decimal)** | -2134364160 |
-| **Error string** | ECS_E_OPERATION_ABORTED |
-| **Remediation required** | No |
-
-No action is required. If this error persists for several hours, create a support request.
-
-<a id="-2134364043"></a>**Sync is blocked until change detection completes post restore**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c83075 |
-| **HRESULT (decimal)** | -2134364043 |
-| **Error string** | ECS_E_SYNC_BLOCKED_ON_CHANGE_DETECTION_POST_RESTORE |
-| **Remediation required** | No |
-
-No action is required. When a file or file share (cloud endpoint) is restored using Azure Backup, sync is blocked until change detection completes on the Azure file share. Change detection runs immediately once the restore is complete and the duration is based on the number of files in the file share.
-
-<a id="-2147216747"></a>**Sync failed because the sync database was unloaded.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80041295 |
-| **HRESULT (decimal)** | -2147216747 |
-| **Error string** | SYNC_E_METADATA_INVALID_OPERATION |
-| **Remediation required** | No |
-
-This error typically occurs when a backup application creates a VSS snapshot and the sync database is unloaded. If this error persists for several hours, create a support request.
-
-<a id="-2134364065"></a>**Sync can't access the Azure file share specified in the cloud endpoint.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8305f |
-| **HRESULT (decimal)** | -2134364065 |
-| **Error string** | ECS_E_EXTERNAL_STORAGE_ACCOUNT_AUTHORIZATION_FAILED |
-| **Remediation required** | Yes |
-
-This error occurs because the Azure File Sync agent cannot access the Azure file share, which may be because the Azure file share or the storage account hosting it no longer exists. You can troubleshoot this error by working through the following steps:
-
-1. [Verify the storage account exists.](#troubleshoot-storage-account)
-2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
-3. [Ensure Azure File Sync has access to the storage account.](#troubleshoot-rbac)
-4. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
-
-<a id="-2134351804"></a>**Sync failed because the request is not authorized to perform this operation.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c86044 |
-| **HRESULT (decimal)** | -2134351804 |
-| **Error string** | ECS_E_AZURE_AUTHORIZATION_FAILED |
-| **Remediation required** | Yes |
-
-This error occurs because the Azure File Sync agent is not authorized to access the Azure file share. You can troubleshoot this error by working through the following steps:
-
-1. [Verify the storage account exists.](#troubleshoot-storage-account)
-2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
-3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
-4. [Ensure Azure File Sync has access to the storage account.](#troubleshoot-rbac)
-
-<a id="-2134364064"></a><a id="cannot-resolve-storage"></a>**The storage account name used could not be resolved.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80C83060 |
-| **HRESULT (decimal)** | -2134364064 |
-| **Error string** | ECS_E_STORAGE_ACCOUNT_NAME_UNRESOLVED |
-| **Remediation required** | Yes |
-
-1. Check that you can resolve the storage DNS name from the server.
-
- ```powershell
- Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 443
- ```
-2. [Verify the storage account exists.](#troubleshoot-storage-account)
-3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
-
-> [!Note]
-> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
-
-<a id="-2134364022"></a><a id="storage-unknown-error"></a>**An unknown error occurred while accessing the storage account.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8308a |
-| **HRESULT (decimal)** | -2134364022 |
-| **Error string** | ECS_E_STORAGE_ACCOUNT_UNKNOWN_ERROR |
-| **Remediation required** | Yes |
-
-1. [Verify the storage account exists.](#troubleshoot-storage-account)
-2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
-
-<a id="-2134364014"></a>**Sync failed due to storage account locked.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c83092 |
-| **HRESULT (decimal)** | -2134364014 |
-| **Error string** | ECS_E_STORAGE_ACCOUNT_LOCKED |
-| **Remediation required** | Yes |
-
-This error occurs because the storage account has a read-only [resource lock](../../azure-resource-manager/management/lock-resources.md). To resolve this issue, remove the read-only resource lock on the storage account.
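-
-As a rough sketch using the Az PowerShell module (resource names are placeholders), you can list the locks on the storage account and then remove the read-only lock:
-
-```powershell
-# List locks applied to the storage account (placeholder names).
-Get-AzResourceLock -ResourceGroupName "<RG_Name>" -ResourceName "<storage-account-name>" `
-    -ResourceType "Microsoft.Storage/storageAccounts"
-
-# Remove the read-only lock by name.
-Remove-AzResourceLock -LockName "<lock-name>" -ResourceGroupName "<RG_Name>" `
-    -ResourceName "<storage-account-name>" -ResourceType "Microsoft.Storage/storageAccounts"
-```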
-
-<a id="-1906441138"></a>**Sync failed due to a problem with the sync database.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x8e5e044e |
-| **HRESULT (decimal)** | -1906441138 |
-| **Error string** | JET_errWriteConflict |
-| **Remediation required** | Yes |
-
-This error occurs when there is a problem with the internal database used by Azure File Sync. When this issue occurs, create a support request and we will contact you to help you resolve this issue.
-
-<a id="-2134364053"></a>**The Azure File Sync agent version installed on the server is not supported.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80C8306B |
-| **HRESULT (decimal)** | -2134364053 |
-| **Error string** | ECS_E_AGENT_VERSION_BLOCKED |
-| **Remediation required** | Yes |
-
-This error occurs if the Azure File Sync agent version installed on the server is not supported. To resolve this issue, [upgrade](file-sync-release-notes.md#azure-file-sync-agent-update-policy) to a [supported agent version](file-sync-release-notes.md#supported-versions).
-
-<a id="-2134351810"></a>**You reached the Azure file share storage limit.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8603e |
-| **HRESULT (decimal)** | -2134351810 |
-| **Error string** | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED |
-| **Remediation required** | Yes |
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c80249 |
-| **HRESULT (decimal)** | -2134375863 |
-| **Error string** | ECS_E_NOT_ENOUGH_REMOTE_STORAGE |
-| **Remediation required** | Yes |
-
-Sync sessions fail with either of these errors when the Azure file share storage limit has been reached, which can happen if a quota is applied for an Azure file share or if the usage exceeds the limits for an Azure file share. For more information, see the [current limits for an Azure file share](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
-
-1. Navigate to the sync group within the Storage Sync Service.
-2. Select the cloud endpoint within the sync group.
-3. Note the Azure file share name in the opened pane.
-4. Select the linked storage account. If this link fails, the referenced storage account has been removed.
-
- ![A screenshot showing the cloud endpoint detail pane with a link to the storage account.](media/storage-sync-files-troubleshoot/file-share-inaccessible-1.png)
-
-5. Select **Files** to view the list of file shares.
-6. Click the three dots at the end of the row for the Azure file share referenced by the cloud endpoint.
-7. Verify that the **Usage** is below the **Quota**. Note that unless an alternate quota has been specified, the quota will match the [maximum size of the Azure file share](../files/storage-files-scale-targets.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
-
- ![A screenshot of the Azure file share properties.](media/storage-sync-files-troubleshoot/file-share-limit-reached-1.png)
-
-If the share is full and a quota is not set, one possible way to fix this issue is to make each subfolder of the current server endpoint into its own server endpoint in its own separate sync group. This way, each subfolder syncs to an individual Azure file share.
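-
-If you prefer PowerShell over the portal steps above, a minimal check of the share quota and usage could look like the following sketch (resource names are placeholders; output property names may vary slightly across Az.Storage versions):
-
-```powershell
-# Check quota (GiB) and current usage (bytes) for the Azure file share backing the cloud endpoint.
-Get-AzRmStorageShare -ResourceGroupName "<RG_Name>" -StorageAccountName "<storage-account-name>" `
-    -Name "<file-share-name>" -GetShareUsage |
-    Select-Object Name, QuotaGiB, ShareUsageBytes
-```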
-
-<a id="-2134351824"></a>**The Azure file share cannot be found.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c86030 |
-| **HRESULT (decimal)** | -2134351824 |
-| **Error string** | ECS_E_AZURE_FILE_SHARE_NOT_FOUND |
-| **Remediation required** | Yes |
-
-This error occurs when the Azure file share is not accessible. To troubleshoot:
-
-1. [Verify the storage account exists.](#troubleshoot-storage-account)
-2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
-
-If the Azure file share was deleted, you need to create a new file share and then recreate the sync group.
-
-<a id="-2134364042"></a>**Sync is paused while this Azure subscription is suspended.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80C83076 |
-| **HRESULT (decimal)** | -2134364042 |
-| **Error string** | ECS_E_SYNC_BLOCKED_ON_SUSPENDED_SUBSCRIPTION |
-| **Remediation required** | Yes |
-
-This error occurs when the Azure subscription is suspended. Sync will be reenabled when the Azure subscription is restored. See [Why is my Azure subscription disabled and how do I reactivate it?](../../cost-management-billing/manage/subscription-disabled.md) for more information.
-
-<a id="-2134375618"></a>**The storage account has a firewall or virtual networks configured.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8033e |
-| **HRESULT (decimal)** | -2134375618 |
-| **Error string** | ECS_E_SERVER_BLOCKED_BY_NETWORK_ACL |
-| **Remediation required** | Yes |
-
-This error occurs when the Azure file share is inaccessible because of a storage account firewall or because the storage account belongs to a virtual network. Verify the firewall and virtual network settings on the storage account are configured properly. For more information, see [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings).
-
-<a id="-2134375911"></a>**Sync failed due to a problem with the sync database.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c80219 |
-| **HRESULT (decimal)** | -2134375911 |
-| **Error string** | ECS_E_SYNC_METADATA_WRITE_LOCK_TIMEOUT |
-| **Remediation required** | No |
-
-This error usually resolves itself, and can occur if there are:
-
-* A high number of file changes across the servers in the sync group.
-* A large number of errors on individual files and directories.
-
-If this error persists for longer than a few hours, create a support request and we will contact you to help you resolve this issue.
-
-<a id="-2146762487"></a>**The server failed to establish a secure connection. The cloud service received an unexpected certificate.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x800b0109 |
-| **HRESULT (decimal)** | -2146762487 |
-| **Error string** | CERT_E_UNTRUSTEDROOT |
-| **Remediation required** | Yes |
-
-This error can happen if your organization is using a TLS terminating proxy or if a malicious entity is intercepting the traffic between your server and the Azure File Sync service. If you are certain that this is expected (because your organization is using a TLS terminating proxy), you can skip certificate verification with a registry override.
-
-1. Create the SkipVerifyingPinnedRootCertificate registry value.
-
- ```powershell
- New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Azure\StorageSync -Name SkipVerifyingPinnedRootCertificate -PropertyType DWORD -Value 1
- ```
-
-2. Restart the sync service on the registered server.
-
- ```powershell
- Restart-Service -Name FileSyncSvc -Force
- ```
-
-By setting this registry value, the Azure File Sync agent will accept any locally trusted TLS/SSL certificate when transferring data between the server and the cloud service.
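-
-If you later need to revert to the default behavior, a minimal sketch is to remove the override and restart the service:
-
-```powershell
-# Remove the SkipVerifyingPinnedRootCertificate override and restart the sync service.
-Remove-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Azure\StorageSync -Name SkipVerifyingPinnedRootCertificate
-Restart-Service -Name FileSyncSvc -Force
-```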
-
-<a id="-2147012894"></a>**A connection with the service could not be established.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80072ee2 |
-| **HRESULT (decimal)** | -2147012894 |
-| **Error string** | WININET_E_TIMEOUT |
-| **Remediation required** | Yes |
--
-> [!Note]
-> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
-
-<a id="-2147012721"></a>**Sync failed because the server was unable to decode the response from the Azure File Sync service**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80072f8f |
-| **HRESULT (decimal)** | -2147012721 |
-| **Error string** | WININET_E_DECODING_FAILED |
-| **Remediation required** | Yes |
-
-This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration.
-
-<a id="-2134375680"></a>**Sync failed due to a problem with authentication.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c80300 |
-| **HRESULT (decimal)** | -2134375680 |
-| **Error string** | ECS_E_SERVER_CREDENTIAL_NEEDED |
-| **Remediation required** | Yes |
-
-This error typically occurs because the server time is incorrect. If the server is running in a virtual machine, verify the time on the host is correct.
-
-<a id="-2134364040"></a>**Sync failed due to certificate expiration.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c83078 |
-| **HRESULT (decimal)** | -2134364040 |
-| **Error string** | ECS_E_AUTH_SRV_CERT_EXPIRED |
-| **Remediation required** | Yes |
-
-This error occurs because the certificate used for authentication is expired.
-
-To confirm the certificate is expired, perform the following steps:
-1. Open the Certificates MMC snap-in, select Computer Account and navigate to Certificates (Local Computer)\Personal\Certificates.
-2. Check if the client authentication certificate is expired.
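-
-As an alternative to the MMC steps, the following sketch lists certificates in the local computer's personal store along with their expiration dates so you can check whether the client authentication certificate has expired:
-
-```powershell
-# List certificates in Local Computer\Personal and flag any that are expired.
-Get-ChildItem Cert:\LocalMachine\My |
-    Select-Object Subject, NotAfter, @{ Name = "Expired"; Expression = { $_.NotAfter -lt (Get-Date) } }
-```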
-
-If the client authentication certificate is expired, run the following PowerShell command on the server:
-
-```powershell
-Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
-```
-<a id="-2134375896"></a>**Sync failed due to authentication certificate not found.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c80228 |
-| **HRESULT (decimal)** | -2134375896 |
-| **Error string** | ECS_E_AUTH_SRV_CERT_NOT_FOUND |
-| **Remediation required** | Yes |
-
-This error occurs because the certificate used for authentication is not found.
-
-To resolve this issue, run the following PowerShell command on the server:
-
-```powershell
-Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
-```
-<a id="-2134364039"></a>**Sync failed due to authentication identity not found.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c83079 |
-| **HRESULT (decimal)** | -2134364039 |
-| **Error string** | ECS_E_AUTH_IDENTITY_NOT_FOUND |
-| **Remediation required** | Yes |
-
-This error occurs because the server endpoint deletion failed and the endpoint is now in a partially deleted state. To resolve this issue, retry deleting the server endpoint.
-
-<a id="-1906441711"></a><a id="-2134375654"></a><a id="doesnt-have-enough-free-space"></a>**The volume where the server endpoint is located is low on disk space.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x8e5e0211 |
-| **HRESULT (decimal)** | -1906441711 |
-| **Error string** | JET_errLogDiskFull |
-| **Remediation required** | Yes |
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8031a |
-| **HRESULT (decimal)** | -2134375654 |
-| **Error string** | ECS_E_NOT_ENOUGH_LOCAL_STORAGE |
-| **Remediation required** | Yes |
-
-Sync sessions fail with one of these errors because either the volume has insufficient disk space or the disk quota limit has been reached. This error commonly occurs because files outside the server endpoint are using up space on the volume. Free up space on the volume by adding additional server endpoints, moving files to a different volume, or increasing the size of the volume the server endpoint is on. If a disk quota is configured on the volume using [File Server Resource Manager](/windows-server/storage/fsrm/fsrm-overview) or [NTFS quota](/windows-server/administration/windows-commands/fsutil-quota), increase the quota limit.
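-
-To quickly check free space on the volume that hosts the server endpoint, you can run something like the following (the drive letter is a placeholder):
-
-```powershell
-# Show total and remaining space for the volume hosting the server endpoint.
-Get-Volume -DriveLetter D | Select-Object DriveLetter, FileSystemLabel, SizeRemaining, Size
-```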
-
-<a id="-2134364145"></a><a id="replica-not-ready"></a>**The service is not yet ready to sync with this server endpoint.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8300f |
-| **HRESULT (decimal)** | -2134364145 |
-| **Error string** | ECS_E_REPLICA_NOT_READY |
-| **Remediation required** | No |
-
-This error occurs because the cloud endpoint was created with content already existing on the Azure file share. Azure File Sync must scan the Azure file share for all content before allowing the server endpoint to proceed with its initial synchronization.
-
-<a id="-2134375877"></a><a id="-2134375908"></a><a id="-2134375853"></a>**Sync failed due to problems with many individual files.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8023b |
-| **HRESULT (decimal)** | -2134375877 |
-| **Error string** | ECS_E_SYNC_METADATA_KNOWLEDGE_SOFT_LIMIT_REACHED |
-| **Remediation required** | Yes |
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8021c |
-| **HRESULT (decimal)** | -2134375908 |
-| **Error string** | ECS_E_SYNC_METADATA_KNOWLEDGE_LIMIT_REACHED |
-| **Remediation required** | Yes |
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c80253 |
-| **HRESULT (decimal)** | -2134375853 |
-| **Error string** | ECS_E_TOO_MANY_PER_ITEM_ERRORS |
-| **Remediation required** | Yes |
-
-Sync sessions fail with one of these errors when there are many files that are failing to sync with per-item errors. Perform the steps documented in the [How do I see if there are specific files or folders that are not syncing?](?tabs=portal1%252cazure-portal#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing) section to resolve the per-item errors. For sync error ECS_E_SYNC_METADATA_KNOWLEDGE_LIMIT_REACHED, please open a support case.
-
-> [!NOTE]
-> Azure File Sync creates a temporary VSS snapshot once a day on the server to sync files that have open handles.
-
-<a id="-2134376423"></a>**Sync failed due to a problem with the server endpoint path.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c80019 |
-| **HRESULT (decimal)** | -2134376423 |
-| **Error string** | ECS_E_SYNC_INVALID_PATH |
-| **Remediation required** | Yes |
-
-Ensure the path exists, is on a local NTFS volume, and is not a reparse point or existing server endpoint.
-
-<a id="-2134375817"></a>**Sync failed because the filter driver version is not compatible with the agent version**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80C80277 |
-| **HRESULT (decimal)** | -2134375817 |
-| **Error string** | ECS_E_INCOMPATIBLE_FILTER_VERSION |
-| **Remediation required** | Yes |
-
-This error occurs because the Cloud Tiering filter driver (StorageSync.sys) version loaded is not compatible with the Storage Sync Agent (FileSyncSvc) service. If the Azure File Sync agent was upgraded, restart the server to complete the installation. If the error continues to occur, uninstall the agent, restart the server and reinstall the Azure File Sync agent.
-
-<a id="-2134376373"></a>**The service is currently unavailable.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8004b |
-| **HRESULT (decimal)** | -2134376373 |
-| **Error string** | ECS_E_SERVICE_UNAVAILABLE |
-| **Remediation required** | No |
-
-This error occurs because the Azure File Sync service is unavailable. This error will auto-resolve when the Azure File Sync service is available again.
-
-> [!Note]
-> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
-
-<a id="-2146233088"></a>**Sync failed due to an exception.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80131500 |
-| **HRESULT (decimal)** | -2146233088 |
-| **Error string** | COR_E_EXCEPTION |
-| **Remediation required** | No |
-
-This error occurs because sync failed due to an exception. If the error persists for several hours, please create a support request.
-
-<a id="-2134364045"></a>**Sync failed because the storage account has failed over to another region.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c83073 |
-| **HRESULT (decimal)** | -2134364045 |
-| **Error string** | ECS_E_STORAGE_ACCOUNT_FAILED_OVER |
-| **Remediation required** | Yes |
-
-This error occurs because the storage account has failed over to another region. Azure File Sync does not support the storage account failover feature. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files. To resolve this issue, move the storage account to the primary region.
-
-<a id="-2134375922"></a>**Sync failed due to a transient problem with the sync database.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8020e |
-| **HRESULT (decimal)** | -2134375922 |
-| **Error string** | ECS_E_SYNC_METADATA_WRITE_LEASE_LOST |
-| **Remediation required** | No |
-
-This error occurs because of an internal problem with the sync database. This error will auto-resolve when sync retries. If this error continues for an extended period of time, create a support request and we will contact you to help you resolve this issue.
-
-<a id="-2134364024"></a>**Sync failed due to change in Azure Active Directory tenant**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c83088 |
-| **HRESULT (decimal)** | -2134364024 |
-| **Error string** | ECS_E_INVALID_AAD_TENANT |
-| **Remediation required** | Yes |
-
-Verify you have the latest Azure File Sync agent version installed and give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](#troubleshoot-rbac)).
-
-<a id="-2134364010"></a>**Sync failed due to firewall and virtual network exception not configured**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c83096 |
-| **HRESULT (decimal)** | -2134364010 |
-| **Error string** | ECS_E_MGMT_STORAGEACLSBYPASSNOTSET |
-| **Remediation required** | Yes |
-
-This error occurs if the firewall and virtual network settings are enabled on the storage account and the "Allow trusted Microsoft services to access this storage account" exception is not checked. To resolve this issue, follow the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide.
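-
-The exception can also be enabled with the Az PowerShell module. A minimal sketch (placeholder names) is shown below; the deployment guide linked above remains the authoritative set of steps.
-
-```powershell
-# Allow trusted Microsoft services (including Azure File Sync) to bypass the storage account network rules.
-Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "<RG_Name>" `
-    -Name "<storage-account-name>" -Bypass AzureServices
-```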
-
-<a id="-2147024891"></a>**Sync failed with access denied due to security settings on the storage account or NTFS permissions on the server.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80070005 |
-| **HRESULT (decimal)** | -2147024891 |
-| **Error string** | ERROR_ACCESS_DENIED |
-| **Remediation required** | Yes |
-
-This error can occur if Azure File Sync cannot access the storage account due to security settings or if the NT AUTHORITY\SYSTEM account does not have permissions to the System Volume Information folder on the volume where the server endpoint is located. Note, if individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section.
-
-1. Verify the **SMB security settings** on the storage account are allowing **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
-2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
-3. Verify the **NT AUTHORITY\SYSTEM** account has permissions to the System Volume Information folder on the volume where the server endpoint is located by performing the following steps:
-
- a. Download the [PsExec](/sysinternals/downloads/psexec) tool.
- b. Run the following command from an elevated command prompt to launch a command prompt using the system account: `PsExec.exe -i -s -d cmd`
- c. From the command prompt running under the system account, run the following command to confirm the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder: `cacls "drive letter:\system volume information" /T /C`
- d. If the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder, run the following command: `cacls "drive letter:\system volume information" /T /E /G "NT AUTHORITY\SYSTEM:F"`
- - If step #d fails with access denied, run the following command to take ownership of the System Volume Information folder and then repeat step #d: `takeown /A /R /F "drive letter:\System Volume Information"`
-
-<a id="-2134375810"></a>**Sync failed because the Azure file share was deleted and recreated.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8027e |
-| **HRESULT (decimal)** | -2134375810 |
-| **Error string** | ECS_E_SYNC_REPLICA_ROOT_CHANGED |
-| **Remediation required** | Yes |
-
-This error occurs because Azure File Sync does not support deleting and recreating an Azure file share in the same sync group.
-
-To resolve this issue, delete and recreate the sync group by performing the following steps:
-
-1. Delete all server endpoints in the sync group.
-2. Delete the cloud endpoint.
-3. Delete the sync group.
-4. If cloud tiering was enabled on a server endpoint, delete the orphaned tiered files on the server by performing the steps documented in the [Tiered files are not accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) section.
-5. Recreate the sync group.
-
-<a id="-2134375852"></a>**Sync detected the replica has been restored to an older state**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c80254 |
-| **HRESULT (decimal)** | -2134375852 |
-| **Error string** | ECS_E_SYNC_REPLICA_BACK_IN_TIME |
-| **Remediation required** | No |
-
-No action is required. This error occurs because sync detected the replica has been restored to an older state. Sync will now enter a reconciliation mode, where it recreates the sync relationship by merging the contents of the Azure file share and the data on the server endpoint. When reconciliation mode is triggered, the process can be very time consuming depending upon the namespace size. Regular synchronization does not happen until the reconciliation finishes, and files that are different (last modified time or size) between the Azure file share and server endpoint will result in file conflicts.
-
-<a id="-2145844941"></a>**Sync failed because the HTTP request was redirected**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80190133 |
-| **HRESULT (decimal)** | -2145844941 |
-| **Error string** | HTTP_E_STATUS_REDIRECT_KEEP_VERB |
-| **Remediation required** | Yes |
-
-This error occurs because Azure File Sync does not support HTTP redirection (3xx status code). To resolve this issue, disable HTTP redirect on your proxy server or network device.
-
-<a id="-2134364027"></a>**A timeout occurred during offline data transfer, but it is still in progress.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c83085 |
-| **HRESULT (decimal)** | -2134364027 |
-| **Error string** | ECS_E_DATA_INGESTION_WAIT_TIMEOUT |
-| **Remediation required** | No |
-
-This error occurs when a data ingestion operation exceeds the timeout. This error can be ignored if sync is making progress (AppliedItemCount is greater than 0). See [How do I monitor the progress of a current sync session?](#how-do-i-monitor-the-progress-of-a-current-sync-session).
-
-<a id="-2134375814"></a>**Sync failed because the server endpoint path cannot be found on the server.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80c8027a |
-| **HRESULT (decimal)** | -2134375814 |
-| **Error string** | ECS_E_SYNC_ROOT_DIRECTORY_NOT_FOUND |
-| **Remediation required** | Yes |
-
-This error occurs if the directory used as the server endpoint path was renamed or deleted. If the directory was renamed, rename the directory back to the original name and restart the Storage Sync Agent service (FileSyncSvc).
-
-If the directory was deleted, perform the following steps to remove the existing server endpoint and create a new server endpoint using a new path:
-
-1. Remove the server endpoint in the sync group by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
-1. Create a new server endpoint in the sync group by following the steps documented in [Add a server endpoint](file-sync-server-endpoint-create.md).
-
-<a id="-2134375783"></a>**Server endpoint provisioning failed due to an empty server path.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80C80299 |
-| **HRESULT (decimal)** | -2134375783 |
-| **Error string** | ECS_E_SYNC_AUTHORITATIVE_UPLOAD_EMPTY_SET |
-| **Remediation required** | Yes |
-
-Server endpoint provisioning fails with this error code if these conditions are met:
-* This server endpoint was provisioned with the initial sync mode: [server authoritative](file-sync-server-endpoint-create.md#initial-sync-section)
-* Local server path is empty or contains no items recognized as able to sync.
-
-This provisioning error protects you from deleting all content that might be available in an Azure file share. Server authoritative upload is a special mode to catch up a cloud location that was already seeded with the updates from the server location. Review this [migration guide](../files/storage-files-migration-server-hybrid-databox.md) to understand the scenario this mode was built for.
-
-1. Remove the server endpoint in the sync group by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
-1. Create a new server endpoint in the sync group by following the steps documented in [Add a server endpoint](file-sync-server-endpoint-create.md).
-
-### Common troubleshooting steps
-<a id="troubleshoot-storage-account"></a>**Verify the storage account exists.**
-# [Portal](#tab/azure-portal)
-1. Navigate to the sync group within the Storage Sync Service.
-2. Select the cloud endpoint within the sync group.
-3. Note the Azure file share name in the opened pane.
-4. Select the linked storage account. If this link fails, the referenced storage account has been removed.
- ![A screenshot showing the cloud endpoint detail pane with a link to the storage account.](media/storage-sync-files-troubleshoot/file-share-inaccessible-1.png)
-
-# [PowerShell](#tab/azure-powershell)
-```powershell
-# Variables for you to populate based on your configuration
-$region = "<Az_Region>"
-$resourceGroup = "<RG_Name>"
-$syncService = "<storage-sync-service>"
-$syncGroup = "<sync-group>"
-
-# Log into the Azure account
-Connect-AzAccount
-
-# Check to ensure Azure File Sync is available in the selected Azure
-# region.
-$regions = [System.String[]]@()
-Get-AzLocation | ForEach-Object {
- if ($_.Providers -contains "Microsoft.StorageSync") {
- $regions += $_.Location
- }
-}
-
-if ($regions -notcontains $region) {
- throw [System.Exception]::new("Azure File Sync is either not available in the " + `
- " selected Azure Region or the region is mistyped.")
-}
-
-# Check to ensure resource group exists
-$resourceGroups = [System.String[]]@()
-Get-AzResourceGroup | ForEach-Object {
- $resourceGroups += $_.ResourceGroupName
-}
-
-if ($resourceGroups -notcontains $resourceGroup) {
- throw [System.Exception]::new("The provided resource group $resourceGroup does not exist.")
-}
-
-# Check to make sure the provided Storage Sync Service
-# exists.
-$syncServices = [System.String[]]@()
-
-Get-AzStorageSyncService -ResourceGroupName $resourceGroup | ForEach-Object {
- $syncServices += $_.StorageSyncServiceName
-}
-
-if ($syncServices -notcontains $syncService) {
- throw [System.Exception]::new("The provided Storage Sync Service $syncService does not exist.")
-}
-
-# Check to make sure the provided Sync Group exists
-$syncGroups = [System.String[]]@()
-
-Get-AzStorageSyncGroup -ResourceGroupName $resourceGroup -StorageSyncServiceName $syncService | ForEach-Object {
- $syncGroups += $_.SyncGroupName
-}
-
-if ($syncGroups -notcontains $syncGroup) {
- throw [System.Exception]::new("The provided sync group $syncGroup does not exist.")
-}
-
-# Get reference to cloud endpoint
-$cloudEndpoint = Get-AzStorageSyncCloudEndpoint `
- -ResourceGroupName $resourceGroup `
- -StorageSyncServiceName $syncService `
- -SyncGroupName $syncGroup
-
-# Get reference to storage account
-$storageAccount = Get-AzStorageAccount | Where-Object {
- $_.Id -eq $cloudEndpoint.StorageAccountResourceId
-}
-
-if ($storageAccount -eq $null) {
- throw [System.Exception]::new("The storage account referenced in the cloud endpoint does not exist.")
-}
-```
--
-<a id="troubleshoot-azure-file-share"></a>**Ensure the Azure file share exists.**
-# [Portal](#tab/azure-portal)
-1. Click **Overview** on the left-hand table of contents to return to the main storage account page.
-2. Select **Files** to view the list of file shares.
-3. Verify the file share referenced by the cloud endpoint appears in the list of file shares (you noted this name when verifying that the storage account exists).
-
-# [PowerShell](#tab/azure-powershell)
-```powershell
-$fileShare = Get-AzStorageShare -Context $storageAccount.Context | Where-Object {
- $_.Name -eq $cloudEndpoint.AzureFileShareName -and
- $_.IsSnapshot -eq $false
-}
-
-if ($fileShare -eq $null) {
- throw [System.Exception]::new("The Azure file share referenced by the cloud endpoint does not exist")
-}
-```
--
-<a id="troubleshoot-rbac"></a>**Ensure Azure File Sync has access to the storage account.**
-# [Portal](#tab/azure-portal)
-1. Click **Access control (IAM)** on the left-hand table of contents.
-1. Click the **Role assignments** tab to list the users and applications (*service principals*) that have access to your storage account.
-1. Verify **Microsoft.StorageSync** or **Hybrid File Sync Service** (old application name) appears in the list with the **Reader and Data Access** role.
-
- ![A screenshot of the Hybrid File Sync Service service principal in the access control tab of the storage account](media/storage-sync-files-troubleshoot/file-share-inaccessible-3.png)
-
- If **Microsoft.StorageSync** or **Hybrid File Sync Service** does not appear in the list, perform the following steps:
-
- - Click **Add**.
- - In the **Role** field, select **Reader and Data Access**.
- - In the **Select** field, type **Microsoft.StorageSync**, select the role and click **Save**.
-
-# [PowerShell](#tab/azure-powershell)
-```powershell
-$role = Get-AzRoleAssignment -Scope $storageAccount.Id | Where-Object { $_.DisplayName -eq "Microsoft.StorageSync" }
-
-if ($role -eq $null) {
- throw [System.Exception]::new("The storage account does not have the Azure File Sync " + `
- "service principal authorized to access the data within the " + `
- "referenced Azure file share.")
-}
-```
--
-## Cloud tiering
-There are two paths for failures in cloud tiering:
-
-- Files can fail to tier, which means that Azure File Sync unsuccessfully attempts to tier a file to Azure Files.
-- Files can fail to recall, which means that the Azure File Sync file system filter (StorageSync.sys) fails to download data when a user attempts to access a file that has been tiered.
-
-There are two main classes of failures that can happen via either failure path:
-
-- Cloud storage failures
- - *Transient storage service availability issues*. For more information, see the [Service Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
- - *Inaccessible Azure file share*. This failure typically happens when you delete the Azure file share while it is still a cloud endpoint in a sync group.
- - *Inaccessible storage account*. This failure typically happens when you delete the storage account while it still has an Azure file share that is a cloud endpoint in a sync group.
-- Server failures
- - *Azure File Sync file system filter (StorageSync.sys) is not loaded*. In order to respond to tiering/recall requests, the Azure File Sync file system filter must be loaded. The filter not being loaded can happen for several reasons, but the most common reason is that an administrator unloaded it manually. The Azure File Sync file system filter must be loaded at all times for Azure File Sync to properly function.
- - *Missing, corrupt, or otherwise broken reparse point*. A reparse point is a special data structure on a file that consists of two parts:
- 1. A reparse tag, which indicates to the operating system that the Azure File Sync file system filter (StorageSync.sys) may need to do some action on IO to the file.
- 2. Reparse data, which indicates to the file system filter the URI of the file on the associated cloud endpoint (the Azure file share).
-
- The most common way a reparse point could become corrupted is if an administrator attempts to modify either the tag or its data.
- - *Network connectivity issues*. In order to tier or recall a file, the server must have internet connectivity.
-
-The following sections indicate how to troubleshoot cloud tiering issues and determine if an issue is a cloud storage issue or a server issue.
-
-### How to monitor tiering activity on a server
-To monitor tiering activity on a server, use Event ID 9003, 9016 and 9029 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer).
-
-- Event ID 9003 provides error distribution for a server endpoint. For example, Total Error Count, ErrorCode, etc. Note, one event is logged per error code.
-- Event ID 9016 provides ghosting results for a volume. For example, Free space percent, Number of files ghosted in session, Number of files failed to ghost, etc.
-- Event ID 9029 provides ghosting session information for a server endpoint. For example, Number of files attempted in the session, Number of files tiered in the session, Number of files already tiered, etc.
-
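-As a sketch, you can pull these events with Get-WinEvent. The channel name used here is an assumption, so confirm it in Event Viewer (Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry) before relying on it; the same approach works for the recall event IDs listed in the next section.
-
-```powershell
-# Most recent tiering-related telemetry events (assumed channel name; verify in Event Viewer).
-Get-WinEvent -FilterHashtable @{
-    LogName = "Microsoft-FileSync-Agent/Telemetry"
-    Id      = 9003, 9016, 9029
-} -MaxEvents 20 | Format-Table TimeCreated, Id, Message -AutoSize
-```
-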
-### How to monitor recall activity on a server
-To monitor recall activity on a server, use Event ID 9005, 9006, 9009 and 9059 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer).
-
-- Event ID 9005 provides recall reliability for a server endpoint. For example, Total unique files accessed, Total unique files with failed access, etc.
-- Event ID 9006 provides recall error distribution for a server endpoint. For example, Total Failed Requests, ErrorCode, etc. Note, one event is logged per error code.
-- Event ID 9009 provides recall session information for a server endpoint. For example, DurationSeconds, CountFilesRecallSucceeded, CountFilesRecallFailed, etc.
-- Event ID 9059 provides application recall distribution for a server endpoint. For example, ShareId, Application Name, and TotalEgressNetworkBytes.
-
-### How to troubleshoot files that fail to tier
-If files fail to tier to Azure Files:
-
-1. In Event Viewer, review the telemetry, operational and diagnostic event logs, located under Applications and Services\Microsoft\FileSync\Agent.
- 1. Verify the files exist in the Azure file share.
-
- > [!NOTE]
- > A file must be synced to an Azure file share before it can be tiered.
-
- 2. Verify the server has internet connectivity.
- 3. Verify the Azure File Sync filter drivers (StorageSync.sys and StorageSyncGuard.sys) are running:
- - At an elevated command prompt, run `fltmc`. Verify that the StorageSync.sys and StorageSyncGuard.sys file system filter drivers are listed.
-
-> [!NOTE]
-> An Event ID 9003 is logged once an hour in the Telemetry event log if a file fails to tier (one event is logged per error code). Check the [Tiering errors and remediation](#tiering-errors-and-remediation) section to see if remediation steps are listed for the error code.
-
-### Tiering errors and remediation
-
-| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation |
-||-|--|-|-|
-| 0x80c86045 | -2134351803 | ECS_E_INITIAL_UPLOAD_PENDING | The file failed to tier because the initial upload is in progress. | No action required. The file will be tiered once the initial upload completes. |
-| 0x80c86043 | -2134351805 | ECS_E_GHOSTING_FILE_IN_USE | The file failed to tier because it's in use. | No action required. The file will be tiered when it's no longer in use. |
-| 0x80c80241 | -2134375871 | ECS_E_GHOSTING_EXCLUDED_BY_SYNC | The file failed to tier because it's excluded by sync. | No action required. Files in the sync exclusion list cannot be tiered. |
-| 0x80c86042 | -2134351806 | ECS_E_GHOSTING_FILE_NOT_FOUND | The file failed to tier because it was not found on the server. | No action required. If the error persists, check if the file exists on the server. |
-| 0x80c83053 | -2134364077 | ECS_E_CREATE_SV_FILE_DELETED | The file failed to tier because it was deleted in the Azure file share. | No action required. The file should be deleted on the server when the next download sync session runs. |
-| 0x80c8600e | -2134351858 | ECS_E_AZURE_SERVER_BUSY | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
-| 0x80072ee7 | -2147012889 | WININET_E_NAME_NOT_RESOLVED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
-| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to tier due to an access denied error. This error can occur if the file is located on a DFS-R read-only replication folder. | Azure File Sync does not support server endpoints on DFS-R read-only replication folders. See the [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
-| 0x80072efe | -2147012866 | WININET_E_CONNECTION_ABORTED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
-| 0x80c80261 | -2134375839 | ECS_E_GHOSTING_MIN_FILE_SIZE | The file failed to tier because the file size is less than the supported size. | The minimum supported file size is twice the file system cluster size. For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. |
-| 0x80c83007 | -2134364153 | ECS_E_STORAGE_ERROR | The file failed to tier due to an Azure storage issue. | If the error persists, open a support request. |
-| 0x800703e3 | -2147023901 | ERROR_OPERATION_ABORTED | The file failed to tier because it was recalled at the same time. | No action required. The file will be tiered when the recall completes and the file is no longer in use. |
-| 0x80c80264 | -2134375836 | ECS_E_GHOSTING_FILE_NOT_SYNCED | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
-| 0x80070001 | -2147024895 | ERROR_INVALID_FUNCTION | The file failed to tier because the cloud tiering filter driver (storagesync.sys) is not running. | To resolve this issue, open an elevated command prompt and run the following command: `fltmc load storagesync`<br>If the Azure File Sync filter driver fails to load when running the fltmc command, uninstall the Azure File Sync agent, restart the server and reinstall the Azure File Sync agent. |
-| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to tier due to insufficient disk space on the volume where the server endpoint is located. | To resolve this issue, free at least 100 MiB of disk space on the volume where the server endpoint is located. |
-| 0x80070490 | -2147023728 | ERROR_NOT_FOUND | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
-| 0x80c80262 | -2134375838 | ECS_E_GHOSTING_UNSUPPORTED_RP | The file failed to tier because it's an unsupported reparse point. | If the file is a Data Deduplication reparse point, follow the steps in the [planning guide](file-sync-planning.md#data-deduplication) to enable Data Deduplication support. Files with reparse points other than Data Deduplication are not supported and will not be tiered. |
-| 0x80c83052 | -2134364078 | ECS_E_CREATE_SV_STREAM_ID_MISMATCH | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. |
-| 0x80c80269 | -2134375831 | ECS_E_GHOSTING_REPLICA_NOT_FOUND | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
-| 0x80072ee2 | -2147012894 | WININET_E_TIMEOUT | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
-| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. |
-| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to tier due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. |
-| 0x8e5e03fe | -1906441218 | JET_errDiskIO | The file failed to tier due to an I/O error when writing to the cloud tiering database. | If the error persists, run chkdsk on the volume and check the storage hardware. |
-| 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database is not running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. |
-| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST | The file cannot be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting, which is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync (see the sketch after this table). |
-| 0x80C86050 | -2134351792 | ECS_E_REPLICA_NOT_READY_FOR_TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. |
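The remediation for ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST can be scripted. The following is a minimal sketch, assuming the exclusion list is a string value named GhostingExclusionList under the key cited above; check the existing value on your server before changing anything, because the value format shown in the comment is only a hypothetical example.

```powershell
# Inspect the current tiering exclusion list (key and value name taken from the table above).
$key = 'HKLM:\SOFTWARE\Microsoft\Azure\StorageSync'
Get-ItemProperty -Path $key -Name GhostingExclusionList -ErrorAction SilentlyContinue

# Hypothetical example of updating the list; confirm the expected value format for your agent version first.
# Set-ItemProperty -Path $key -Name GhostingExclusionList -Value '.lnk|.url'
```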
---
-### How to troubleshoot files that fail to be recalled
-If files fail to be recalled:
-1. In Event Viewer, review the telemetry, operational and diagnostic event logs, located under Applications and Services\Microsoft\FileSync\Agent.
- 1. Verify the files exist in the Azure file share.
- 2. Verify the server has internet connectivity.
- 3. Open the Services MMC snap-in and verify the Storage Sync Agent service (FileSyncSvc) is running.
- 4. Verify the Azure File Sync filter drivers (StorageSync.sys and StorageSyncGuard.sys) are running:
- - At an elevated command prompt, run `fltmc`. Verify that the StorageSync.sys and StorageSyncGuard.sys file system filter drivers are listed.
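The service and filter driver checks in steps 3 and 4 can also be run from an elevated PowerShell prompt. This sketch only wraps the checks already described above; the service and driver names come from this article.

```powershell
# Check that the Storage Sync Agent service is running.
Get-Service -Name FileSyncSvc | Select-Object Status, Name, DisplayName

# Confirm the Azure File Sync file system filter drivers are loaded.
fltmc filters | Select-String -Pattern 'StorageSync'
```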
-
-> [!NOTE]
-> An Event ID 9006 is logged once per hour in the Telemetry event log if a file fails to recall (one event is logged per error code). Check the [Recall errors and remediation](#recall-errors-and-remediation) section to see if remediation steps are listed for the error code.
-
-### Recall errors and remediation
-
-| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation |
-||-|--|-|-|
-| 0x80070079 | -2147024775 | ERROR_SEM_TIMEOUT | The file failed to recall due to an I/O timeout. This issue can occur for several reasons: server resource constraints, poor network connectivity or an Azure storage issue (for example, throttling). | No action required. If the error persists for several hours, please open a support case. |
-| 0x80070036 | -2147024842 | ERROR_NETWORK_BUSY | The file failed to recall due to a network issue. | If the error persists, check network connectivity to the Azure file share. |
-| 0x80c80037 | -2134376393 | ECS_E_SYNC_SHARE_NOT_FOUND | The file failed to recall because the server endpoint was deleted. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
-| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to recall due to an access denied error. This issue can occur if the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | To resolve this issue, add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
-| 0x80c86002 | -2134351870 | ECS_E_AZURE_RESOURCE_NOT_FOUND | The file failed to recall because it's not accessible in the Azure file share. | To resolve this issue, verify the file exists in the Azure file share. If the file exists in the Azure file share, upgrade to the latest Azure File Sync [agent version](file-sync-release-notes.md#supported-versions). |
-| 0x80c8305f | -2134364065 | ECS_E_EXTERNAL_STORAGE_ACCOUNT_AUTHORIZATION_FAILED | The file failed to recall due to authorization failure to the storage account. | To resolve this issue, verify [Azure File Sync has access to the storage account](?tabs=portal1%252cazure-portal#troubleshoot-rbac). |
-| 0x80c86030 | -2134351824 | ECS_E_AZURE_FILE_SHARE_NOT_FOUND | The file failed to recall because the Azure file share is not accessible. | Verify the file share exists and is accessible. If the file share was deleted and recreated, perform the steps documented in the [Sync failed because the Azure file share was deleted and recreated](?tabs=portal1%252cazure-portal#-2134375810) section to delete and recreate the sync group. |
-| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to recall due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. |
-| 0x8007000e | -2147024882 | ERROR_OUTOFMEMORY | The file failed to recall due to insufficient memory. | If the error persists, investigate which application or kernel-mode driver is causing the low memory condition. |
-| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to recall due to insufficient disk space. | To resolve this issue, free up space on the volume by moving files to a different volume, increase the size of the volume, or force files to tier by using the Invoke-StorageSyncCloudTiering cmdlet (see the sketch after this table). |
-| 0x80072f8f | -2147012721 | WININET_E_DECODING_FAILED | The file failed to recall because the server was unable to decode the response from the Azure File Sync service. | This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration. |
-| 0x80090352 | -2146892974 | SEC_E_ISSUING_CA_UNTRUSTED | The file failed to recall because your organization is using a TLS terminating proxy or a malicious entity is intercepting the traffic between your server and the Azure File Sync service. | If you are certain this is expected (because your organization is using a TLS terminating proxy), follow the steps documented for error [CERT_E_UNTRUSTEDROOT](#-2146762487) to resolve this issue. |
-| 0x80c86047 | -2134351801 | ECS_E_AZURE_SHARE_SNAPSHOT_NOT_FOUND | The file failed to recall because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
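For the ERROR_DISK_FULL remediation above, files can be proactively tiered with the Invoke-StorageSyncCloudTiering cmdlet. The sketch below reuses the module path from the other examples in this article; the `-Path` parameter is an assumption to verify against your agent version.

```powershell
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
# Tier eligible files under the server endpoint path to free up disk space (parameter name assumed).
Invoke-StorageSyncCloudTiering -Path <server endpoint path>
```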
-
-### Tiered files are not accessible on the server after deleting a server endpoint
-Tiered files on a server will become inaccessible if the files are not recalled prior to deleting a server endpoint.
-
-Errors logged if tiered files are not accessible:
-- When syncing a file, error code -2147024829 (0x80070043 - ERROR_BAD_NET_NAME) is logged in the ItemResults event log
-- When recalling a file, error code -2134376393 (0x80c80037 - ECS_E_SYNC_SHARE_NOT_FOUND) is logged in the RecallResults event log
-
-Restoring access to your tiered files is possible if the following conditions are met:
-- Server endpoint was deleted within past 30 days
-- Cloud endpoint was not deleted
-- File share was not deleted
-- Sync group was not deleted
-
-If the above conditions are met, you can restore access to the files on the server by recreating the server endpoint at the same path on the server within the same sync group within 30 days.
-
-If the above conditions are not met, restoring access is not possible as these tiered files on the server are now orphaned. Follow the instructions below to remove the orphaned tiered files.
-
-**Notes**
-- When tiered files are not accessible on the server, the full file should still be accessible if you access the Azure file share directly.
-- To prevent orphaned tiered files in the future, follow the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md) when deleting a server endpoint.
-
-<a id="get-orphaned"></a>**How to get the list of orphaned tiered files**
-
-1. Run the following PowerShell commands to list orphaned tiered files:
-```powershell
-Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
-$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
-$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
-```
-2. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
-
-<a id="remove-orphaned"></a>**How to remove orphaned tiered files**
-
-*Option 1: Delete the orphaned tiered files*
-
-This option deletes the orphaned tiered files on the Windows Server. It requires removing the server endpoint if one still exists (for example, because it was recreated after 30 days or is connected to a different sync group). File conflicts will occur if files are updated on the Windows Server or in the Azure file share before the server endpoint is recreated.
-
-1. Back up the Azure file share and server endpoint location.
-2. Remove the server endpoint in the sync group (if it exists) by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
-
-> [!Warning]
-> If the server endpoint is not removed prior to using the Remove-StorageSyncOrphanedTieredFiles cmdlet, deleting the orphaned tiered file on the server will delete the full file in the Azure file share.
-
-3. Run the following PowerShell commands to list orphaned tiered files:
-
-```powershell
-Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
-$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
-$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
-```
-4. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
-5. Run the following PowerShell commands to delete orphaned tiered files:
-
-```powershell
-Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
-$orphanFilesRemoved = Remove-StorageSyncOrphanedTieredFiles -Path <folder path containing orphaned tiered files> -Verbose
-$orphanFilesRemoved.OrphanedTieredFiles > DeletedOrphanFiles.txt
-```
-**Notes**
-- Tiered files modified on the server that are not synced to the Azure file share will be deleted.
-- Tiered files that are accessible (not orphaned) will not be deleted.
-- Non-tiered files will remain on the server.
-
-6. Optional: Recreate the server endpoint if deleted in step 2.
-
-*Option 2: Mount the Azure file share and copy the files locally that are orphaned on the server*
-
-This option doesn't require removing the server endpoint but requires sufficient disk space to copy the full files locally.
-
-1. [Mount](../files/storage-how-to-use-files-windows.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) the Azure file share on the Windows Server that has orphaned tiered files.
-2. Run the following PowerShell commands to list orphaned tiered files:
-```powershell
-Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
-$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
-$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
-```
-3. Use the OrphanTieredFiles.txt output file to identify orphaned tiered files on the server.
-4. Overwrite the orphaned tiered files by copying the full file from the Azure file share to the Windows Server.
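As a sketch of step 4, assuming the Azure file share is mounted as drive Z: and the server endpoint is at D:\Data (both hypothetical paths), copying a full file over its tiered placeholder looks like this:

```powershell
# Hypothetical paths; replace with your mounted share and the file paths listed in OrphanTieredFiles.txt.
Copy-Item -Path 'Z:\Reports\Q1.xlsx' -Destination 'D:\Data\Reports\Q1.xlsx' -Force
```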
-
-### How to troubleshoot files unexpectedly recalled on a server
-Antivirus, backup, and other applications that read large numbers of files cause unintended recalls unless they respect the offline attribute and skip reading the content of those files. For products that support it, enabling the option to skip offline files helps avoid unintended recalls during operations like antivirus scans or backup jobs.
-
-Consult with your software vendor to learn how to configure their solution to skip reading offline files.
-
-Unintended recalls also might occur in other scenarios, like when you are browsing cloud-tiered files in File Explorer. This is likely to occur on Windows Server 2016 if the folder contains executable files. File Explorer was improved for Windows Server 2019 and later to better handle offline files.
-
-> [!NOTE]
-> Use Event ID 9059 in the Telemetry event log to determine which applications are causing recalls. This event provides the application recall distribution for a server endpoint and is logged once an hour.
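These events can also be read from PowerShell instead of Event Viewer. The channel name below is an assumption based on the Event Viewer path mentioned earlier (Applications and Services\Microsoft\FileSync\Agent); confirm it with `Get-WinEvent -ListLog *FileSync*`.

```powershell
# List the most recent application recall distribution events (Event ID 9059) from the Telemetry log.
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-FileSync-Agent/Telemetry'; Id = 9059 } -MaxEvents 5 |
    Format-List TimeCreated, Message
```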
-
-### Process exclusions for Azure File Sync
-
-If you want to configure your antivirus or other applications to skip scanning for files accessed by Azure File Sync, configure the following process exclusions:
-- C:\Program Files\Azure\StorageSyncAgent\AfsAutoUpdater.exe
-- C:\Program Files\Azure\StorageSyncAgent\FileSyncSvc.exe
-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentLauncher.exe
-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentHost.exe
-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentManager.exe
-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentCore.exe
-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\Extensions\XSyncMonitoringExtension\AzureStorageSyncMonitor.exe
-
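If you use Microsoft Defender Antivirus, the exclusions above can be added with the Add-MpPreference cmdlet. This is a sketch that assumes the default agent installation path shown in the list; other antivirus products have their own configuration mechanisms.

```powershell
# Add process exclusions for the Azure File Sync binaries (default installation path assumed).
$processes = @(
    'C:\Program Files\Azure\StorageSyncAgent\AfsAutoUpdater.exe',
    'C:\Program Files\Azure\StorageSyncAgent\FileSyncSvc.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentLauncher.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentHost.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentManager.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentCore.exe',
    'C:\Program Files\Azure\StorageSyncAgent\MAAgent\Extensions\XSyncMonitoringExtension\AzureStorageSyncMonitor.exe'
)
foreach ($process in $processes) { Add-MpPreference -ExclusionProcess $process }
```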
-### TLS 1.2 required for Azure File Sync
-
-You can view the TLS settings on your server by checking the [registry settings](/windows-server/security/tls/tls-registry-settings).
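For a quick check from PowerShell, the following sketch reads the client-side Schannel values for TLS 1.2 as described in the linked registry settings article; missing keys or values usually mean the operating system defaults apply.

```powershell
# Inspect TLS 1.2 client settings (absent values typically mean the OS default is in effect).
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client'
Get-ItemProperty -Path $path -ErrorAction SilentlyContinue | Select-Object Enabled, DisabledByDefault
```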
-
-If you are using a proxy, consult your proxy's documentation and ensure it is configured to use TLS 1.2.
-
-## General troubleshooting
+## General troubleshooting first steps
If you encounter issues with Azure File Sync on a server, start by completing the following steps: 1. In Event Viewer, review the telemetry, operational and diagnostic event logs. - Sync, tiering, and recall issues are logged in the telemetry, diagnostic and operational event logs under Applications and Services\Microsoft\FileSync\Agent.
To run AFSDiag, perform the steps below:
``` 2. Reproduce the issue. When you're finished, enter **D**.
-3. A .zip file that contains logs and trace files is saved to the output directory that you specified.
+3. A .zip file that contains logs and trace files is saved to the output directory that you specified.
+
+## Common troubleshooting subject areas
+
+For more detailed information, choose the subject area that you'd like to troubleshoot.
+
+- [Agent installation and server registration issues](file-sync-troubleshoot-installation.md)
+- [Sync group management (including cloud endpoint and server endpoint creation)](file-sync-troubleshoot-sync-group-management.md)
+- [Sync errors](file-sync-troubleshoot-sync-errors.md)
+- [Cloud tiering issues](file-sync-troubleshoot-cloud-tiering.md)
+
+Some issues can be related to more than one subject area.
## See also - [Monitor Azure File Sync](file-sync-monitoring.md) - [Troubleshoot Azure Files problems in Windows](../files/storage-troubleshoot-windows-file-connection-problems.md) - [Troubleshoot Azure Files problems in Linux](../files/storage-troubleshoot-linux-file-connection-problems.md)
+- [Troubleshoot Azure file shares performance issues](../files/storage-troubleshooting-files-performance.md)
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
* <a id="afs-resource-move"></a> **Can I move the storage sync service and/or storage account to a different resource group, subscription, or Azure AD tenant?**
- Yes, the storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](../file-sync/file-sync-troubleshoot.md?tabs=portal1%252cportal#troubleshoot-rbac)).
+ Yes, the storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](../file-sync/file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cportal#troubleshoot-rbac)).
> [!Note] > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
At this point, there are differences between your on-premises Windows Server ins
> [!WARNING] > You *must not* start the RoboCopy before the server has the namespace for an Azure file share downloaded fully. For more information, see [Determine when your namespace has fully downloaded to your server](#determine-when-your-namespace-has-fully-synced-to-your-server).
- You only want to copy files that were changed after the migration job last ran and files that haven't moved through these jobs before. You can solve the problem as to why they didn't move later on the server, after the migration is complete. For more information, see [Azure File Sync troubleshooting](../file-sync/file-sync-troubleshoot.md#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing).
+ You only want to copy files that were changed after the migration job last ran and files that haven't moved through these jobs before. You can solve the problem as to why they didn't move later on the server, after the migration is complete. For more information, see [Azure File Sync troubleshooting](../file-sync/file-sync-troubleshoot-sync-errors.md#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing).
RoboCopy has several parameters. The following example showcases a finished command and a list of reasons for choosing these parameters.
storage Storage Troubleshooting Files Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshooting-files-nfs.md
The following diagram depicts connectivity using public endpoints.
- [Private endpoint](storage-files-networking-endpoints.md#create-a-private-endpoint) - Access is more secure than the service endpoint.
- - Access to NFS share via private link is available from within and outside the storage account's Azure region (cross-region, on-premise)
+ - Access to NFS share via private link is available from within and outside the storage account's Azure region (cross-region, on-premises)
 - Virtual network peering with the virtual network hosting the private endpoint gives NFS share access to clients in peered virtual networks. - Private endpoints can be used with ExpressRoute, point-to-site, and site-to-site VPNs.
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview.md
This article highlights Microsoft partner companies that deliver a network attac
| ![Panzura.](./media/panzura-logo.png) |**Panzura**<br>Panzura is the fabric that transforms Azure cloud storage into a high-performance global file system. By delivering one authoritative data source for all users, Panzura allows enterprises to use Azure as a globally available data center, with all the functionality and speed of a single-site NAS, including automatic file locking, immediate global data consistency, and local file operation performance. |[Partner page](https://panzura.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/panzura-file-system.panzura-freedom-filer)| | ![Pure Storage.](./media/pure-logo.png) |**Pure Storage**<br>Pure delivers a modern data experience that empowers organizations to run their operations as a true, automated, storage as-a-service model seamlessly across multiple clouds.|[Partner page](https://www.purestorage.com/company/technology-partners/microsoft.html)<br>[Solution Video](https://azure.microsoft.com/resources/videos/pure-storage-overview)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.pure_storage_cloud_block_store_deployment?tab=Overview)| | ![Qumulo.](./media/qumulo-logo.png)|**Qumulo**<br>Qumulo is a fast, scalable, and simple-to-use file system that makes it easy to store, manage, and run applications that use file data at scale on Microsoft Azure. Qumulo on Azure offers multiple petabytes (PB) of storage capacity and up to 20 GB/s of performance per file system. Windows (SMB) and Linux (NFS) are both natively supported. Patented software architecture delivers a low per-terabyte (TB) cost. Media & Entertainment, Genomics, Technology, Natural Resources, and Finance companies all run their most demanding workloads on Qumulo in the cloud. With a Net Promoter Score of 89, customers use Qumulo for its scale, performance and ease-of-use capabilities like real-time visual insights into how storage is used and award-winning Slack-based support. Sign up for a free POC today through [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview) or [Qumulo.com](https://qumulo.com/). | [Partner page](https://qumulo.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview)<br>[Datasheet](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWUtF0)|
-| ![Scality.](./media/scality-logo.png) |**Scality**<br>Scality builds a software-defined file and object platform designed for on-premise, hybrid, and multi-cloud environments. Scality's integration with Azure Blob Storage enable enterprises to manage and secure their data between on-premises environments and Azure, and meet the demand of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)|
+| ![Scality.](./media/scality-logo.png) |**Scality**<br>Scality builds a software-defined file and object platform designed for on-premises, hybrid, and multi-cloud environments. Scality's integration with Azure Blob Storage enables enterprises to manage and secure their data between on-premises environments and Azure, and meet the demand of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)|
| ![Tiger Technology company logo.](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure data management software solutions. Tiger Technology enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br><br> Tiger Bridge is a non-proprietary, software-only data and storage management system. It blends on-premises and multi-tier cloud storage into a single space, and enables hybrid workflows. This transparent file server extension lets you benefit from Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses several data management challenges, including file server extension, disaster recovery, cloud migration, backup and archive, remote collaboration, and multi-site sync. It also offers continuous data protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)| | ![XenData company logo.](./media/xendata-logo.png) |**XenData**<br>XenData software creates multi-tier storage systems that manage files and folders across on-premises storage and Azure Blob Storage. XenData Multi-Site Sync software creates a global file system for distributed teams, enabling them to share and synchronize files across multiple locations. XenData cloud solutions are optimized for video files, supporting video streaming and partial file restore. They are integrated with many complementary software products used in the Media and Entertainment industry and support a variety of workflows. Other industries and applications that use XenData solutions include Oil and Gas, Engineering and Scientific Data, Video Surveillance and Medical Imaging. |[Partner page](https://xendata.com/tech_partners_cloud/azure/)| | ![Silk company logo.](./media/silk-logo.jpg) |**Silk**<br>The Silk Platform quickly moves mission-critical data to Azure and keeps it operating at performance standards on par with even the fastest on-prem environments. Silk works to ensure a seamless, efficient, and smooth migration process, followed by unparalleled performance speeds for all data and applications in the Azure cloud. The platform makes cloud environments run up to 10x faster and the entire application stack is more resilient to any infrastructure hiccups or malfunctions. |[Partner page](https://silk.us/solutions/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/silk.silk_cloud_data_platform?tab=overview)|
storage Monitor Table Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage-reference.md
Previously updated : 10/02/2020 Last updated : 08/18/2022
Capacity metrics values are sent to Azure Monitor every hour. The values are ref
Azure Storage provides the following capacity metrics in Azure Monitor.
-#### Account Level
+#### Account-level metrics
[!INCLUDE [Account level capacity metrics](../../../includes/azure-storage-account-capacity-metrics.md)]
This table shows [Table storage metrics](../../azure-monitor/essentials/metrics-
| TableCount | The number of tables in the storage account. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Value example: 1024 | | TableEntityCount | The number of table entities in the storage account. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Value example: 1024 |
+To learn how to calculate Table storage capacity, see [Calculate the size/capacity of storage account and it services](https://techcommunity.microsoft.com/t5/azure-paas-blog/calculate-the-size-capacity-of-storage-account-and-it-services/ba-p/1064046).
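As an illustration, the same capacity metrics can be pulled with Azure PowerShell (Az.Monitor module). The resource ID format for the table service sub-resource is an assumption to verify for your storage account.

```powershell
# Retrieve the TableCount capacity metric for a storage account's table service (resource ID format assumed).
$resourceId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>/tableServices/default'
Get-AzMetric -ResourceId $resourceId -MetricName 'TableCount' -AggregationType Average
```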
+ ### Transaction metrics Transaction metrics are emitted on every request to a storage account from Azure Storage to Azure Monitor. In the case of no activity on your storage account, there will be no data on transaction metrics in the period. All transaction metrics are available at both account and Table storage service level. The time grain defines the time interval that metric values are presented. The supported time grains for all transaction metrics are PT1H and PT1M.
The following table lists the properties for Azure Storage resource logs when th
## See also - See [Monitoring Azure Table storage](monitor-table-storage.md) for a description of monitoring Azure Storage.-- See [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
storsimple Storsimple 8000 Aad Registration Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-aad-registration-key.md
description: Explains how to use Azure AD-based authentication for your service,
Previously updated : 02/08/2022 Last updated : 08/18/2022 # Use Azure Active Directory (AD) authentication for your StorSimple ## Overview
storsimple Storsimple 8000 Automation Azurerm Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-automation-azurerm-runbook.md
description: Learn how to use Azure Automation Runbook to manage your StorSimple
Previously updated : 10/23/2017 Last updated : 08/18/2022 # Use Azure Automation runbooks to manage StorSimple devices + This article describes how Azure Automation runbooks are used to manage your StorSimple 8000 series device in Azure portal. A sample runbook is included to walk you through the steps of configuring your environment to execute this runbook.
storsimple Storsimple 8000 Automation Azurerm Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-automation-azurerm-scripts.md
description: Learn how to use Azure Resource Manager SDK-based scripts to manage
Previously updated : 10/03/2017 Last updated : 08/18/2022 # Use Azure Resource Manager SDK-based scripts to manage StorSimple devices + This article describes how Azure Resource Manager SDK-based scripts can be used to manage your StorSimple 8000 series device. A sample script is also included to walk you through the steps of configuring your environment to run these scripts. This article applies to StorSimple 8000 series devices running in Azure portal only.
storsimple Storsimple 8000 Battery Replacement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-battery-replacement.md
description: Describes how to remove, replace, and maintain the backup battery m
Previously updated : 01/09/2018 Last updated : 08/18/2022 # Replace the backup battery module on your StorSimple device + ## Overview The primary enclosure Power and Cooling Module (PCM) on your Microsoft Azure StorSimple device has an additional battery pack. This pack provides power so that the StorSimple device can save data if there is loss of AC power to the primary enclosure. This battery pack is referred to as the *backup battery module*. The backup battery module exists only for the primary enclosure in your StorSimple device (the EBOD enclosure does not contain a backup battery module).
storsimple Storsimple 8000 Change Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-change-passwords.md
NA Previously updated : 07/03/2017 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to change your StorSimple passwords + ## Overview The Azure portal **Device settings** option contains all the device parameters that you can reconfigure on a StorSimple device that is managed by a StorSimple Device Manager service. This tutorial explains how you can use the **Security** option under **Device settings** to change your device administrator or StorSimple Snapshot Manager password.
storsimple Storsimple 8000 Chassis Replacement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-chassis-replacement.md
NA Previously updated : 06/05/2017 Last updated : 08/18/2022 # Replace the chassis on your StorSimple device++ ## Overview This tutorial explains how to remove and replace a chassis in a StorSimple 8000 series device. The StorSimple 8100 model is a single enclosure device (one chassis), whereas the 8600 is a dual enclosure device (two chassis). For an 8600 model, there are potentially two chassis that could fail in the device: the chassis for the primary enclosure or the chassis for the EBOD enclosure.
storsimple Storsimple 8000 Choose Storage Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-choose-storage-solution.md
Previously updated : 04/01/2019 Last updated : 08/18/2022 # Compare StorSimple with Azure File Sync and Azure Stack Edge data transfer options This document provides an overview of options for on-premises data transfer to Azure, comparing: Azure Stack Edge vs. Azure File Sync vs. StorSimple 8000 series.
storsimple Storsimple 8000 Clone Volume U2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-clone-volume-u2.md
ms.assetid: Previously updated : 07/15/2020 Last updated : 08/18/2022 # Use the StorSimple Device Manager service in Azure portal to clone a volume + ## Overview This tutorial describes how you can use a backup set to clone an individual volume via the **Backup catalog** blade. It also explains the difference between *transient* and *permanent* clones. The guidance in this tutorial applies to all the StorSimple 8000 series device running Update 3 or later.
storsimple Storsimple 8000 Cloud Appliance U2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-cloud-appliance-u2.md
NA Previously updated : 11/08/2017 Last updated : 08/18/2022 # Deploy and manage a StorSimple Cloud Appliance in Azure (Update 3 and later) ## Overview
storsimple Storsimple 8000 Configure Chap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-configure-chap.md
na Previously updated : 05/09/2018 Last updated : 08/18/2022 # Configure CHAP for your StorSimple device + This tutorial explains how to configure CHAP for your StorSimple device. The procedure detailed in this article applies to StorSimple 8000 series devices. CHAP stands for Challenge Handshake Authentication Protocol. It is an authentication scheme used by servers to validate the identity of remote clients. The verification is based on a shared password or secret. CHAP can be one way (unidirectional) or mutual (bidirectional). One way CHAP is when the target authenticates an initiator. In mutual or reverse CHAP, the target authenticates the initiator and then the initiator authenticates the target. Initiator authentication can be implemented without target authentication. However, target authentication can be implemented only if initiator authentication is also implemented.
storsimple Storsimple 8000 Configure Mpio Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-configure-mpio-windows-server.md
NA Previously updated : 03/26/2018 Last updated : 08/18/2022 # Configure Multipath I/O for your StorSimple device + This tutorial describes the steps you should follow to install and use the Multipath I/O (MPIO) feature on a host running Windows Server 2012 R2 and connected to a StorSimple physical device. The guidance in this article applies to StorSimple 8000 series physical devices only. MPIO is currently not supported on a StorSimple Cloud Appliance. Microsoft built support for the Multipath I/O (MPIO) feature in Windows Server to help build highly available, fault-tolerant iSCSI network configurations. MPIO uses redundant physical path components ΓÇö adapters, cables, and switches ΓÇö to create logical paths between the server and the storage device. If there is a component failure, causing a logical path to fail, multipathing logic uses an alternate path for I/O so that applications can still access their data. Additionally depending on your configuration, MPIO can also improve performance by rebalancing the load across all these paths. For more information, see [MPIO overview](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc725907(v=ws.11) "MPIO overview and features").
storsimple Storsimple 8000 Configure Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-configure-web-proxy.md
na Previously updated : 04/19/2017 Last updated : 08/18/2022 # Configure web proxy for your StorSimple device + ## Overview This tutorial describes how to use Windows PowerShell for StorSimple to configure and view web proxy settings for your StorSimple device. The web proxy settings are used by the StorSimple device when communicating with the cloud. A web proxy server is used to add another layer of security, filter content, cache to ease bandwidth requirements or even help with analytics.
storsimple Storsimple 8000 Contact Microsoft Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-contact-microsoft-support.md
description: Learn how to log support request and start a support session on you
Previously updated : 01/09/2018 Last updated : 08/18/2022 # Contact Microsoft Support + The StorSimple Device Manager provides the capability to **log a new support request** within the service summary blade. If you encounter any issues with your StorSimple solution, you can create a service request for technical support. In an online session with your support engineer, you may also need to start a support session on your StorSimple device. This article walks you through: * How to create a support request.
storsimple Storsimple 8000 Controller Replacement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-controller-replacement.md
NA Previously updated : 06/05/2017 Last updated : 08/18/2022 # Replace a controller module on your StorSimple device++ ## Overview This tutorial explains how to remove and replace one or both controller modules in a StorSimple device. It also discusses the underlying logic for the single and dual controller replacement scenarios.
storsimple Storsimple 8000 Create Manage Support Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-create-manage-support-package.md
description: Learn how to create, decrypt, and edit a support package for your S
Previously updated : 01/09/2018 Last updated : 08/18/2022 # Create and manage a support package for StorSimple 8000 series + ## Overview A StorSimple support package is an easy-to-use mechanism that collects all relevant logs to assist Microsoft Support with troubleshooting any StorSimple device issues. The collected logs are encrypted and compressed.
storsimple Storsimple 8000 Deactivate And Delete Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-deactivate-and-delete-device.md
na Previously updated : 07/23/2018 Last updated : 08/18/2022 # Deactivate and delete a StorSimple device + ## Overview This article describes how to deactivate and delete a StorSimple device that is connected to a StorSimple Device Manager service. The guidance in this article applies only to StorSimple 8000 series devices including the StorSimple Cloud Appliances. If you are using a StorSimple Virtual Array, then go to [Deactivate and delete a StorSimple Virtual Array](storsimple-virtual-array-deactivate-and-delete-device.md).
storsimple Storsimple 8000 Deployment Walkthrough Gov U2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-deployment-walkthrough-gov-u2.md
NA Previously updated : 06/22/2017 Last updated : 08/18/2022 # Deploy your on-premises StorSimple device in the Government portal ## Overview Welcome to Microsoft Azure StorSimple device deployment. These deployment tutorials apply to the StorSimple 8000 Series running Update 3 software or later in the Azure Government portal. This series of tutorials includes a configuration checklist, a list of configuration prerequisites, and detailed configuration steps for your StorSimple device.
storsimple Storsimple 8000 Deployment Walkthrough U2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-deployment-walkthrough-u2.md
description: Describes the steps and best practices for deploying the StorSimple
Previously updated : 04/23/2018 Last updated : 08/18/2022 # Deploy your on-premises StorSimple device (Update 3 and later) ## Overview Welcome to Microsoft Azure StorSimple device deployment. These deployment tutorials apply to StorSimple 8000 Series Update 3 or later. This series of tutorials includes a configuration checklist, configuration prerequisites, and detailed configuration steps for your StorSimple device.
storsimple Storsimple 8000 Device Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-device-dashboard.md
NA Previously updated : 07/03/2017 Last updated : 08/18/2022 # Use the device summary in StorSimple Device Manager service + ## Overview The StorSimple device summary blade gives you an overview of information for a specific StorSimple device, in contrast to the service summary blade, which gives you information about all the devices included in your Microsoft Azure StorSimple solution.
storsimple Storsimple 8000 Device Failover Cloud Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-device-failover-cloud-appliance.md
na Previously updated : 07/03/2017 Last updated : 08/18/2022 # Fail over to your StorSimple Cloud Appliance + ## Overview This tutorial describes the steps required to fail over a StorSimple 8000 series physical device to a StorSimple Cloud Appliance if there is a disaster. StorSimple uses the device failover feature to migrate data from a source physical device in the datacenter to a cloud appliance running in Azure. The guidance in this tutorial applies to StorSimple 8000 series physical devices and cloud appliances running software versions Update 3 and later.
storsimple Storsimple 8000 Device Failover Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-device-failover-disaster-recovery.md
na Previously updated : 05/03/2017 Last updated : 08/18/2022 # Failover and disaster recovery for your StorSimple 8000 series device + ## Overview This article describes the device failover feature for the StorSimple 8000 series devices and how this feature can be used to recover StorSimple devices if a disaster occurs. StorSimple uses device failover to migrate the data from a source device in the datacenter to another target device. The guidance in this article applies to StorSimple 8000 series physical devices and cloud appliances running software versions Update 3 and later.
storsimple Storsimple 8000 Device Failover Physical Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-device-failover-physical-device.md
na Previously updated : 05/03/2017 Last updated : 08/18/2022 # Fail over to a StorSimple 8000 series physical device + ## Overview This tutorial describes the steps required to fail over a StorSimple 8000 series physical device to another StorSimple physical device if there is a disaster. StorSimple uses the device failover feature to migrate data from a source physical device in the datacenter to another physical device. The guidance in this tutorial applies to StorSimple 8000 series physical devices running software versions Update 3 and later.
storsimple Storsimple 8000 Device Failover Same Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-device-failover-same-device.md
na Previously updated : 06/23/2017 Last updated : 08/18/2022 # Fail over your StorSimple physical device to same device + ## Overview This tutorial describes the steps required to fail over a StorSimple 8000 series physical device to itself if there is a disaster. StorSimple uses the device failover feature to migrate data from a source physical device in the datacenter to another physical device. The guidance in this tutorial applies to StorSimple 8000 series physical devices running software versions Update 3 and later.
storsimple Storsimple 8000 Device Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-device-modes.md
na Previously updated : 06/29/2017 Last updated : 08/18/2022 # Change the device mode on your StorSimple device + This article provides a brief description of the various modes in which your StorSimple device can operate. Your StorSimple device can function in three modes: normal, maintenance, and recovery. After reading this article, you will know:
storsimple Storsimple 8000 Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-diagnostics.md
na Previously updated : 01/09/2018 Last updated : 08/18/2022 # Use the StorSimple Diagnostics Tool to troubleshoot 8000 series device issues + ## Overview The StorSimple Diagnostics tool diagnoses issues related to system, performance, network, and hardware component health for a StorSimple device. The diagnostics tool can be used in various scenarios. These scenarios include workload planning, deploying a StorSimple device, assessing the network environment, and determining the performance of an operational device. This article provides an overview of the diagnostics tool and describes how the tool can be used with a StorSimple device.
storsimple Storsimple 8000 Disk Drive Replacement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-disk-drive-replacement.md
NA Previously updated : 8/25/2017 Last updated : 08/18/2022 # Replace a disk drive on your StorSimple 8000 series device + ## Overview This tutorial explains how you can remove and replace a malfunctioning or failed hard disk drive on a Microsoft Azure StorSimple device. To replace a disk drive, you need to:
storsimple Storsimple 8000 Ebod Controller Replacement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-ebod-controller-replacement.md
NA Previously updated : 06/02/2017 Last updated : 08/18/2022 # Replace an EBOD controller on your StorSimple device + ## Overview This tutorial explains how to replace a faulty EBOD controller module on your Microsoft Azure StorSimple device. To replace an EBOD controller module, you need to:
storsimple Storsimple 8000 Hardware Component Replacement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-hardware-component-replacement.md
NA Previously updated : 06/02/2017 Last updated : 08/18/2022 # Replace a hardware component on your StorSimple 8000 series device + ## Overview The hardware component replacement tutorials describe the hardware components of your Microsoft Azure StorSimple 8000 series device and the steps necessary to remove and replace them. This article describes the safety icons, provides pointers to the detailed tutorials, and lists the components that are replaceable.
storsimple Storsimple 8000 Install Update 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-install-update-4.md
NA Previously updated : 08/02/2017 Last updated : 08/18/2022 # Install Update 4 on your StorSimple device + ## Overview This tutorial explains how to install Update 4 on a StorSimple device running an earlier software version via the Azure portal and using the hotfix method. The hotfix method is used when a gateway is configured on a network interface other than DATA 0 of the StorSimple device and you are trying to update from a pre-Update 1 software version.
storsimple Storsimple 8000 Install Update 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-install-update-5.md
NA Previously updated : 11/13/2017 Last updated : 08/18/2022
-# Install Update 5 on your StorSimple device
+# Install Update 5 on your StorSimple device\\
+ ## Overview
storsimple Storsimple 8000 Install Update 51 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-install-update-51.md
NA Previously updated : 04/21/2021 Last updated : 08/18/2022 # Install Update 5.1 on your StorSimple device + ## Overview This tutorial explains how to install Update 5.1 on a StorSimple device running an earlier software version via the Azure portal or the hotfix method.
storsimple Storsimple 8000 Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-limits.md
NA Previously updated : 03/28/2017 Last updated : 08/18/2022 # What are StorSimple 8000 series system limits? ## Overview
storsimple Storsimple 8000 Manage Acrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-acrs.md
na Previously updated : 05/31/2017 Last updated : 08/18/2022 # Use the StorSimple Manager service to manage access control records + ## Overview Access control records (ACRs) allow you to specify which hosts can connect to a volume on the StorSimple device. ACRs are set to a specific volume and contain the iSCSI Qualified Names (IQNs) of the hosts. When a host tries to connect to a volume, the device checks the ACR associated with that volume for the IQN name and if there is a match, then the connection is established. The access control records in the **Configuration** section of your StorSimple Device Manager service blade display all the access control records with the corresponding IQNs of the hosts.
storsimple Storsimple 8000 Manage Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-alerts.md
description: Describes StorSimple alert conditions and severity, how to configur
Previously updated : 03/14/2019 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to view and manage StorSimple alerts + ## Overview The **Alerts** blade in the StorSimple Device Manager service provides a way for you to review and clear StorSimple deviceΓÇôrelated alerts on a real-time basis. From this blade, you can centrally monitor the health issues of your StorSimple devices and the overall Microsoft Azure StorSimple solution.
storsimple Storsimple 8000 Manage Backup Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-backup-catalog.md
NA Previously updated : 06/29/2017 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to manage your backup catalog++ ## Overview The StorSimple Device Manager service **Backup Catalog** blade displays all the backup sets that are created when manual or scheduled backups are taken. You can use this page to list all the backups for a backup policy or a volume, select or delete backups, or use a backup to restore or clone a volume.
storsimple Storsimple 8000 Manage Backup Policies U2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-backup-policies-u2.md
NA Previously updated : 09/15/2021 Last updated : 08/18/2022 # Use the StorSimple Device Manager service in Azure portal to manage backup policies + ## Overview
storsimple Storsimple 8000 Manage Bandwidth Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-bandwidth-templates.md
na Previously updated : 06/29/2017 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to manage StorSimple bandwidth templates + ## Overview Bandwidth templates allow you to configure network bandwidth usage across multiple time-of-day schedules to tier the data from the StorSimple device to the cloud.
storsimple Storsimple 8000 Manage Device Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-device-controller.md
na Previously updated : 06/19/2017 Last updated : 08/18/2022 # Manage your StorSimple device controllers + ## Overview This tutorial describes the different operations that can be performed on your StorSimple device controllers. The controllers in your StorSimple device are redundant (peer) controllers in an active-passive configuration. At a given time, only one controller is active and is processing all the disk and network operations. The other controller is in a passive mode. If the active controller fails, the passive controller automatically becomes active.
storsimple Storsimple 8000 Manage Jobs U2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-jobs-u2.md
NA Previously updated : 06/29/2017 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to view and manage jobs (Update 3 and later) + ## Overview The **Jobs** blade provides a single central portal for viewing and managing jobs that were started on devices connected to your StorSimple Device Manager service. You can view scheduled, running, completed, canceled, and failed jobs for multiple devices. Results are presented in a tabular format.
storsimple Storsimple 8000 Manage Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-service.md
na Previously updated : 05/09/2018 Last updated : 08/18/2022 # Deploy the StorSimple Device Manager service for StorSimple 8000 series devices ## Overview
storsimple Storsimple 8000 Manage Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-storage-accounts.md
NA Previously updated : 06/29/2017 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to manage your storage account credentials + ## Overview The **Configuration** section in the StorSimple Device Manager service blade presents all the global service parameters that can be created in the StorSimple Device Manager service. These parameters can be applied to all the devices connected to the service, and include:
storsimple Storsimple 8000 Manage Volume Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-volume-containers.md
NA Previously updated : 07/16/2021 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to manage StorSimple volume containers + ## Overview This tutorial explains how to use the StorSimple Device Manager service to create and manage StorSimple volume containers.
storsimple Storsimple 8000 Manage Volumes U2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manage-volumes-u2.md
description: Explains how to add, modify, monitor, and delete StorSimple volumes
Previously updated : 01/05/2022 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to manage volumes (Update 3 or later) + ## Overview This tutorial explains how to use the StorSimple Device Manager service to create and manage volumes on the StorSimple 8000 series devices running Update 3 and later.
storsimple Storsimple 8000 Manager Service Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-manager-service-administration.md
na Previously updated : 03/17/2021 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to administer your StorSimple device + ## Overview This article describes the StorSimple Device Manager service interface, including how to connect to it, the various options available, and links out to the specific workflows that can be performed via this UI. This guidance is applicable to both; the StorSimple physical device and the cloud appliance.
storsimple Storsimple 8000 Migrate Classic Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-migrate-classic-azure-portal.md
na Previously updated : 03/14/2019 Last updated : 08/18/2022 # Migrate subscriptions and storage accounts associated with StorSimple Device Manager service + You may need to move your StorSimple service to a new enrollment or to a new subscription. These migration scenarios are either account changes or datacenter changes. Use the following table to understand which of these scenarios are supported including the detailed steps to move. ## Account changes
storsimple Storsimple 8000 Migrate From 5000 7000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-migrate-from-5000-7000.md
NA Previously updated : 09/25/2020 Last updated : 08/18/2022 # Migrate data from StorSimple 5000-7000 series to 8000 series device + > [!IMPORTANT] > - On July 31, 2019 the StorSimple 5000/7000 series will reach end of support (EOS) status. We recommend that StorSimple 5000/7000 series customers migrate to one of the alternatives described in the document. > - Migration is currently an assisted operation. If you intend to migrate data from your StorSimple 5000-7000 series device to an 8000 series device, you need to schedule migration with Microsoft Support. Microsoft Support will then enable your subscription for migration. For more information, see how to [Open a Support ticket](storsimple-8000-contact-microsoft-support.md).
storsimple Storsimple 8000 Migration From 8000 Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-migration-from-8000-options.md
Previously updated : 12/08/2021 Last updated : 08/18/2022 # Options to migrate data from StorSimple 8000 series
-> [!IMPORTANT]
-> In December 2022, the StorSimple 8000 series will reach the [end of its extended support](/lifecycle/products/azure-storsimple-8000-series). Microsoft will no longer support hardware and software of these devices, and the cloud service will be discontinued.</br></br>
-> Data loss! You will lose the ability to interpret the proprietary StorSimple data format. You must migrate your data before December 2022 or you will lose access.
## Migration options
storsimple Storsimple 8000 Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-migration-options.md
NA Previously updated : 09/02/2021 Last updated : 08/18/2022
-# Options to migrate data from StorSimple 5000-7000 series
+# Options to migrate data from StorSimple 5000-7000 series
+ > [!IMPORTANT] > On July 9, 2019 the StorSimple 5000/7000 series will reach end of support (EOS) status. We recommend that StorSimple 5000/7000 series customers migrate to one of the alternatives described in the document.
storsimple Storsimple 8000 Modify Data 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-modify-data-0.md
na Previously updated : 03/27/2017 Last updated : 08/18/2022 # Modify the DATA 0 network interface settings on your StorSimple 8000 series device + ## Overview Your Microsoft Azure StorSimple device has six network interfaces, from DATA 0 to DATA 5. The DATA 0 interface is always configured through the Windows PowerShell interface or the serial console, and is automatically cloud-enabled. Note that you cannot configure the DATA 0 network interface through the Azure portal.
storsimple Storsimple 8000 Modify Device Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-modify-device-config.md
NA Previously updated : 09/28/2017 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to modify your StorSimple device configuration + ## Overview The Azure portal **Device settings** section in the **Settings** blade contains all the device parameters that you can reconfigure on a StorSimple device that is managed by a StorSimple Device Manager service. This tutorial explains how you can use the **Settings** blade to perform the following device-level tasks:
storsimple Storsimple 8000 Monitor Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-monitor-device.md
description: Describes how to use the StorSimple Device Manager service to monit
Previously updated : 10/17/2017 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to monitor your StorSimple device + ## Overview You can use the StorSimple Device Manager service to monitor specific devices within your StorSimple solution. You can create custom charts based on I/O performance, capacity utilization, network throughput, and device performance metrics and pin those to the dashboard. For more information, go to [customize your portal dashboard](../azure-portal/azure-portal-dashboards.md).
storsimple Storsimple 8000 Monitor Hardware Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-monitor-hardware-status.md
na Previously updated : 08/15/2018 Last updated : 08/18/2022 # Use the StorSimple Device Manager service to monitor hardware components and status + ## Overview This article describes the various physical and logical components in your on-premises StorSimple 8000 series device. It also explains how to monitor the device component status by using the **Status and hardware health** blade in the StorSimple Device Manager service.
storsimple Storsimple 8000 Power Cooling Module Replacement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-power-cooling-module-replacement.md
NA Previously updated : 06/02/2017 Last updated : 08/18/2022 # Replace a Power and Cooling Module on your StorSimple device++ ## Overview The Power and Cooling Module (PCM) in your Microsoft Azure StorSimple device consists of a power supply and cooling fans that are controlled through the primary and EBOD enclosures. There is only one model of PCM that is certified for each enclosure. The primary enclosure is certified for a 764 W PCM and the EBOD enclosure is certified for a 580 W PCM. Although the PCMs for the primary enclosure and the EBOD enclosure are different, the replacement procedure is identical.
storsimple Storsimple 8000 Remote Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-remote-connect.md
description: Explains how to configure your device for remote management and how
Previously updated : 01/02/2018 Last updated : 08/18/2022 # Connect remotely to your StorSimple 8000 series device + ## Overview You can remotely connect to your device via Windows PowerShell. When you connect this way, you do not see a menu. (You see a menu only if you use the serial console on the device to connect.) With Windows PowerShell remoting, you connect to a specific runspace. You can also specify the display language.
storsimple Storsimple 8000 Restore From Backup Set U2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-restore-from-backup-set-u2.md
ms.assetid: Previously updated : 07/15/2020 Last updated : 08/18/2022 # Restore a StorSimple volume from a backup set + ## Overview This tutorial describes the restore operation performed on a StorSimple 8000 series device using an existing backup set. Use the **Backup catalog** blade to restore a volume from a local or cloud backup. The **Backup catalog** blade displays all the backup sets that are created when manual or automated backups are taken. The restore operation from a backup set brings the volume online immediately while data is downloaded in the background.
storsimple Storsimple 8000 Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-role-based-access-control.md
na Previously updated : 10/11/2017 Last updated : 08/18/2022 # Azure role-based access control for StorSimple + This article provides a brief description of how Azure role-based access control (Azure RBAC) can be used for your StorSimple device. Azure RBAC offers fine-grained access management for Azure. Use Azure RBAC to grant just the right amount of access to the StorSimple users to do their jobs instead of giving everyone unrestricted access. For more information on the basics of access management in Azure, see [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). This article applies to StorSimple 8000 series devices running Update 3.0 or later in the Azure portal.
storsimple Storsimple 8000 Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-safety.md
na Previously updated : 04/04/2017 Last updated : 08/18/2022 # Safely install and operate your StorSimple device++ ![Warning Icon](./media/storsimple-safety/IC740879.png) ![Read Safety Notice Icon](./media/storsimple-safety/IC740885.png) **READ SAFETY AND HEALTH INFORMATION**
storsimple Storsimple 8000 Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-security.md
NA Previously updated : 05/18/2018 Last updated : 08/18/2022 # StorSimple security and data protection ## Overview
storsimple Storsimple 8000 Service Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-service-dashboard.md
na Previously updated : 03/27/2017 Last updated : 08/18/2022 # Use the service summary blade for StorSimple 8000 series device + ## Overview The StorSimple Device Manager service summary blade provides a summary view of all the devices that are connected to the StorSimple Device Manager service, highlighting those devices that need a system administrator's attention. This tutorial introduces the service summary blade, explains the dashboard content and function, and describes the tasks that you can perform from this page.
storsimple Storsimple 8000 Support Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-support-options.md
description: Describes support options for StorSimple 8000 series enterprise sto
Previously updated : 05/18/2022 Last updated : 08/18/2022 # StorSimple solution support + ## StorSimple support Microsoft offers flexible support options for StorSimple enterprise storage customers. We're deeply committed to delivering a high-quality support experience that allows you to maximize the impact of your investment in the StorSimple solution and Microsoft Azure. As a StorSimple customer, you receive:
storsimple Storsimple 8000 System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-system-requirements.md
NA Previously updated : 02/11/2021 Last updated : 08/18/2022
## Overview Welcome to Microsoft Azure StorSimple. This article describes important system requirements and best practices for your StorSimple device and for the storage clients accessing the device. We recommend that you review the information carefully before you deploy your StorSimple system, and then refer back to it as necessary during deployment and subsequent operation.
storsimple Storsimple 8000 Technical Specifications And Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-technical-specifications-and-compliance.md
NA Previously updated : 06/02/2017 Last updated : 08/18/2022
## Overview The hardware components of your Microsoft Azure StorSimple device adhere to the technical specifications and regulatory standards outlined in this article. The technical specifications describe the Power and Cooling Modules (PCMs), disk drives, storage capacity, and enclosures. The compliance information covers such things as international standards, safety and emissions, and cabling.
storsimple Storsimple 8000 Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-troubleshoot-deployment.md
NA Previously updated : 01/25/2021 Last updated : 08/18/2022 # Troubleshoot StorSimple device deployment issues++ ## Overview This article provides helpful troubleshooting guidance for your Microsoft Azure StorSimple deployment. It describes common issues, possible causes, and recommended steps to help you resolve problems that you might experience when you configure StorSimple.
storsimple Storsimple 8000 Windows Powershell Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-windows-powershell-administration.md
description: Learn how to use Windows PowerShell for StorSimple to manage your S
Previously updated : 01/25/2021 Last updated : 08/18/2022 # Use Windows PowerShell for StorSimple to administer your device + ## Overview Windows PowerShell for StorSimple provides a command-line interface that you can use to manage your Microsoft Azure StorSimple device. As the name suggests, it is a Windows PowerShell-based, command-line interface that is built in a constrained runspace. From the perspective of the user at the command line, a constrained runspace appears as a restricted version of Windows PowerShell. While maintaining some of the basic capabilities of Windows PowerShell, this interface has other, dedicated cmdlets that are geared towards managing your Microsoft Azure StorSimple device.
storsimple Storsimple Data Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-data-manager-overview.md
NA Previously updated : 03/18/2022 Last updated : 08/18/2022 # StorSimple Data Manager overview ## Overview
storsimple Storsimple Ova Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-ova-best-practices.md
NA Previously updated : 07/25/2019 Last updated : 08/19/2022 # StorSimple Virtual Array best practices
-## Overview
+## Overview
Microsoft Azure StorSimple Virtual Array is an integrated storage solution that manages storage tasks between an on-premises virtual device running in a hypervisor and Microsoft Azure cloud storage. StorSimple Virtual Array is an efficient, cost-effective alternative to the 8000 series physical array. The virtual array can run on your existing hypervisor infrastructure, supports both the iSCSI and the SMB protocols, and is well-suited for remote office/branch office scenarios. For more information on the StorSimple solutions, go to [Microsoft Azure StorSimple Overview](https://www.microsoft.com/en-us/server-cloud/products/storsimple/overview.aspx).
storsimple Storsimple Ova Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-ova-limits.md
NA Previously updated : 07/25/2019 Last updated : 08/19/2022 # What are StorSimple Virtual Array limits?
-## Overview
+## Overview
Consider these limits as you plan, deploy, and operate your Microsoft Azure StorSimple Virtual Array. The following table describes these limits for the virtual device.
storsimple Storsimple Ova Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-ova-overview.md
ms.assetid: 169c639b-1124-46a5-ae69-ba9695525b77 Previously updated : 03/08/2021 Last updated : 08/18/2022 # Introduction to the StorSimple Virtual Array ## Overview
storsimple Storsimple Ova System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-ova-system-requirements.md
ms.assetid: ea1d3bca-e71b-453d-aa82-440d2638f5e3 Previously updated : 07/25/2019 Last updated : 08/19/2022 # StorSimple Virtual Array system requirements ## Overview
storsimple Storsimple Ova Update 01 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-ova-update-01-release-notes.md
NA Previously updated : 06/16/2016 Last updated : 08/19/2022 # StorSimple Virtual Array Update 0.2 and 0.1 release notes++ ## Overview The following release notes identify the critical open issues and the resolved issues for Microsoft Azure StorSimple Virtual Array updates. (Microsoft Azure StorSimple Virtual Array is also known as the StorSimple on-premises virtual device or the StorSimple virtual device.)
storsimple Storsimple Ova Update 03 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-ova-update-03-release-notes.md
NA Previously updated : 09/15/2016 Last updated : 08/19/2022 # StorSimple Virtual Array Update 0.3 release notes++ ## Overview The following release notes identify the critical open issues and the resolved issues for Microsoft Azure StorSimple Virtual Array updates.
storsimple Storsimple Ova Web Ui Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-ova-web-ui-admin.md
NA Previously updated : 12/1/2016 Last updated : 08/19/2022 # Use the Web UI to administer your StorSimple Virtual Array++ ![setup process flow](./media/storsimple-ova-web-ui-admin/manage4.png) ## Overview
storsimple Storsimple Virtual Array Aad Registration Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-aad-registration-key.md
description: Learn about AAD authentication, the associated new service registra
Previously updated : 07/25/2019 Last updated : 08/19/2022 # Use the new authentication for your StorSimple
-## Overview
+## Overview
The StorSimple Device Manager service runs in Microsoft Azure and connects to multiple StorSimple Virtual Arrays. To date, StorSimple Device Manager service has used an Access Control service (ACS) to authenticate the service to your StorSimple device. The ACS mechanism will be deprecated soon and replaced by an Azure Active Directory (AAD) authentication.
storsimple Storsimple Virtual Array Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-backup.md
NA Previously updated : 02/27/2017 Last updated : 08/19/2022 # Back up shares or volumes on your StorSimple Virtual Array + ## Overview The StorSimple Virtual Array is a hybrid cloud storage on-premises virtual device that can be configured as a file server or an iSCSI server. The virtual array allows the user to create scheduled and manual backups of all the shares or volumes on the device. When configured as a file server, it also allows item-level recovery. This tutorial describes how to create scheduled and manual backups and perform item-level recovery to restore a deleted file on your virtual array.
storsimple Storsimple Virtual Array Change Device Admin Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-change-device-admin-password.md
NA Previously updated : 02/27/2017 Last updated : 08/19/2022 # Change the StorSimple Virtual Array device administrator password via StorSimple Device Manager + ## Overview When you use the Windows PowerShell interface to access the StorSimple Virtual Array, you are required to enter a device administrator password. When the StorSimple device is first provisioned and started, the default password is *Password1*. For the security of your data, the default password expires the first time that you sign in and you are required to change this password.
storsimple Storsimple Virtual Array Clone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-clone.md
NA Previously updated : 11/21/2016 Last updated : 08/19/2022 # Clone from a backup of your StorSimple Virtual Array + ## Overview This article describes step-by-step how to clone a backup set of your shares or volumes on your Microsoft Azure StorSimple Virtual Array. The cloned backup is used to recover a deleted or lost file. The article also includes detailed steps to perform an item-level recovery on your StorSimple Virtual Array configured as a file server.
storsimple Storsimple Virtual Array Deactivate And Delete Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-deactivate-and-delete-device.md
na Previously updated : 04/27/2021 Last updated : 08/19/2022 # Deactivate and delete a StorSimple Virtual Array + ## Overview When you deactivate a StorSimple Virtual Array, you break the connection between the device and the corresponding StorSimple Device Manager service. This tutorial explains how to:
storsimple Storsimple Virtual Array Deploy1 Portal Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-deploy1-portal-prep.md
ms.assetid: 68a4cfd3-94c9-46cb-805c-46217290ce02 Previously updated : 07/25/2019 Last updated : 08/19/2022 # Deploy StorSimple Virtual Array - Prepare the Azure portal + ![Diagram showing the steps that are needed to deploy a virtual array. The first step is labeled Get started and is highlighted.](./media/storsimple-virtual-array-deploy1-portal-prep/getstarted4.png) ## Overview - This is the first article in the series of deployment tutorials required to completely deploy your virtual array as a file server or an iSCSI server using the Resource Manager model. This article describes the preparation required to create and configure your StorSimple Device Manager service prior to provisioning a virtual array. This article also links out to a deployment configuration checklist and configuration prerequisites. You need administrator privileges to complete the setup and configuration process. We recommend that you review the deployment configuration checklist before you begin. The portal preparation takes less than 10 minutes.
storsimple Storsimple Virtual Array Deploy2 Provision Hyperv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-deploy2-provision-hyperv.md
NA Previously updated : 07/25/2019 Last updated : 08/19/2022 # Deploy StorSimple Virtual Array - Provision in Hyper-V++ ![Diagram showing the steps needed to deploy a virtual array. The first part of the second step is labeled Provision on Hyper-V and is highlighted.](./media/storsimple-virtual-array-deploy2-provision-hyperv/hyperv4.png) ## Overview - This tutorial describes how to provision a StorSimple Virtual Array on a host system running Hyper-V on Windows Server 2012 R2, Windows Server 2012, or Windows Server 2008 R2. This article applies to the deployment of StorSimple Virtual Arrays in Azure portal and Microsoft Azure Government Cloud. You need administrator privileges to provision and configure a virtual array. The provisioning and initial setup can take around 10 minutes to complete.
storsimple Storsimple Virtual Array Deploy2 Provision Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-deploy2-provision-vmware.md
ms.assetid: 0425b2a9-d36f-433d-8131-ee0cacef95f8 Previously updated : 07/25/2019 Last updated : 08/19/2022 # Deploy StorSimple Virtual Array - Provision in VMware++ ![Diagram showing the steps needed to deploy a virtual array. The second part of the second step is labeled Provision on VMware and is highlighted.](./media/storsimple-virtual-array-deploy2-provision-vmware/vmware4.png) ## Overview - This tutorial describes how to provision and connect to a StorSimple Virtual Array on a host system running VMware ESXi 5.0, 5.5, 6.0 or 6.5. This article applies to the deployment of StorSimple Virtual Arrays in Azure portal and the Microsoft Azure Government Cloud. You need administrator privileges to provision and connect to a virtual device. The provisioning and initial setup can take around 10 minutes to complete.
storsimple Storsimple Virtual Array Deploy3 Fs Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-deploy3-fs-setup.md
NA Previously updated : 07/25/2019 Last updated : 08/19/2022 # Deploy StorSimple Virtual Array - Set up as file server via Azure portal++ ![Diagram showing the steps needed to deploy a virtual array. The first part of the third step is labeled Set up as file server and is highlighted.](./media/storsimple-virtual-array-deploy3-fs-setup/fileserver4.png) ## Introduction - This article describes how to perform initial setup, register your StorSimple file server, complete the device setup, and create and connect to SMB shares. This is the last article in the series of deployment tutorials required to completely deploy your virtual array as a file server or an iSCSI server. The setup and configuration process can take around 10 minutes to complete. The information in this article applies only to the deployment of the StorSimple Virtual Array. For the deployment of StorSimple 8000 series devices, go to:
storsimple Storsimple Virtual Array Deploy3 Iscsi Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-deploy3-iscsi-setup.md
NA Previously updated : 07/25/2019 Last updated : 08/19/2022 # Deploy StorSimple Virtual Array - Set up as an iSCSI server via Azure portal + ![iscsi setup process flow](./media/storsimple-virtual-array-deploy3-iscsi-setup/iscsi4.png) ## Overview - This deployment tutorial applies to the Microsoft Azure StorSimple Virtual Array. This tutorial describes how to perform the initial setup, register your StorSimple iSCSI server, complete the device setup, and then create, mount, initialize, and format volumes on your StorSimple Virtual Array configured as an iSCSI server. The procedures described here take approximately 30 minutes to 1 hour to complete. The information published in this article applies to StorSimple Virtual Arrays only.
storsimple Storsimple Virtual Array Device Summary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-device-summary.md
na Previously updated : 11/29/2016 Last updated : 08/19/2022 # Use the device summary blade for StorSimple Device Manager connected to StorSimple Virtual Array + ## Overview The StorSimple Device Manager device blade provides a summary view of a StorSimple Virtual Array that is registered with a given StorSimple Device Manager, highlighting those device issues that need a system administrator's attention. This tutorial introduces the device summary blade, explains the content and function, and describes the tasks that you can perform from this blade.
storsimple Storsimple Virtual Array Diagnose Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-diagnose-problems.md
na Previously updated : 11/21/2016 Last updated : 08/19/2022 # Use the StorSimple Device Manager service to troubleshoot the StorSimple Virtual Array++ ## Overview The StorSimple Device Manager provides a **Diagnose and solve problems** setting within the service summary blade, which highlights some of the commonly occurring issues that can occur with your virtual array and how to solve them. This tutorial introduces the self-serve troubleshooting capability provided within the StorSimple Device Manager service.
storsimple Storsimple Virtual Array Failover Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-failover-dr.md
NA Previously updated : 02/27/2017 Last updated : 08/19/2022 # Disaster recovery and device failover for your StorSimple Virtual Array via Azure portal + ## Overview This article describes the disaster recovery for your Microsoft Azure StorSimple Virtual Array including the detailed steps to fail over to another virtual array. A failover allows you to move your data from a *source* device in the datacenter to a *target* device. The target device may be located in the same or a different geographical location. The device failover is for the entire device. During failover, the cloud data for the source device changes ownership to that of the target device.
storsimple Storsimple Virtual Array Install Update 04 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-install-update-04.md
NA Previously updated : 02/07/2017 Last updated : 08/19/2022 # Install Update 0.4 on your StorSimple Virtual Array + ## Overview This article describes the steps required to install Update 0.4 on your StorSimple Virtual Array via the local web UI and via the Azure portal. You need to apply software updates or hotfixes to keep your StorSimple Virtual Array up-to-date.
storsimple Storsimple Virtual Array Install Update 05 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-install-update-05.md
NA Previously updated : 05/10/2017 Last updated : 08/19/2022 # Install Update 0.5 on your StorSimple Virtual Array + ## Overview This article describes the steps required to install Update 0.5 on your StorSimple Virtual Array via the local web UI and via the Azure portal. You need to apply software updates or hotfixes to keep your StorSimple Virtual Array up-to-date.
storsimple Storsimple Virtual Array Install Update 06 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-install-update-06.md
NA Previously updated : 05/18/2017 Last updated : 08/19/2022 # Install Update 0.6 on your StorSimple Virtual Array + ## Overview This article describes the steps required to install Update 0.6 on your StorSimple Virtual Array via the local web UI and via the Azure portal. You apply the software updates or hotfixes to keep your StorSimple Virtual Array up-to-date.
storsimple Storsimple Virtual Array Install Update 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-install-update-1.md
NA Previously updated : 11/02/2017 Last updated : 08/19/2022 # Install Update 1.0 on your StorSimple Virtual Array + ## Overview This article describes the steps required to install Update 1.0 on your StorSimple Virtual Array via the local web UI and via the Azure portal.
storsimple Storsimple Virtual Array Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-install-update.md
NA Previously updated : 02/27/2017 Last updated : 08/19/2022 # Install Updates on your StorSimple Virtual Array - Azure portal + ## Overview This article describes the steps required to install updates on your StorSimple Virtual Array via the local web UI and via the Azure portal. You need to apply software updates or hotfixes to keep your StorSimple Virtual Array up-to-date.
storsimple Storsimple Virtual Array Log Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-log-support-ticket.md
na Previously updated : 11/21/2016 Last updated : 08/19/2022 # Use the StorSimple Device Manager service to log a Support request for the StorSimple Virtual Array + ## Overview The StorSimple Device Manager provides the capability to **log a new support request** within the service summary blade. This article explains how you can log a new support request and manage its lifecycle from within the portal.
storsimple Storsimple Virtual Array Manage Acrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-manage-acrs.md
na Previously updated : 02/27/2017 Last updated : 08/19/2022 # Use StorSimple Device Manager to manage access control records for StorSimple Virtual Array + ## Overview Access control records (ACRs) allow you to specify which hosts can connect to a volume on the StorSimple Virtual Array (also known as the StorSimple on-premises virtual device). ACRs are set to a specific volume and contain the iSCSI Qualified Names (IQNs) of the hosts. When a host tries to connect to a volume, the device checks the ACR associated with that volume for the IQN name, and if there is a match, then the connection is established. The **Access control records** blade within the **Configuration** section of your Device Manager service displays all the access control records with the corresponding IQNs of the hosts.
storsimple Storsimple Virtual Array Manage Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-manage-alerts.md
NA Previously updated : 01/12/2018 Last updated : 08/19/2022 # Use StorSimple Device Manager to manage alerts for the StorSimple Virtual Array + ## Overview The alerts feature in the StorSimple Device Manager service provides a way for you to review and clear alerts related to StorSimple Virtual Arrays on a real-time basis. You can use the alerts on the **Service summary** blade to centrally monitor the health issues of your StorSimple Virtual Arrays and the overall Microsoft Azure StorSimple solution.
storsimple Storsimple Virtual Array Manage Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-manage-jobs.md
NA Previously updated : 11/11/2016 Last updated : 08/19/2022 # Use the StorSimple Device Manager service to view jobs for the StorSimple Virtual Array++ ## Overview The **Jobs** blade provides a single central portal for viewing and managing jobs that are started on virtual arrays that are connected to your StorSimple Device Manager service. You can view running, completed, and failed jobs for multiple virtual devices. Results are presented in a tabular format.
storsimple Storsimple Virtual Array Manage Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-manage-service.md
na Previously updated : 07/25/2019 Last updated : 8/19/2022 # Deploy the StorSimple Device Manager service for StorSimple Virtual Array
-## Overview
+## Overview
The StorSimple Device Manager service runs in Microsoft Azure and connects to multiple StorSimple devices. After you create the service, you can use it to manage the devices from the Microsoft Azure portal running in a browser. This allows you to monitor all the devices that are connected to the StorSimple Device Manager service from a single, central location, thereby minimizing administrative burden.
storsimple Storsimple Virtual Array Manage Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-manage-shares.md
na Previously updated : 11/21/2016 Last updated : 08/19/2022 # Use the StorSimple Device Manager service to manage shares on the StorSimple Virtual Array + ## Overview This tutorial explains how to use the StorSimple Device Manager service to create and manage shares on your StorSimple Virtual Array.
storsimple Storsimple Virtual Array Manage Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-manage-storage-accounts.md
NA Previously updated : 02/27/2017 Last updated : 08/19/2022 # Use StorSimple Device Manager to manage storage account credentials for StorSimple Virtual Array + ## Overview The **Configuration** section of the StorSimple Device Manager service blade of your StorSimple Virtual Array presents the global service parameters that can be created in the StorSimple Manager service. These parameters can be applied to all the devices connected to the service, and include:
storsimple Storsimple Virtual Array Manage Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-manage-volumes.md
na Previously updated : 11/21/2016 Last updated : 08/19/2022 # Use StorSimple Device Manager service to manage volumes on the StorSimple Virtual Array + ## Overview This tutorial explains how to use the StorSimple Device Manager service to create and manage volumes on your StorSimple Virtual Array.
storsimple Storsimple Virtual Array Manager Service Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-manager-service-administration.md
na Previously updated : 03/17/2021 Last updated : 08/19/2022 # Use the StorSimple Device Manager service to administer your StorSimple Virtual Array++ ![setup process flow](./media/storsimple-virtual-array-manager-service-administration/manage4.png) ## Overview
storsimple Storsimple Virtual Array Service Summary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-service-summary.md
na Previously updated : 11/21/2016 Last updated : 08/19/2022 # Use the service summary blade for StorSimple Device Manager connected to StorSimple Virtual Array++ ## Overview The service summary blade for the StorSimple Device Manager provides a summary view of the StorSimple Virtual Arrays (also known as StorSimple on-premises virtual devices or virtual devices) that are connected to your service, highlighting those that need a system administrator's attention. This tutorial introduces the service summary blade, explains the content and function, and describes the tasks that you can perform from this blade.
storsimple Storsimple Virtual Array Update 04 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-update-04-release-notes.md
NA Previously updated : 04/05/2017 Last updated : 08/19/2022 # StorSimple Virtual Array Update 0.4 release notes + ## Overview The following release notes identify the critical open issues and the resolved issues for Microsoft Azure StorSimple Virtual Array updates.
storsimple Storsimple Virtual Array Update 05 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-update-05-release-notes.md
NA Previously updated : 05/08/2017 Last updated : 08/19/2022 # StorSimple Virtual Array Update 0.5 release notes + ## Overview The following release notes identify the critical open issues and the resolved issues for Microsoft Azure StorSimple Virtual Array updates.
storsimple Storsimple Virtual Array Update 06 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-update-06-release-notes.md
NA Previously updated : 05/24/2017 Last updated : 08/19/2022 # StorSimple Virtual Array Update 0.6 release notes + ## Overview The following release notes identify the critical open issues and the resolved issues for Microsoft Azure StorSimple Virtual Array updates.
storsimple Storsimple Virtual Array Update 1 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-update-1-release-notes.md
description: Describes critical open issues and resolutions for the StorSimple V
Previously updated : 11/02/2017 Last updated : 08/19/2022 # StorSimple Virtual Array Update 1.0 release notes + ## Overview The following release notes identify the critical open issues and the resolved issues for Microsoft Azure StorSimple Virtual Array updates.
storsimple Storsimple Virtual Array Update 12 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-update-12-release-notes.md
Previously updated : 05/29/2019 Last updated : 08/19/2022 # StorSimple Virtual Array Update 1.2 release notes + The following release notes identify the critical open issues and the resolved issues for Microsoft Azure StorSimple Virtual Array updates. The release notes are continuously updated. As critical issues requiring a workaround are discovered, they are added. Before you deploy your StorSimple Virtual Array, carefully review the information contained in the release notes.
storsimple Storsimple Virtual Array Update 13 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-virtual-array-update-13-release-notes.md
description: Describes critical open issues and resolutions for the Azure StorSi
Previously updated : 04/13/2021 Last updated : 08/19/2022 # StorSimple Virtual Array Update 1.3 release notes + The following release notes identify the critical open issues and the resolved issues for Microsoft Azure StorSimple Virtual Array updates. The release notes are continuously updated. As critical issues requiring a workaround are discovered, they are added. Before you deploy your StorSimple Virtual Array, carefully review the information contained in the release notes.
synapse-analytics Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![Segment](./media/data-integration/segment_logo.png) |**Segment**<br>Segment is a data management and analytics solution that helps you make sense of customer data coming from various sources. It allows you to connect your data to over 200 tools to create better decisions, products, and experiences. Segment will transform and load multiple data sources into your warehouse for you using its built-in data connectors|[Product page](https://segment.com/)<br> | | ![Skyvia](./media/data-integration/skyvia_logo.png) |**Skyvia (data integration)**<br>Skyvia data integration provides a wizard that automates data imports. This wizard allows you to migrate data between different kinds of sources - CRMs, application database, CSV files, and more. |[Product page](https://skyvia.com/)<br> | | ![SnapLogic](./media/data-integration/snaplogic_logo.png) |**SnapLogic**<br>The SnapLogic Platform enables customers to quickly transfer data into and out of an Azure Synapse data warehouse. It offers the ability to integrate hundreds of applications, services, and IoT scenarios in one solution.|[Product page](https://www.snaplogic.com/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/snaplogic.snaplogic-elastic-integration-windows)<br> |
-| ![SnowMirror](./media/data-integration/snowmirror-logo.png) |**SnowMirror by GuideVision**<br>SnowMirror is a smart data replication tool for ServiceNow. It loads data from a ServiceNow instance and stores it in an on-premise or cloud database. You can then use your replicated data for custom reporting and dashboards with tools like Power BI. Because your data is replicated, it reduces load on your ServiceNow cloud instance. It can be used for system integration, disaster recovery and more. SnowMirror can be used either on premises or in the cloud, and is compatible with all leading databases, including Microsoft SQL Server and Azure Synapse.|[Product page](https://www.snow-mirror.com/)|
+| ![SnowMirror](./media/data-integration/snowmirror-logo.png) |**SnowMirror by GuideVision**<br>SnowMirror is a smart data replication tool for ServiceNow. It loads data from a ServiceNow instance and stores it in an on-premises or cloud database. You can then use your replicated data for custom reporting and dashboards with tools like Power BI. Because your data is replicated, it reduces load on your ServiceNow cloud instance. It can be used for system integration, disaster recovery and more. SnowMirror can be used either on premises or in the cloud, and is compatible with all leading databases, including Microsoft SQL Server and Azure Synapse.|[Product page](https://www.snow-mirror.com/)|
| ![StreamSets](./media/data-integration/streamsets_logo.png) |**StreamSets**<br>StreamSets provides a data integration platform for DataOps. It operationalizes the full design-deploy-operate lifecycle of integrating data into an Azure Synapse data warehouse. You can quickly ingest and integrate data to and from the warehouse via streaming, batch, or changed data capture. Also, you can ensure continuous operations with smart data pipelines that provide end-to-end data flow visibility and resiliency.|[Product page](https://streamsets.com/partners/microsoft)| | ![Talend](./media/data-integration/talend-logo.png) |**Talend Cloud**<br>Talend Cloud is an enterprise data integration platform to connect, access, and transform any data across the cloud or on-premises. It's an integration platform-as-a-service that provides broad connectivity, built-in data quality, and native support for the latest big data and cloud technologies. |[Product page](https://www.talend.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/talend.talendremoteengine?source=datamarket&tab=Overview) | | ![Theobald](./media/data-integration/theobald-logo.png) |**Theobald Software**<br>Theobald Software has been offering various solutions for data integration with SAP since 2004. Secure, stable, fast and, if required, incremental access to all types of SAP data objects on SAP ERP, S/4, BW or BW/4 systems is their area of expertise; an expertise that has been officially certified by SAP and which more than 3,500 global customers are making use of. Their products, Xtract IS for Azure and Xtract Universal, are constantly improving and have evolved into SAP ETL/ELT solutions that seamlessly integrate with Microsoft Azure, where Synapse and Data Factory pipelines can be used to orchestrate SAP data extractions, while Azure Storage serves as a destination for SAP data ingestions. |[Product page](https://theobald-software.com/en/products-technologies/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/theobaldsoftwaregmbh.xtractisforazure) |
synapse-analytics Shared Databases Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/shared-databases-access-control.md
# How to set up access control on synchronized objects in serverless SQL pool In Azure Synapse Analytics, Spark [databases](../metadat) backed tables created with Spark are automatically available in serverless SQL pool. This feature allows using serverless SQL pool to explore and query data prepared by using Spark pools.
-On below diagram, you can see a high-level architecture overview to utilize this feature. First, Azure Synapse Pipelines are moving data from on-premise (or other) storage to Azure Data Lake Storage. Spark can now enrich the data, and create databases, and tables that are getting synchronized to serverless Synapse SQL. Later, user can execute ad-hoc queries on top of the enriched data or serve it to Power BI for example.
+In the diagram below, you can see a high-level architecture overview of this feature. First, Azure Synapse pipelines move data from on-premises (or other) storage to Azure Data Lake Storage. Spark then enriches the data and creates databases and tables that are synchronized to serverless Synapse SQL. Later, users can run ad hoc queries on top of the enriched data or serve it to Power BI, for example.
![Enrich in Spark, serve with SQL diagram.](./media/shared-databases-access-control/enrich-in-spark-serve-sql.png)
virtual-desktop App Attach File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-file-share.md
Here are some other things we recommend you do to optimize MSIX app attach perfo
- Separate the storage fabric for MSIX app attach from FSLogix profile containers. - All VM system accounts and user accounts must have read-only permissions to access the file share.-- Any disaster recovery plans for Azure Virtual Desktop must include replicating the MSIX app attach file share in your secondary failover location. To learn more about disaster recovery, see [Set up a business continuity and disaster recovery plan](disaster-recovery.md).
+- Any disaster recovery plans for Azure Virtual Desktop must include replicating the MSIX app attach file share in your secondary failover location. To learn more about disaster recovery, see [Set up a business continuity and disaster recovery plan](disaster-recovery.md). You'll also need to ensure your file share path is accessible in the secondary location. You can use [Distributed File System (DFS) Namespaces](/windows-server/storage/dfs-namespaces/dfs-overview) to provide a single share name across different file shares.
## How to set up the file share
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
To start creating your new host pool:
> ![A screenshot of the Azure portal showing the Location field with the East US location selected. Next to the field is text that says, "Metadata will be stored in East US."](media/portal-location-field.png) >[!NOTE]
- > If you want to create your host pool in [a supported region](data-locations.md) outside the US, you'll need to re-register the resource provider. After re-registering, you should see the other regions in the drop-down for selecting the location. Learn how to re-register at our [Host pool creation](troubleshoot-set-up-issues.md#i-only-see-us-when-setting-the-location-for-my-service-objects) troubleshooting article.
+ > If you want to create your host pool in [a supported region](data-locations.md) outside the US, you'll need to re-register the resource provider. After re-registering, you should see the other regions in the drop-down for selecting the location. Learn how to re-register at our [Host pool creation](troubleshoot-set-up-issues.md#i-dont-see-the-azure-region-i-want-to-use-when-selecting-the-location-for-my-service-objects) troubleshooting article.
8. Under Host pool type, select whether your host pool will be **Personal** or **Pooled**.
virtual-desktop Create Host Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-powershell.md
If you haven't already done so, follow the instructions in [Set up the PowerShel
Run the following cmdlet to sign in to the Azure Virtual Desktop environment: ```powershell
-New-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname> -WorkspaceName <workspacename> -HostPoolType <Pooled|Personal> -LoadBalancerType <BreadthFirst|DepthFirst|Persistent> -Location <region> -DesktopAppGroupName <appgroupname>
+New-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname> -WorkspaceName <workspacename> -HostPoolType <Pooled|Personal> -LoadBalancerType <BreadthFirst|DepthFirst|Persistent> -Location <region> -DesktopAppGroupName <appgroupname> -PreferredAppGroupType <appgrouptype>
``` This cmdlet will create the host pool, workspace and desktop app group. Additionally, it will register the desktop app group to the workspace. You can either create a workspace with this cmdlet or use an existing workspace.
virtual-desktop Diagnostics Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/diagnostics-log-analytics.md
Azure Virtual Desktop uses [Azure Monitor](../azure-monitor/overview.md) for mon
- Are users encountering any issues with specific activities? This feature can generate a table that tracks activity data for you as long as the information is joined with the activities. - Checkpoints: - Specific steps in the lifetime of an activity that were reached. For example, during a session, a user was load balanced to a particular host, then the user was signed on during a connection, and so on.
+- Agent Health Status:
+ - Monitor the health and status of the Azure Virtual Desktop agent installed on each session host. For example, verify that the agents are up to date, or whether the agent is in a healthy state and ready to accept new user sessions.
+- Connection Network Data:
+ - Track average network data for user sessions, including the estimated round trip time and available bandwidth throughout the connection.
Connections that don't reach Azure Virtual Desktop won't show up in diagnostics results because the diagnostics role service itself is part of Azure Virtual Desktop. Azure Virtual Desktop connection issues can happen when the user is experiencing network connectivity issues. Azure Monitor lets you analyze Azure Virtual Desktop data and review virtual machine (VM) performance counters, all within the same tool. This article will tell you more about how to enable diagnostics for your Azure Virtual Desktop environment. >[!NOTE]
->To learn how to monitor your VMs in Azure, see [Monitoring Azure virtual machines with Azure Monitor](../azure-monitor/vm/monitor-vm-azure.md). Also, make sure to [review the performance counter thresholds](../virtual-desktop/virtual-desktop-fall-2019/deploy-diagnostics.md#windows-performance-counter-thresholds) for a better understanding of your user experience on the session host.
+>To learn how to monitor your VMs in Azure, see [Monitoring Azure virtual machines with Azure Monitor](../azure-monitor/vm/monitor-vm-azure.md). Also, make sure to review the [Azure Monitor glossary](./azure-monitor-glossary.md) for a better understanding of your user experience on the session host.
## Before you get started
You can access Log Analytics workspaces on the Azure portal or Azure Monitor.
5. You are ready to query diagnostics. All diagnostics tables have a "WVD" prefix. >[!NOTE]
->For more detailed information about the tables stored in Azure Monitor Logs, see the [Azure Monitor data refence](/azure/azure-monitor/reference/). All tables related to Azure Virtual Desktop are labeled "WVD."
+>For more detailed information about the tables stored in Azure Monitor Logs, see the [Azure Monitor data reference](/azure/azure-monitor/reference/tables/tables-category#azure-virtual-desktop). All tables related to Azure Virtual Desktop are prefixed with "WVD."
## Cadence for sending diagnostic events
virtual-desktop Troubleshoot Set Up Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-issues.md
If your operation goes over the quota limit, you can do one of the following thi
**Fix**: You'll need to reassign users to app groups.
-### I only see US when setting the location for my service objects
+### I don't see the Azure region I want to use when selecting the location for my service objects
**Cause**: Azure doesn't currently support that region for the Azure Virtual Desktop service. To learn about which geographies we support, check out [Data locations](data-locations.md). If Azure Virtual Desktop supports the location but it still doesn't appear when you're trying to select a location, that means your resource provider hasn't updated yet.
virtual-machines Create Ssh Keys Detailed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-ssh-keys-detailed.md
Previously updated : 02/17/2021 Last updated : 08/18/2022
With a secure shell (SSH) key pair, you can create a Linux virtual machine that uses SSH keys for authentication. This article shows you how to create and use an SSH RSA public-private key file pair for SSH client connections.
-If you want quick commands, see [How to create an SSH public-private key pair for Linux VMs in Azure](mac-create-ssh-keys.md).
+If you want quick commands rather than a more in-depth explanation of SSH keys, see [How to create an SSH public-private key pair for Linux VMs in Azure](mac-create-ssh-keys.md).
To create SSH keys and use them to connect to a Linux VM from a **Windows** computer, see [How to use SSH keys with Windows on Azure](ssh-from-windows.md). You can also use the [Azure portal](../ssh-keys-portal.md) to create and manage SSH keys for creating VMs in the portal.
To create SSH keys and use them to connect to a Linux VM from a **Windows** comp
## SSH keys use and benefits
-When you create an Azure VM by specifying the public key, Azure copies the public key (in the `.pub` format) to the `~/.ssh/authorized_keys` folder on the VM. SSH keys in `~/.ssh/authorized_keys` are used to challenge the client to match the corresponding private key on an SSH connection. In an Azure Linux VM that uses SSH keys for authentication, Azure configures the SSHD server to not allow password sign-in, only SSH keys. By creating an Azure Linux VM with SSH keys, you can help secure the VM deployment and save yourself the typical post-deployment configuration step of disabling passwords in the `sshd_config` file.
+When you create an Azure VM by specifying the public key, Azure copies the public key (in the `.pub` format) to the `~/.ssh/authorized_keys` file on the VM. SSH keys in `~/.ssh/authorized_keys` ensure that connecting clients prove possession of the corresponding private key during an SSH connection. In an Azure Linux VM that uses SSH keys for authentication, Azure disables the SSH server's password authentication and allows only SSH key authentication. By creating an Azure Linux VM with SSH keys, you can help secure the VM deployment and save yourself the typical post-deployment configuration step of disabling passwords in the `sshd_config` file.
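For example, a minimal check (an illustrative sketch, assuming you can already sign in to the VM over SSH and have sudo rights) is to dump the effective SSH server settings and confirm that password authentication is off:

```bash
# Print the effective sshd configuration and filter the authentication settings.
# Requires sudo; option names appear in lowercase in the -T output.
sudo sshd -T | grep -Ei 'passwordauthentication|pubkeyauthentication'
```

On a VM deployed with SSH keys, you would typically expect `passwordauthentication no` and `pubkeyauthentication yes`.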
-If you do not wish to use SSH keys, you can set up your Linux VM to use password authentication. If your VM is not exposed to the Internet, using passwords may be sufficient. However, you still need to manage your passwords for each Linux VM and maintain healthy password policies and practices, such as minimum password length and regular updates.
+If you do not wish to use SSH keys, you can set up your Linux VM to use password authentication. If your VM is not exposed to the Internet, using passwords may be sufficient. However, you still need to manage your passwords for each Linux VM and maintain healthy password policies and practices, such as minimum password length and regular system updates.
## Generate keys with ssh-keygen
-To create the keys, a preferred command is `ssh-keygen`, which is available with OpenSSH utilities in the Azure Cloud Shell, a macOS or Linux host, and Windows 10. `ssh-keygen` asks a series of questions and then writes a private key and a matching public key.
+To create the keys, a preferred command is `ssh-keygen`, which is available with OpenSSH utilities in the Azure Cloud Shell, a macOS or Linux host, and Windows (10 & 11). `ssh-keygen` asks a series of questions and then writes a private key and a matching public key.
SSH keys are by default kept in the `~/.ssh` directory. If you do not have a `~/.ssh` directory, the `ssh-keygen` command creates it for you with the correct permissions. An SSH key is created as a resource and stored in Azure for later use.
SSH keys are by default kept in the `~/.ssh` directory. If you do not have a `~
### Basic example
-The following `ssh-keygen` command generates 4096-bit SSH RSA public and private key files by default in the `~/.ssh` directory. If an SSH key pair exists in the current location, those files are overwritten.
+The following `ssh-keygen` command generates 4096-bit SSH RSA public and private key files by default in the `~/.ssh` directory. If an existing SSH key pair is found in the current location, those files are overwritten.
```bash ssh-keygen -m PEM -t rsa -b 4096
The key pair name for this article. Having a key pair named `id_rsa` is the defa
#### List of the `~/.ssh` directory
+To view existing files in the `~/.ssh` directory, run the following command. If no files are found in the directory or the directory itself is missing, make sure that all previous commands were successfully run. You may require root access to modify files in this directory on certain Linux distributions.
+ ```bash ls -al ~/.ssh -rw- 1 azureuser staff 1675 Aug 25 18:04 id_rsa
It is *strongly* recommended to add a passphrase to your private key. Without a
## Generate keys automatically during deployment
-If you use the [Azure CLI](/cli/azure) to create your VM, you can optionally generate SSH public and private key files by running the [az vm create](/cli/azure/vm) command with the `--generate-ssh-keys` option. The keys are stored in the ~/.ssh directory. Note that this command option does not overwrite keys if they already exist in that location.
+If you use the [Azure CLI](/cli/azure) to create your VM, you can optionally generate both public and private SSH key files by running the [az vm create](/cli/azure/vm) command with the `--generate-ssh-keys` option. The keys are stored in the ~/.ssh directory. Note that this command option does not overwrite keys if they already exist in that location, such as with some pre-configured Compute Gallery images.
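As a hedged illustration of that option (the resource group, VM name, and image alias below are placeholders, not values from the article):

```bash
# Create a VM; if no key pair exists in ~/.ssh, the CLI generates one
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```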
## Provide SSH public key when deploying a VM
If you're not familiar with the format of an SSH public key, you can see your pu
cat ~/.ssh/id_rsa.pub ```
-Output is similar to the following (here redacted):
+Output is similar to the following (redacted example below):
``` ssh-rsa XXXXXXXXXXc2EAAAADAXABAAABAXC5Am7+fGZ+5zXBGgXS6GUvmsXCLGc7tX7/rViXk3+eShZzaXnt75gUmT1I2f75zFn2hlAIDGKWf4g12KWcZxy81TniUOTjUsVlwPymXUXxESL/UfJKfbdstBhTOdy5EG9rYWA0K43SJmwPhH28BpoLfXXXXXG+/ilsXXXXXKgRLiJ2W19MzXHp8z3Lxw7r9wx3HaVlP4XiFv9U4hGcp8RMI1MP1nNesFlOBpG4pV2bJRBTXNXeY4l6F8WZ3C4kuf8XxOo08mXaTpvZ3T1841altmNTZCcPkXuMrBjYSJbA8npoXAXNwiivyoe3X2KMXXXXXdXXXXXXXXXXCXXXXX/ azureuser@myserver
If you copy and paste the contents of the public key file into the Azure portal or a Resource Manager template, make sure you don't copy any additional whitespace or introduce additional line breaks. For example, if you use macOS, you can pipe the public key file (by default, `~/.ssh/id_rsa.pub`) to **pbcopy** to copy the contents (there are other Linux programs that do the same thing, such as `xclip`).
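For example, copying the key to the clipboard without stray whitespace, assuming the default public key path:

```bash
# macOS: copy the public key to the clipboard
pbcopy < ~/.ssh/id_rsa.pub

# Linux with X11: the xclip equivalent
xclip -selection clipboard < ~/.ssh/id_rsa.pub
```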
-If you prefer to use a public key that is in a multiline format, you can generate an RFC4716 formatted key in a pem container from the public key you previously created.
+If you prefer to use a public key that is in a multiline format, you can generate an RFC4716 formatted key in a 'pem' container from the public key you previously created.
To create a RFC4716 formatted key from an existing SSH public key:
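A minimal sketch of that conversion, assuming the default public key path and an arbitrary output file name:

```bash
# Export the existing public key in RFC4716 (multiline) format
ssh-keygen -f ~/.ssh/id_rsa.pub -e -m RFC4716 > ~/.ssh/id_ssh2.pem
```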
If the VM is using the just-in-time access policy, you need to request access be
## Use ssh-agent to store your private key passphrase
-To avoid typing your private key file passphrase with every SSH sign-in, you can use `ssh-agent` to cache your private key file passphrase. If you are using a Mac, the macOS Keychain securely stores the private key passphrase when you invoke `ssh-agent`.
+To avoid typing your private key file passphrase with every SSH sign-in, you can use `ssh-agent` to cache your private key file passphrase on your local system. If you are using a Mac, the macOS Keychain securely stores the private key passphrase when you invoke `ssh-agent`.
Verify and use `ssh-agent` and `ssh-add` to inform the SSH system about the key files so that you do not need to use the passphrase interactively.
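A minimal sequence for doing that, assuming the default private key path `~/.ssh/id_rsa`:

```bash
# Start ssh-agent in the current shell session
eval "$(ssh-agent -s)"

# Add the private key to the agent; the passphrase is requested only once
ssh-add ~/.ssh/id_rsa
```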
Edit the file to add the new SSH configuration
vim ~/.ssh/config ```
-Add configuration settings appropriate for your host VM. In this example, the VM name is *myvm* and the account name is *azureuser*.
+Add configuration settings appropriate for your host VM. In this example, the VM name (Host) is *myvm*, the account name (User) is *azureuser*, and the IP address or FQDN (Hostname) is 192.168.0.255.
```bash # Azure Keys Host myvm
- Hostname 102.160.203.241
+ Hostname 192.168.0.255
User azureuser # ./Azure Keys ``` You can add configurations for additional hosts to enable each to use its own dedicated key pair. See [SSH config file](https://www.ssh.com/ssh/config/) for more advanced configuration options.
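As a hedged sketch, an additional host entry with its own dedicated key file might look like the following (the host name, address, and key path are placeholders, not values from the article):

```bash
# A second VM entry that uses its own dedicated key pair
Host myvm2
    Hostname 192.168.0.10
    User azureuser
    IdentityFile ~/.ssh/id_rsa_myvm2
```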
-Now that you have an SSH key pair and a configured SSH config file, you are able to sign in to your Linux VM quickly and securely. When you run the following command, SSH locates and loads any settings from the `Host myvm` block in the SSH config file.
+Now that you have an SSH key pair and a configured SSH config file, you are able to remotely access your Linux VM quickly and securely. When you run the following command, SSH locates and loads any settings from the `Host myvm` block in the SSH config file.
```bash ssh myvm
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
VM Generation Support: Generation 1<br>
**Q:** Do I need to use NP VMs to develop my solution?
-**A:** No, you can develop on-premise and deploy to the cloud. Please make sure to follow the [attestation documentation](./field-programmable-gate-arrays-attestation.md) to deploy on NP VMs.
+**A:** No, you can develop on-premises and deploy to the cloud. Please make sure to follow the [attestation documentation](./field-programmable-gate-arrays-attestation.md) to deploy on NP VMs.
**Q:** Which file returned from attestation should I use when programming my FPGA in an NP VM?
virtual-machines Oracle Database Backup Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-strategies.md
To learn more about using Azure NetApp Files for Oracle Databases on Azure, read
## Azure Backup service
-[The Azure Backup](../../../backup/backup-overview.md) is a fully managed PaaS that provides simple, secure, and cost-effective solutions to back up your data and recover it from the Microsoft Azure cloud. Azure Backup can back up and restore on-premise clients, Azure VMΓÇÖs, Azure Files shares, as well as SQL Server, Oracle, MySQL, PostreSQL, and SAP HANA databases on Azure VMs.
+[Azure Backup](../../../backup/backup-overview.md) is a fully managed PaaS that provides simple, secure, and cost-effective solutions to back up your data and recover it from the Microsoft Azure cloud. Azure Backup can back up and restore on-premises clients, Azure VMs, Azure Files shares, as well as SQL Server, Oracle, MySQL, PostgreSQL, and SAP HANA databases on Azure VMs.
Azure Backup provides independent and isolated backups to guard against accidental destruction of original data. Backups are stored in a [Recovery Services vault](../../../backup/backup-azure-recovery-services-vault-overview.md) with built-in management of recovery points. Configuration and scalability are simple, backups are optimized, and you can easily restore as needed. It uses the underlying power and unlimited scale of the Azure cloud to deliver high-availability with no maintenance or monitoring overhead. Azure Backup doesn't limit the amount of inbound or outbound data you transfer, or charge for the data that's transferred, and data is secured in transit and at rest.
virtual-machines Jboss Eap On Azure Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-on-azure-best-practices.md
JBoss EAP is a Jakarta Enterprise Edition (EE) 8 compatible implementation for b
See the [JBoss EAP Supported Configurations](https://access.redhat.com/articles/2026253) documentation for details on Operating Systems (OS), Java platforms, and other supported platforms on which EAP can be used.
-The Red Hat Cloud Access program allows you to use a JBoss EAP subscription to install JBoss EAP on your Azure virtual machine, which are On-Demand Pay-As-You-Go (PAYG) operating systems from the Microsoft Azure Marketplace. Virtual machine operating system subscriptions, in this case Red Hat Enterprise Linux (RHEL), is separate from a JBoss EAP subscription. Red Hat Cloud Access is a Red Hat subscription feature that provides support for JBoss EAP on Red Hat certified cloud infrastructure providers, such as Microsoft Azure. Red Hat Cloud Access allows you to move your subscriptions between traditional on-premise servers and public cloud-based resources in a simple and cost-effective manner.
+The Red Hat Cloud Access program allows you to use a JBoss EAP subscription to install JBoss EAP on your Azure virtual machines, which run On-Demand Pay-As-You-Go (PAYG) operating systems from the Microsoft Azure Marketplace. The virtual machine operating system subscription, in this case Red Hat Enterprise Linux (RHEL), is separate from the JBoss EAP subscription. Red Hat Cloud Access is a Red Hat subscription feature that provides support for JBoss EAP on Red Hat certified cloud infrastructure providers, such as Microsoft Azure. Red Hat Cloud Access allows you to move your subscriptions between traditional on-premises servers and public cloud-based resources in a simple and cost-effective manner.
You can find more information about [Red Hat Cloud Access on the Customer Portal](https://www.redhat.com/en/technologies/cloud-computing/cloud-access). As a reminder, you don't need Red Hat Cloud Access for any PAYG offers on Azure Marketplace.
virtual-machines Jboss Eap On Azure Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-on-azure-migration.md
To quickly get started, select one of the Quickstart template that closely match
## Migration flow and architecture
-This section outlines free tools for migrating JBoss EAP applications from another application server to run on JBoss EAP and from traditional on-premise servers to Microsoft Azure cloud environment.
+This section outlines free tools for migrating JBoss EAP applications from another application server to run on JBoss EAP and from traditional on-premises servers to Microsoft Azure cloud environment.
### Red Hat migration toolkit for applications (MTA)
virtual-machines Dbms_Guide_Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_sqlserver.md
There's some SQL Server in IaaS specific information you should know before cont
* **SQL Version Support**: For SAP customers, SQL Server 2008 R2 and higher is supported on Microsoft Azure Virtual Machine. Earlier editions aren't supported. Review this general [Support Statement](https://support.microsoft.com/kb/956893) for more details. In general, SQL Server 2008 is supported by Microsoft as well. However due to significant functionality for SAP, which was introduced with SQL Server 2008 R2, SQL Server 2008 R2 is the minimum release for SAP. In general, you should consider using the most recent SQL Server releases to run SAP workload in Azure IaaS. The latest SQL Server releases offer better integration into some of the Azure services and functionality. Or have changes that optimize operations in an Azure IaaS infrastructure. Therefore, the paper is restricted to SQL Server 2016 and SQL Server 2017. * **SQL Performance**: Microsoft Azure hosted Virtual Machines perform well in comparison to other public cloud virtualization offerings, but individual results may vary. Check out the article [Performance best practices for SQL Server in Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist). * **Using Images from Azure Marketplace**: The fastest way to deploy a new Microsoft Azure VM is to use an image from the Azure Marketplace. There are images in the Azure Marketplace, which contain the most recent SQL Server releases. The images where SQL Server already is installed can't be immediately used for SAP NetWeaver applications. The reason is the default SQL Server collation is installed within those images and not the collation required by SAP NetWeaver systems. In order to use such images, check the steps documented in chapter [Using a SQL Server image out of the Microsoft Azure Marketplace][dbms-guide-5.6].
-* **SQL Server multi-instance support within a single Azure VM**: This deployment method is supported. However, be aware of resource limitations, especially around network and storage bandwidth of the VM type that you're using. Detailed information is available in article [Sizes for virtual machines in Azure](../../sizes.md). These quota limitations might prevent you to implement the same multi-instance architecture as you can implement on-premise. As of the configuration and interference of sharing the resources available within a single VM, the same considerations as on-premise need to be taken into account.
-* **Multiple SAP databases in one single SQL Server instance in a single VM**: As above, configurations like these are supported. Considerations of multiple SAP databases sharing the shared resources of a single SQL Server instance are the same as for on-premise deployments. Additional keep other limits like number of disks that can be attached to a specific VM type in mind. Or network and storage quota limits of specific VM types as detailed [Sizes for virtual machines in Azure](../../sizes.md).
+* **SQL Server multi-instance support within a single Azure VM**: This deployment method is supported. However, be aware of resource limitations, especially around network and storage bandwidth of the VM type that you're using. Detailed information is available in the article [Sizes for virtual machines in Azure](../../sizes.md). These quota limitations might prevent you from implementing the same multi-instance architecture as you can implement on-premises. Regarding the configuration and the interference from sharing the resources available within a single VM, the same considerations as for on-premises deployments need to be taken into account.
+* **Multiple SAP databases in one single SQL Server instance in a single VM**: As above, configurations like these are supported. Considerations of multiple SAP databases sharing the shared resources of a single SQL Server instance are the same as for on-premises deployments. Additionally, keep other limits in mind, such as the number of disks that can be attached to a specific VM type, or the network and storage quota limits of specific VM types as detailed in [Sizes for virtual machines in Azure](../../sizes.md).
## Recommendations on VM/VHD structure for SAP-related SQL Server deployments
virtual-machines Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide-storage.md
The Azure Standard HDD storage was the only storage type when Azure infrastructu
## Azure VM limits in storage traffic
-In opposite to on-premise scenarios, the individual VM type you are selecting, plays a vital role in the storage bandwidth you can achieve. For the different storage types, you need to consider:
+In contrast to on-premises scenarios, the individual VM type you select plays a vital role in the storage bandwidth you can achieve. For the different storage types, you need to consider:
| Storage type| Linux | Windows | Comments | | | | | |
virtual-machines Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide.md
General default limitations and maximum limitations of Azure subscriptions can b
## Possible Scenarios SAP is often seen as one of the most mission-critical applications within enterprises. The architecture and operations of these applications is mostly complex and ensuring that you meet requirements on availability and performance is important.
-Thus enterprises have to think carefully about which cloud provider to choose for running such business critical business processes on. Azure is the ideal public cloud platform for business critical SAP applications and business processes. Given the wide variety of Azure infrastructure, nearly all existing SAP NetWeaver, and S/4HANA systems can be hosted in Azure today. Azure provides VMs with many Terabytes of memory and more than 200 CPUs. Beyond that Azure offers [HANA Large Instances](./hana-overview-architecture.md), which allow scale-up HANA deployments of up to 24 TB and SAP HANA scale-out deployments of up to 120 TB. One can state today that nearly all on-premise SAP scenarios can be run in Azure as well.
+Thus enterprises have to think carefully about which cloud provider to choose for running such business-critical processes. Azure is the ideal public cloud platform for business-critical SAP applications and business processes. Given the wide variety of Azure infrastructure, nearly all existing SAP NetWeaver and S/4HANA systems can be hosted in Azure today. Azure provides VMs with many terabytes of memory and more than 200 CPUs. Beyond that, Azure offers [HANA Large Instances](./hana-overview-architecture.md), which allow scale-up HANA deployments of up to 24 TB and SAP HANA scale-out deployments of up to 120 TB. One can state today that nearly all on-premises SAP scenarios can be run in Azure as well.
For a rough description of the scenarios and some non-supported scenarios, see the document [SAP workload on Azure virtual machine supported scenarios](./sap-planning-supported-configurations.md).
virtual-machines Sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-rise-integration.md
While vnet peering is the recommended and more typical deployment model, a VPN v
Network Security Groups are in effect on both customer and SAP vnet, identically to vnet peering architecture enabling communication to SAP NetWeaver and HANA ports as required. For details how to set up the VPN connection and which settings should be used, contact your SAP representative.
-## Connectivity back to on-premise
+## Connectivity back to on-premises
-With an existing customer Azure deployment, on-premise network is already connected through ExpressRoute (ER) or VPN. The same on-premise network path is typically used for SAP RISE/ECS managed workloads. Preferred architecture is to use existing ER/VPN Gateways in customerΓÇÖs hub vnet for this purpose, with connected SAP RISE vnet seen as a spoke network connected to customerΓÇÖs vnet hub.
+With an existing customer Azure deployment, the on-premises network is already connected through ExpressRoute (ER) or VPN. The same on-premises network path is typically used for SAP RISE/ECS managed workloads. The preferred architecture is to use existing ER/VPN gateways in the customer's hub vnet for this purpose, with the connected SAP RISE vnet seen as a spoke network connected to the customer's vnet hub.
- This diagram shows a typical SAP customer's hub and spoke virtual networks. It's connected to on-premises with a connection. Cross tenant virtual network peering connects SAP RISE vnet to customer's hub vnet. The vnet peering has remote gateway transit enabled, enabling SAP RISE vnet to be accessed from on-premise.
 This diagram shows a typical SAP customer's hub and spoke virtual networks. It's connected to on-premises through an ExpressRoute or VPN connection. Cross tenant virtual network peering connects SAP RISE vnet to customer's hub vnet. The vnet peering has remote gateway transit enabled, enabling SAP RISE vnet to be accessed from on-premises.
:::image-end:::
-With this architecture, central policies and security rules governing network connectivity to customer workloads also apply to SAP RISE/ECS managed workloads. The same on-premise network path is used for both customer's vnets and SAP RISE/ECS vnet.
+With this architecture, central policies and security rules governing network connectivity to customer workloads also apply to SAP RISE/ECS managed workloads. The same on-premises network path is used for both customer's vnets and SAP RISE/ECS vnet.
-If there's no currently existing Azure to on-premise connectivity, contact your SAP representative for details which connections models are possible to be established. Any on-premise to SAP RISE/ECS connection is then for reaching the SAP managed vnet only. The on-premise to SAP RISE/ECS connection isn't used to access customer's own Azure vnets.
+If there's no currently existing Azure to on-premises connectivity, contact your SAP representative for details on which connection models can be established. Any on-premises to SAP RISE/ECS connection is then for reaching the SAP managed vnet only. The on-premises to SAP RISE/ECS connection isn't used to access the customer's own Azure vnets.
**Important to note**: A virtual network can have [only one gateway](../../../virtual-network/virtual-network-peering-overview.md#gateways-and-on-premises-connectivity), local or remote. With vnet peering established between SAP RISE/ECS using remote gateway transit as in the above architecture, no gateways can be added in the SAP RISE/ECS vnet. A combination of vnet peering with remote gateway transit together with another VPN gateway in the SAP RISE/ECS vnet isn't possible.
Design description and specifics:
- Customers must provide and delegate to SAP a subdomain/zone (for example, \*ecs.contoso.com) which will be used to assign names and create forward and reverse DNS entries for the virtual machines that run the SAP managed environment. SAP DNS servers hold the master DNS role for the delegated zone.
- - DNS zone transfer from SAP DNS server to customerΓÇÖs DNS servers is the primary method to replicate DNS entries from RISE/STE/ECS environment to on-premise DNS
  + DNS zone transfer from SAP DNS server to the customer's DNS servers is the primary method to replicate DNS entries from the RISE/STE/ECS environment to on-premises DNS
- Customer-owned Azure vnets are also using custom DNS configuration referring to customer DNS servers located in Azure Hub vnet.
See [SAP's documentation](https://help.sap.com/docs/PRIVATE_LINK) and a series o
Your SAP landscape runs within SAP RISE/ECS subscription, you can access the SAP system through available ports. Each application communicating with your SAP system might require different ports to access it.
-For SAP Fiori, standalone or embedded within the SAP S/4 HANA or NetWeaver system, the customer can connect applications through OData or REST API. Both use https for incoming requests to the SAP system. Applications running on-premise or within the customerΓÇÖs own Azure subscription and vnet, use the established vnet peering or VPN vnet-to-vnet connection through a private IP address. Applications accessing a publicly available IP, exposed through SAP RISE managed Azure application gateway, are also able to contact the SAP system through https. For details and security for the application gateway and NSG open ports, contact SAP.
+For SAP Fiori, standalone or embedded within the SAP S/4 HANA or NetWeaver system, the customer can connect applications through OData or REST API. Both use https for incoming requests to the SAP system. Applications running on-premises or within the customer's own Azure subscription and vnet use the established vnet peering or VPN vnet-to-vnet connection through a private IP address. Applications accessing a publicly available IP, exposed through the SAP RISE managed Azure application gateway, are also able to contact the SAP system through https. For details and security considerations for the application gateway and NSG open ports, contact SAP.
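As a hedged illustration only (the host name and OData service path below are hypothetical placeholders, not values from SAP or this article), such an https request could look like this:

```bash
# Query an OData service on the SAP system over https using basic authentication
curl -u "MYUSER:MYPASSWORD" \
  "https://fiori.ecs.contoso.com:443/sap/opu/odata/sap/ZEXAMPLE_SRV/\$metadata"
```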
Applications using remote function calls (RFC) or direct database connections using JDBC/ODBC protocols are only possible through private networks and thus via the vnet peering or VPN from the customer's vnet(s).
Applications using remote function calls (RFC) or direct database connections us
Diagram of open ports on a SAP RISE/ECS system. RFC connections for BAPI and IDoc, https for OData and REST/SOAP. ODBC/JDBC for direct database connections to SAP HANA. All connections through the private vnet peering. Application Gateway with public IP for https as a potential option, managed through SAP. :::image-end:::
-With the information about available interfaces to the SAP RISE/ECS landscape, several methods of integration with Azure Services are possible. For data scenarios with Azure Data Factory or Synapse Analytics a self-hosted integration runtime or Azure Integration Runtime is available and described in the next chapter. For Logic Apps, Power Apps, Power BI the intermediary between the SAP RISE system and Azure service is through the on-premise data gateway, described in further chapters. Most services in the [Azure Integration Services](https://azure.microsoft.com/product-categories/integration/) do not require any intermediary gateway and thus can communicate directly with these available SAP interfaces.
+With the information about available interfaces to the SAP RISE/ECS landscape, several methods of integration with Azure Services are possible. For data scenarios with Azure Data Factory or Synapse Analytics, a self-hosted integration runtime or Azure Integration Runtime is available and described in the next chapter. For Logic Apps, Power Apps, and Power BI, the intermediary between the SAP RISE system and the Azure service is the on-premises data gateway, described in further chapters. Most services in the [Azure Integration Services](https://azure.microsoft.com/product-categories/integration/) do not require any intermediary gateway and thus can communicate directly with these available SAP interfaces.
## Integration with self-hosted integration runtime
The customer is responsible for deployment and operation of the self-hosted inte
To learn about the overall support for SAP data integration scenarios, see the [SAP data integration using Azure Data Factory whitepaper](https://github.com/Azure/Azure-DataFactory/blob/master/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf) with a detailed introduction on each SAP connector, comparisons, and guidance. ## On-premises data gateway
-Further Azure Services such as [Logic Apps](../../../logic-apps/logic-apps-using-sap-connector.md), [Power Apps](/connectors/saperp/) or [Power BI](/power-bi/connect-data/desktop-sap-bw-connector) communicate and exchange data with SAP systems through an on-premise data gateway. The on-premise data gateway is a virtual machine, running in Azure or on-premise. It provides secure data transfer between these Azure Services and your SAP systems.
+Further Azure Services such as [Logic Apps](../../../logic-apps/logic-apps-using-sap-connector.md), [Power Apps](/connectors/saperp/) or [Power BI](/power-bi/connect-data/desktop-sap-bw-connector) communicate and exchange data with SAP systems through an on-premises data gateway. The on-premises data gateway is a virtual machine, running in Azure or on-premises. It provides secure data transfer between these Azure Services and your SAP systems.
-With SAP RISE, the on-premise data gateway can connect to Azure Services running in customerΓÇÖs Azure subscription. This VM running the data gateway is deployed and operated by the customer. With below high-level architecture as overview, similar method can be used for either service.
+With SAP RISE, the on-premises data gateway can connect to Azure Services running in the customer's Azure subscription. The VM running the data gateway is deployed and operated by the customer. With the below high-level architecture as an overview, a similar method can be used for either service.
-[![SAP RISE/ECS accessed from Azure on-premise data gateway and connected Azure services.](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png)](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png#lightbox)
+[![SAP RISE/ECS accessed from Azure on-premises data gateway and connected Azure services.](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png)](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png#lightbox)
-The SAP RISE environment here provides access to the SAP ports for RFC and https described earlier. The communication ports are accessed by the private network address through the vnet peering or VPN site-to-site connection. The on-premise data gateway VM running in customerΓÇÖs Azure subscription uses the [SAP .NET connector](https://support.sap.com/en/product/connectors/msnet.html) to run RFC, BAPI or IDoc calls through the RFC connection. Additionally, depending on service and way the communication is setup, a way to connect to public IP of the SAP systems REST API through https might be required. The https connection to a public IP can be exposed through SAP RISE/ECS managed application gateway. This high level architecture shows the possible integration scenario. Alternatives to it such as using Logic Apps single tenant and [private endpoints](../../../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md) to secure the communication and other can be seen as extension and are not described here in.
+The SAP RISE environment here provides access to the SAP ports for RFC and https described earlier. The communication ports are accessed by the private network address through the vnet peering or VPN site-to-site connection. The on-premises data gateway VM running in the customer's Azure subscription uses the [SAP .NET connector](https://support.sap.com/en/product/connectors/msnet.html) to run RFC, BAPI, or IDoc calls through the RFC connection. Additionally, depending on the service and the way the communication is set up, a way to connect to the public IP of the SAP system's REST API through https might be required. The https connection to a public IP can be exposed through the SAP RISE/ECS managed application gateway. This high-level architecture shows the possible integration scenario. Alternatives, such as using single-tenant Logic Apps and [private endpoints](../../../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md) to secure the communication, can be seen as extensions and are not described here.
SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge about any details of the connected application or service running in a customer's subscription.
virtual-network-manager Create Virtual Network Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-cli.md
To begin your configuration, sign in to your Azure account. If you use the Cloud
az login ```
-Select the subscription for which you want to create an ExpressRoute circuit.
+Select the subscription where network manager will be deployed.
```azurecli-interactive az account set \
az account set \
## Create a resource group
-Before you can create an Azure Route Server, you have to create a resource group to host the Route Server. Create a resource group with [az group create](/cli/azure/group#az-group-create). This example creates a resource group named **myAVNMResourceGroup** in the **westus** location:
+Before you can deploy Azure Virtual Network Manager, you have to create a resource group to host the network manager. Create a resource group with [az group create](/cli/azure/group#az-group-create). This example creates a resource group named **myAVNMResourceGroup** in the **westus** location:
```azurecli-interactive az group create \
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
The mlx5 driver initializes the VF interface, and the interface is now functiona
The data path has been switched back to the VF interface.
-## Disable/Enable Accelerated Networking in a Running VM
+## Disable/Enable Accelerated Networking in a non-running VM
-Accelerated Networking can be toggled on a virtual NIC in a running VM with Azure CLI. For example:
+Accelerated Networking can be toggled on a virtual NIC in a non-running VM with Azure CLI. For example:
```output $ az network nic update --name u1804895 --resource-group testrg --accelerated-network false
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
description: Name resolution scenarios for Azure IaaS, hybrid solutions, between
documentationcenter: na -+ na
Classic deployment model:
* [Azure Service Configuration Schema](/previous-versions/azure/reference/ee758710(v=azure.100)) * [Virtual Network Configuration Schema](/previous-versions/azure/reference/jj157100(v=azure.100))
-* [Configure a Virtual Network by using a network configuration file](/previous-versions/azure/virtual-network/virtual-networks-using-network-configuration-file)
+* [Configure a Virtual Network by using a network configuration file](/previous-versions/azure/virtual-network/virtual-networks-using-network-configuration-file)
virtual-wan Routing Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/routing-deep-dive.md
+
+ Title: 'Virtual WAN routing deep dive'
+
+description: Learn about how Virtual WAN routing works in detail.
++++ Last updated : 05/08/2022+++
+# Virtual WAN routing deep dive
+
+[Azure Virtual WAN][virtual-wan-overview] is a networking solution that allows you to create sophisticated networking topologies easily: it encompasses routing across Azure regions between Azure VNets and on-premises locations via Point-to-Site VPN, Site-to-Site VPN, [ExpressRoute][er] and [integrated SDWAN appliances][virtual-wan-nva], including the option to [secure the traffic][virtual-wan-secured-hub]. In most scenarios, deep knowledge of how Virtual WAN internal routing works isn't required, but in certain situations it can be useful to understand Virtual WAN routing concepts.
+
+This document explores sample Virtual WAN scenarios that explain some of the behaviors organizations might encounter when interconnecting their VNets and branches in complex networks. The scenarios shown in this article are by no means design recommendations; they are just sample topologies specifically designed to demonstrate certain Virtual WAN functionalities.
+
+## Scenario 1: topology with default routing preference
+
+The first scenario in this article will analyze a topology with two Virtual WAN hubs, one ExpressRoute circuit connected to each hub, one branch connected over VPN to hub 1, and a second branch connected via SDWAN to an NVA deployed inside of hub 2. In each hub there are VNets connected directly (VNets 11 and 21) and indirectly through an NVA (VNets 121, 122, 221 and 222). VNet 12 exchanges routing information with hub 1 via BGP (see [BGP peering with a virtual hub][virtual-wan-bgp]), and VNet 22 is configured with static routes, so that differences between both options can be shown.
+
+In each hub the VPN and SDWAN appliances serve a dual purpose: on one side they advertise their own individual prefixes (`10.4.1.0/24` over VPN in hub 1 and `10.5.3.0/24` over SDWAN in hub 2), and on the other they advertise the same prefixes as the ExpressRoute circuits in the same region (`10.4.2.0/24` in hub 1 and `10.5.2.0/24` in hub 2). This will be used to demonstrate how the [Virtual WAN hub routing preference][virtual-wan-hrp] works.
+
+All VNet and branch connections are associated and propagating to the default route table. Although the hubs are secured (there is an Azure Firewall deployed in every hub), they are not configured to secure private or Internet traffic. Doing so would result in all connections propagating to the `None` route table, which would remove all non-static routes from the `Default` route table and defeat the purpose of this article since the effective route blade in the portal would be almost empty (with the exception of the static routes to send traffic to the Azure Firewall).
++
+Out of the box the Virtual WAN hubs will exchange information between each other so that communication across regions is enabled. You can inspect the effective routes in Virtual WAN route tables: for example, the following picture shows the effective routes in hub 1:
++
+Virtual WAN then advertises these effective routes to branches and injects them into the VNets connected to the virtual hubs, making the use of User Defined Routes unnecessary. When inspecting the effective routes in a virtual hub, the "Next Hop Type" and "Origin" fields indicate where the routes are coming from. For example, a Next Hop Type of "Virtual Network Connection" indicates that the prefix is defined in a VNet directly connected to Virtual WAN (VNets 11 and 12 in the previous screenshot).
+
+The route 10.1.20.0/22 is injected by the NVA in VNet 12 over BGP (hence the Next Hop Type "HubBgpConnection", see [BGP Peering with a Virtual Hub][virtual-wan-bgp]) to cover both indirect spokes VNet 121 (10.1.21.0/24) and VNet 122 (10.1.22.0/24). VNets and branches in the remote hub are visible with a next hop of `hub2`, and it can be seen in the AS path that the Autonomous System Number `65520` has been prepended two times to these interhub routes.
++
+In hub 2 there is an integrated SDWAN Network Virtual Appliance; for more details on supported NVAs for this integration, see [About NVAs in a Virtual WAN hub][virtual-wan-nva]. Note that the route to the SDWAN branch `10.5.3.0/24` has a next hop of `VPN_S2S_Gateway`. Today this type of next hop can indicate either routes coming from an Azure Virtual Network Gateway or routes from NVAs integrated in the hub.
+
+In hub 2 the route for `10.2.20.0/22` to the indirect spokes VNet 221 (10.2.21.0/24) and VNet 222 (10.2.22.0/24) is installed as a static route, as indicated by the origin `defaultRouteTable`. If you check the effective routes for hub 1, that route is not there. The reason is that static routes are not propagated via BGP, but need to be configured in every hub. Hence, a static route is required in hub 1 to provide connectivity from the VNets and branches in hub 1 to the indirect spokes in hub 2 (VNets 221 and 222):
++
+After adding the static route hub 1 will contain the `10.2.20.0/22` route as well:
++
+## Scenario 2: Global Reach and hub routing preference
+
+Even if hub 1 knows the ExpressRoute prefix from circuit 2 (`10.5.2.0/24`) and hub 2 knows the ExpressRoute prefix from circuit 1 (`10.4.2.0/24`), ExpressRoute routes from remote regions will not be advertised back to on-premises ExpressRoute links. Consequently, interconnecting them via [ExpressRoute Global Reach][er-gr] is required so that both ExpressRoute locations can communicate with each other:
++
+As explained in [Virtual hub routing preference (Preview)][virtual-wan-hrp], by default Virtual WAN favors routes coming from ExpressRoute. Since routes are advertised from hub 1 to ExpressRoute circuit 1, from circuit 1 to circuit 2, and from circuit 2 to hub 2 (and vice versa), the virtual hubs now prefer this path over the more direct inter-hub link, as the effective routes in hub 1 show:
++
+As you can see in the routes, ExpressRoute Global Reach will prepend the ExpressRoute Autonomous System Number (12076) multiple times before sending routes back to Azure to make these routes less preferable. However, the default Virtual WAN hub routing preference of ExpressRoute ignores AS path length when making routing decisions.
+
+The effective routes in hub 2 will be similar:
++
+The routing preference can be changed to VPN or AS-Path as explained in [Virtual hub routing preference (Preview)][virtual-wan-hrp]. For example, you can set the preference to VPN as shown in this image:
++
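For reference, a hedged Azure CLI sketch of changing this setting follows; the `--hub-routing-preference` parameter name and its values are assumptions based on the current `az network vhub` command set and may differ by CLI version:

```bash
# Switch the routing preference of hub1 to VPN (resource names are placeholders)
az network vhub update \
  --resource-group myVirtualWanRG \
  --name hub1 \
  --hub-routing-preference VpnGateway
```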
+With a hub routing preference of VPN, this is how the effective routes in hub 1 look:
++
+The previous image shows that the route to `10.4.2.0/24` now has a next hop of `VPN_S2S_Gateway`, while with the default routing preference of ExpressRoute it was `ExpressRouteGateway`. However, in hub 2 the route to `10.5.2.0/24` will still appear with a next hop of `ExpressRoute`, because in this case the alternative route doesn't come from a VPN Gateway but from an NVA integrated in the hub:
++
+However, traffic between hubs still prefers the routes coming via ExpressRoute. In order to use the more efficient direct connection between hub 1 and hub 2, the route preference can be set to "AS Path" on both hubs:
++
+Now the routes for remote spokes and branches in hub 1 will have a next hop of `Remote Hub` as intended:
++
+You can see that the IP prefix for hub 2 (`192.168.2.0/23`) still appears reachable over the Global Reach link, but this shouldn't impact traffic as there shouldn't be any traffic specifically addressed to devices in hub 2. This might be an issue though if there were NVAs in both hubs establishing SDWAN tunnels between each other.
+
+However, note that `10.4.2.0/24` is now preferred over the VPN Gateway. This can happen if the routes advertised via VPN have a shorter AS path than the routes advertised over ExpressRoute. After configuring the on-premises VPN device to prepend its Autonomous System Number (`65501`) to the VPN routes to make them less preferable, hub 1 now selects ExpressRoute as next hop for `10.4.2.0/24`:
++
+Hub 2 will show a similar table for the effective routes, where the VNets and branches in the other hub now appear with `Remote Hub` as next hop:
++
+## Scenario 3: Cross-connecting the ExpressRoute circuits to both hubs
+
+In order to add direct links between the Azure regions and the on-premises locations connected via ExpressRoute, it is often desirable to connect a single ExpressRoute circuit to multiple Virtual WAN hubs, in a topology sometimes described as a "bow tie", as the following diagram shows:
++
+Virtual WAN will display that both circuits are connected to both hubs:
++
+Going back to the default hub routing preference of ExpressRoute, the routes to remote branches and VNets in hub 1 will again show ExpressRoute as the next hop, although this time the reason is not Global Reach but the fact that the ExpressRoute circuits bounce the route advertisements they get from one hub back to the other. For example, these are the effective routes for hub 1 with a hub routing preference of ExpressRoute:
++
+Changing the hub routing preference back to AS Path returns the inter-hub routes to the optimal path using the direct connection between hubs 1 and 2:
++
+## Next steps
+
+For more information about Virtual WAN see:
+
+* The Virtual WAN [FAQ](virtual-wan-faq.md)
+
+[virtual-wan-overview]: /azure/virtual-wan/virtual-wan-about
+[virtual-wan-secured-hub]: /azure/firewall-manager/secured-virtual-hub
+[virtual-wan-hrp]: /azure/virtual-wan/about-virtual-hub-routing-preference
+[virtual-wan-nva]: /azure/virtual-wan/about-nva-hub
+[virtual-wan-bgp]: /azure/virtual-wan/scenario-bgp-peering-hub
+[er]: /azure/expressroute/expressroute-introduction
+[er-gr]: /azure/expressroute/expressroute-global-reach
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Yes. For a list of Managed Service Provider (MSP) solutions enabled via Azure Ma
### How does Virtual WAN hub routing differ from Azure Route Server in a VNet?
-Both Azure Virtual WAN hub and Azure Route Server provide Border Gateway Protocol (BGP) peering capabilities that can be utilized by NVAs (Network Virtual Appliance) to advertise IP addresses from the NVA to the userΓÇÖs Azure virtual networks. The deployment options differ in the sense that Azure Route Server is typically deployed by a self-managed customer hub VNet whereas Azure Virtual WAN provides a zero-touch fully meshed hub service to which customers connect their various spokes end points (Azure VNet, on-premise branches with site-to-site VPN or SDWAN, remote users with point-to-site/Remote User VPN and Private connections with ExpressRoute) and enjoy BGP Peering for NVAs deployed in spoke VNet along with other vWAN capabilities such as transit connectivity for VNet-to-VNet, transit connectivity between VPN and ExpressRoute, custom/advanced routing, custom route association and propagation, routing intent/policies for no hassle inter-region security, Secure Hub/Azure firewall etc. For more details about Virtual WAN BGP Peering, please see [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).
+Both Azure Virtual WAN hub and Azure Route Server provide Border Gateway Protocol (BGP) peering capabilities that can be utilized by NVAs (Network Virtual Appliance) to advertise IP addresses from the NVA to the user's Azure virtual networks. The deployment options differ in the sense that Azure Route Server is typically deployed by a self-managed customer hub VNet whereas Azure Virtual WAN provides a zero-touch fully meshed hub service to which customers connect their various spokes end points (Azure VNet, on-premises branches with site-to-site VPN or SDWAN, remote users with point-to-site/Remote User VPN and Private connections with ExpressRoute) and enjoy BGP Peering for NVAs deployed in spoke VNet along with other vWAN capabilities such as transit connectivity for VNet-to-VNet, transit connectivity between VPN and ExpressRoute, custom/advanced routing, custom route association and propagation, routing intent/policies for no hassle inter-region security, Secure Hub/Azure firewall etc. For more details about Virtual WAN BGP Peering, please see [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).
### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure Portal?
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. This will enable the virtual hub router to now be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. Azure-wide Cloud Services-based infrastructure is being deprecated. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via Azure Portal.
-YouΓÇÖll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, youΓÇÖll face an expected downtime of up to 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says ΓÇ£LatestΓÇ¥, then the hub is done updating. There will be no routing behavior changes after this update unless one of the following is true:
-
-1. The Virtual WAN hub is in a different region than one or more spoke VNets. In this case, you will have to delete and recreate these respective VNet connections to maintain connectivity.
-1. You have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet. In this case, you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of up to 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update unless you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet. In this case, you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup.