Updates from: 08/20/2022 01:11:42
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Json Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/json-transformations.md
The following example generates a JSON string based on the claim value of "email
<InputClaims> <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.to.0.email" /> <InputClaim ClaimTypeReferenceId="otp" TransformationClaimType="personalizations.0.dynamic_template_data.otp" />
- <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.dynamic_template_data.verify-email" />
+ <InputClaim ClaimTypeReferenceId="copiedEmail" TransformationClaimType="personalizations.0.dynamic_template_data.verify-email" />
</InputClaims> <InputParameters> <InputParameter Id="template_id" DataType="string" Value="d-4c56ffb40fa648b1aa6822283df94f60"/>
The following claims transformation outputs a JSON string claim that will be the
- Input claims: - **email**, transformation claim type **personalizations.0.to.0.email**: "someone@example.com"
+ - **copiedEmail**, transformation claim type **personalizations.0.dynamic_template_data.verify-email**: "someone@example.com"
- **otp**, transformation claim type **personalizations.0.dynamic_template_data.otp**: "346349" - Input parameter: - **template_id**: "d-4c56ffb40fa648b1aa6822283df94f60"
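For illustration, a GenerateJson transformation with the inputs and parameters above would produce a JSON string along these lines. This is a sketch assembled from the dot-notation paths shown (numeric segments become array positions); the article's own output sample isn't included in this snippet.

```json
{
  "personalizations": [
    {
      "to": [
        {
          "email": "someone@example.com"
        }
      ],
      "dynamic_template_data": {
        "otp": "346349",
        "verify-email": "someone@example.com"
      }
    }
  ],
  "template_id": "d-4c56ffb40fa648b1aa6822283df94f60"
}
```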
active-directory-b2c Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect.md
Previously updated : 04/12/2022 Last updated : 08/12/2022
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
&response_type=code+id_token &redirect_uri=https%3A%2F%2Fjwt.ms%2F &response_mode=fragment
-&scope=&scope=openid%20offline_access%20{application-id-uri}/{scope-name}
+&scope=openid%20offline_access%20{application-id-uri}/{scope-name}
&state=arbitrary_data_you_can_receive_in_the_response &nonce=12345 ```
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md
You can create these attributes by using the portal UI before or after you use t
|Name |Used in | |||
-|`extension_loyaltyId` | Custom policy|
+|`extension_loyaltyId` | Custom policy|
|`extension_<b2c-extensions-app-guid>_loyaltyId` | [Microsoft Graph API](microsoft-graph-operations.md#application-extension-directory-extension-properties)|
+> [!NOTE]
+> When using a custom attribute in custom policies, you must prefix the claim type ID with `extension_` to allow the correct data mapping to take place within the Azure AD B2C directory.
+ The following example demonstrates the use of custom attributes in an Azure AD B2C custom policy claim definition. ```xml
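<!-- A minimal sketch only; the article's full example is not included in this snippet.
     The claim type ID uses the extension_ prefix described in the note above;
     the display name and user input type are illustrative assumptions. -->
<BuildingBlocks>
  <ClaimsSchema>
    <ClaimType Id="extension_loyaltyId">
      <DisplayName>Loyalty ID</DisplayName>
      <DataType>string</DataType>
      <UserInputType>TextBox</UserInputType>
    </ClaimType>
  </ClaimsSchema>
</BuildingBlocks>
```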
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
For example, here is a claims-mapping policy to emit a single claim from a direc
Where *xxxxxxx* is the appID (or Client ID) of the application that the extension was registered with.
+> [!WARNING]
+> When you define a claims mapping policy for a directory extension attribute, use the `ExtensionID` property instead of the `ID` property within the body of the `ClaimsSchema` array, as shown in the example above.
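For reference, a claims-mapping policy definition that emits a directory extension attribute using `ExtensionID` might look like the following sketch. The attribute name `extension_xxxxxxx_test` and the claim name `test` are placeholders, not values taken from this article.

```json
{
  "ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "true",
    "ClaimsSchema": [
      {
        "Source": "user",
        "ExtensionID": "extension_xxxxxxx_test",
        "JwtClaimType": "test"
      }
    ]
  }
}
```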
+ > [!TIP] > Case consistency is important when setting directory extension attributes on objects. Extension attribute names aren't case sensitive when being set up, but they are case sensitive when being read from the directory by the token service. If an extension attribute is set on a user object with the name "LegacyId" and on another user object with the name "legacyid", when the attribute is mapped to a claim using the name "LegacyId" the data will be successfully retrieved and the claim included in the token for the first user but not the second.
->
-> The "Id" parameter in the claims schema used for built-in directory attributes is "ExtensionID" for directory extension attributes.
## Next steps - Learn how to [add custom or additional claims to the SAML 2.0 and JSON Web Tokens (JWT) tokens](active-directory-optional-claims.md).
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reply-url.md
This table shows the maximum number of redirect URIs you can add to an app regis
| Microsoft work or school accounts in any organization's Azure Active Directory (Azure AD) tenant | 256 | `signInAudience` field in the application manifest is set to either *AzureADMyOrg* or *AzureADMultipleOrgs* | | Personal Microsoft accounts and work and school accounts | 100 | `signInAudience` field in the application manifest is set to *AzureADandPersonalMicrosoftAccount* |
+The maximum number of redirect URIs can't be raised for [security reasons](#restrictions-on-wildcards-in-redirect-uris). If your scenario requires more redirect URIs than the maximum limit allowed, consider using the [state parameter approach](#use-a-state-parameter) instead.
+ ## Maximum URI length You can use a maximum of 256 characters for each redirect URI you add to an app registration.
active-directory Auth Header Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-header-based.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-kcd.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
There is a need to provide remote access, protect with pre-authentication, and p
* [Kerberos Constrained Delegation for single sign-on to your apps with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-with-kcd.md)
-* [Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)
+* [Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)
active-directory Auth Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ldap.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oauth2.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oidc.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-password-based-sso.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
You need to protect with pre-authentication and provide SSO through password vau
* [Configure password based SSO for cloud applications ](../manage-apps/configure-password-single-sign-on-non-gallery-applications.md)
-* [Configure password-based SSO for on-premises applications with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md)
+* [Configure password-based SSO for on-premises applications with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md)
active-directory Auth Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-radius.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-remote-desktop-gateway.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
You need to provide remote access and protect your Remote Desktop Services deplo
* [Publish remote desktop with Azure AD Application Proxy](../app-proxy/application-proxy-integrate-with-remote-desktop-services.md)
-* [Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md)
+* [Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md)
active-directory Auth Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-saml.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Auth Sync Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-sync-overview.md
Previously updated : 10/10/2020 Last updated : 8/19/2022
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
Previously updated : 07/15/2021 Last updated : 08/19/2022
Applications provide an attack surface for security breaches and must be monitored. While applications aren't targeted as often as user accounts, breaches can occur. Since applications often run without human intervention, the attacks may be harder to detect.
-This article provides guidance to monitor and alert on application events. It's regularly updated to help ensure that you:
+This article provides guidance on monitoring and alerting on application events, and helps you:
* Prevent malicious applications from getting unwarranted access to data.
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
Previously updated : 07/15/2021 Last updated : 08/19/2022
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-infrastructure.md
Previously updated : 07/15/2021 Last updated : 08/19/2022
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-user-accounts.md
Previously updated : 07/15/2021 Last updated : 08/19/2022
active-directory Sync Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-ldap.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
active-directory Sync Scim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-scim.md
Previously updated : 10/10/2020 Last updated : 08/19/2022
You want to automatically provision user information from an HCM system to Azure
* [Build a SCIM endpoint and configure user provisioning with Azure AD ](../app-provisioning/use-scim-to-provision-users-and-groups.md)
-* [SCIM 2.0 protocol compliance of the Azure AD Provisioning Service](../app-provisioning/application-provisioning-config-problem-scim-compatibility.md)
+* [SCIM 2.0 protocol compliance of the Azure AD Provisioning Service](../app-provisioning/application-provisioning-config-problem-scim-compatibility.md)
active-directory Entitlement Management Access Package Auto Assignment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md
+
+ Title: Configure an automatic assignment policy for an access package in Azure AD entitlement management - Azure Active Directory
+description: Learn how to configure automatic assignments based on rules for an access package in Azure Active Directory entitlement management.
+
+documentationCenter: ''
++
+editor:
++
+ na
++ Last updated : 08/15/2022+++++
+#Customer intent: As an administrator, I want detailed information about how I can edit an access package to include a policy for users to get and lose access package assignments automatically, without them or an administrator needing to request access.
++
+# Configure an automatic assignment policy for an access package in Azure AD entitlement management (Preview)
+
+You can use rules to determine access package assignment based on user properties in Azure Active Directory (Azure AD), part of Microsoft Entra. In Entitlement Management, an access package can have multiple policies, and each policy establishes how users get an assignment to the access package, and for how long. As an administrator, you can establish a policy for automatic assignments by supplying a membership rule that Entitlement Management will follow to create and remove assignments automatically. Similar to a [dynamic group](../enterprise-users/groups-create-rule.md), when an automatic assignment policy is created, user attributes are evaluated for matches with the policy's membership rule. When an attribute changes for a user, these automatic assignment policy rules in the access packages are processed for membership changes. Assignments to users are then added or removed depending on whether they meet the rule criteria.
+
+During this preview, you can have at most one automatic assignment policy in an access package.
+
+This article describes how to create an access package automatic assignment policy for an existing access package.
+
+## Create an automatic assignment policy (Preview)
+
+To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new policy for an access package.
+
+**Prerequisite role:** Global administrator, Identity Governance administrator, Catalog owner, or Access package manager
+
+1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package.
+
+1. Click **Policies** and then **Add auto-assignment policy** to create a new policy.
+
+1. In the first tab, you'll specify the rule. Click **Edit**.
+
+1. Provide a dynamic membership rule, using the [membership rule builder](../enterprise-users/groups-dynamic-membership.md) or by clicking **Edit** on the rule syntax text box.
+
+ > [!NOTE]
+ > The rule builder might not be able to display some rules constructed in the text box. For more information, see [rule builder in the Azure portal](../enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal).
+
+ ![Screenshot of an access package automatic assignment policy rule configuration.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-rule-configuration.png)
+
+1. Click **Save** to close the dynamic membership rule editor, then click **Next** to open the **Custom Extensions** tab.
+
+1. If you have [custom extensions](entitlement-management-logic-apps-integration.md) in your catalog that you want to run when the policy assigns or removes access, you can add them to this policy. Then click **Next** to open the **Review** tab.
+
+1. Type a name and a description for the policy.
+
+ ![Screenshot of an access package automatic assignment policy review tab.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-review.png)
+
+1. Click **Create** to save the policy.
+
+ > [!NOTE]
+ > In this preview, Entitlement management will automatically create a dynamic security group corresponding to each policy, in order to evaluate the users in scope. This group should not be modified except by Entitlement Management itself. This group may also be modified or deleted automatically by Entitlement Management, so don't use this group for other applications or scenarios.
+
+1. Azure AD will evaluate the users in the organization that are in scope of this rule, and create assignments for those users who don't already have assignments to the access package. It may take several minutes for the evaluation to occur, or for subsequent updates to users' attributes to be reflected in the access package assignments.
+
+## Create an automatic assignment policy programmatically (Preview)
+
+You can also create a policy using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application in a catalog role or with the `EntitlementManagement.ReadWrite.All` permission, can call the [create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-1.0&preserve-view=true) API. In your [request payload](/graph/api/resources/accesspackageassignmentpolicy?view=graph-rest-1.0&preserve-view=true), include the `displayName`, `description`, `specificAllowedTargets`, [`automaticRequestSettings`](/graph/api/resources/accesspackageautomaticrequestsettings?view=graph-rest-1.0&preserve-view=true) and `accessPackage` properties of the policy.
+
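As a rough illustration only (not an example from this article), a request body combining those properties might look like the following sketch. The `attributeRuleMembers` target type, the membership rule string, and the access package ID are assumptions to adapt to your environment; check the linked API reference for the authoritative schema. The `membershipRule` value uses the same syntax as the dynamic membership rules described earlier in this article.

```json
{
  "displayName": "Sales department auto-assignment",
  "description": "Automatically assign users in the Sales department",
  "specificAllowedTargets": [
    {
      "@odata.type": "#microsoft.graph.attributeRuleMembers",
      "description": "Users in the Sales department",
      "membershipRule": "(user.department -eq \"Sales\")"
    }
  ],
  "automaticRequestSettings": {
    "requestAccessForAllowedTargets": true
  },
  "accessPackage": {
    "id": "00000000-0000-0000-0000-000000000000"
  }
}
```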
+## Next steps
+
+- [View assignments for an access package](entitlement-management-access-package-assignments.md)
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
The way you specify who can request an access package is with a policy. Before c
When you create an access package, you can specify the request, approval and lifecycle settings, which are stored on the first policy of the access package. Most access packages will have a single policy for users to request access, but a single access package can have multiple policies. You would create multiple policies for an access package if you want to allow different sets of users to be granted assignments with different request and approval settings.
-For example, a single policy cannot be used to assign internal and external users to the same access package. However, you can create two policies in the same access package, one for internal users and one for external users. If there are multiple policies that apply to a user, they will be prompted at the time of their request to select the policy they would like to be assigned to. The following diagram shows an access package with two policies.
+For example, a single policy cannot be used to assign internal and external users to the same access package. However, you can create two policies in the same access package, one for internal users and one for external users. If multiple policies allow a user to request, they will be prompted at the time of their request to select the policy they would like to be assigned to. The following diagram shows an access package with two policies.
-![Multiple policies in an access package](./media/entitlement-management-access-package-request-policy/access-package-policy.png)
+![Diagram that illustrates multiple policies, along with multiple resource roles, can be contained within an access package.](./media/entitlement-management-access-package-request-policy/access-package-policy.png)
+
+In addition to policies for users to request access, you can also have policies for [automatic assignment](entitlement-management-access-package-auto-assignment-policy.md), and policies for direct assignment by administrators or catalog owners.
### How many policies will I need?
For example, a single policy cannot be used to assign internal and external user
| I want to allow users in my directory and also users outside my directory to request an access package | Two | | I want to specify different approval settings for some users | One for each group of users | | I want some users' access package assignments to expire while other users can extend their access | One for each group of users |
-| I want users to request access and other users to be assigned access by an administrator | Two |
+| I want some users to request access and other users to be assigned access by an administrator | Two |
+| I want some users in my organization to receive access automatically, other users in my organization to be able to request, and other users to be assigned access by an administrator | Three |
For information about the priority logic that is used when multiple policies apply, see [Multiple policies](entitlement-management-troubleshoot.md#multiple-policies ).
Follow these steps if you want to allow users in your directory to be able to re
## For users not in your directory
- **Users not in your directory** refers to users who are in another Azure AD directory or domain. These users may not have yet been invited into your directory. Azure AD directories must be configured to be allow invitations in **Collaboration restrictions**. For more information, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
+ **Users not in your directory** refers to users who are in another Azure AD directory or domain. These users may not have yet been invited into your directory. Azure AD directories must be configured to allow invitations in **Collaboration restrictions**. For more information, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
> [!NOTE] > A guest user account will be created for a user not yet in your directory whose request is approved or auto-approved. The guest will be invited, but will not receive an invite email. Instead, they will receive an email when their access package assignment is delivered. By default, later when that guest user no longer has any access package assignments, because their last assignment has expired or been cancelled, that guest user account will be blocked from signing in and subsequently deleted. If you want to have guest users remain in your directory indefinitely, even if they have no access package assignments, you can change the settings for your entitlement management configuration. For more information about the guest user object, see [Properties of an Azure Active Directory B2B collaboration user](../external-identities/user-properties.md).
To change the request and approval settings for an access package, you need to o
1. If you are editing a policy, click **Update**. If you are adding a new policy, click **Create**.
+## Creating an access package assignment policy programmatically
+
+You can also create a policy using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application in a catalog role or with the `EntitlementManagement.ReadWrite.All` permission, can call the [create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-1.0&preserve-view=true) API.
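As a rough, hedged sketch (not taken from this article), a minimal request body for such a policy might look like the following. Property names beyond `displayName` and `description`, such as `allowedTargetScope`, are assumptions based on the `accessPackageAssignmentPolicy` resource and should be verified against the API reference linked above.

```json
{
  "displayName": "Users in my directory can request",
  "description": "Members of the directory can request this access package",
  "allowedTargetScope": "allMemberUsers",
  "accessPackage": {
    "id": "00000000-0000-0000-0000-000000000000"
  }
}
```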
+ ## Prevent requests from users with incompatible access In addition to the policy checks on who can request, you may wish to further restrict access, in order to prevent a user who already has some access - via a group or another access package - from obtaining excessive access.
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
Azure AD entitlement management can help address these challenges. To learn mor
Here are some of the capabilities of entitlement management: - Control who can get access to applications, groups, Teams and SharePoint sites, with multi-stage approval, and ensure users don't retain access indefinitely through time-limited assignments and recurring access reviews.
+- Give users access automatically to those resources, based on the user's properties like department or cost center, and remove a user's access when those properties change (preview).
- Delegate to non-administrators the ability to create access packages. These access packages contain resources that users can request, and the delegated access package managers can define policies with rules for which users can request, who must approve their access, and when access expires. - Select connected organizations whose users can request access. When a user who isn't yet in your directory requests access, and is approved, they're automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.
You can have policies for users to request access. In these kinds of policies, a
- The approval process and the users that can approve or deny access - The duration of a user's access assignment, once approved, before the assignment expires
-You can also have policies for users to be assigned access, either by an administrator or automatically.
+You can also have policies for users to be assigned access, either by an administrator or [automatically](entitlement-management-access-package-auto-assignment-policy.md).
The following diagram shows an example of the different elements in entitlement management. It shows one catalog with two example access packages.
Specialized clouds, such as Azure Germany, and Azure China 21Vianet, aren't curr
Ensure that your directory has at least as many Azure AD Premium P2 licenses as you have:
-- Member users who **can** request an access package.
-- Member users who <u>request</u> an access package.
-- Member users who <u>approve requests</u> for an access package.
-- Member users who <u>review assignments</u> for an access package.
-- Member users who have a <u>direct assignment</u> to an access package.
+- Member users who *can* request an access package.
+- Member users who *request* an access package.
+- Member users who *approve requests* for an access package.
+- Member users who *review assignments* for an access package.
+- Member users who have a *direct assignment* or an *automatic assignment* to an access package.
For guest users, licensing needs will depend on the [licensing model](../external-identities/external-identities-pricing.md) you're using. However, the below guest users' activities are considered Azure AD Premium P2 usage:
-- Guest users who <u>request</u> an access package.
-- Guest users who <u>approve requests</u> for an access package.
-- Guest users who <u>review assignments</u> for an access package.
-- Guest users who have a <u>direct assignment</u> to an access package.
+- Guest users who *request* an access package.
+- Guest users who *approve requests* for an access package.
+- Guest users who *review assignments* for an access package.
+- Guest users who have a *direct assignment* to an access package.
Azure AD Premium P2 licenses are **not** required for the following tasks:
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md
There are several ways that you can configure entitlement management for your or
## Govern access for users in your organization
+### Administrator: Assign employees access automatically (preview)
+
+1. [Create a new access package](entitlement-management-access-package-create.md#start-new-access-package)
+1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#resource-roles)
+1. [Add an automatic assignment policy](entitlement-management-access-package-auto-assignment-policy.md)
+ ### Access package 1. [Create a new access package](entitlement-management-access-package-create.md#start-new-access-package)
There are several ways that you can configure entitlement management for your or
## Programmatic administration
-You can also manage access packages, catalogs, policies, requests and assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the [entitlement management API](/graph/tutorial-access-package-api). An application with those application permissions can also use many of those API functions, with the exception of managing resources in catalogs and access packages. An an applications which only needs to operate within specific catalogs, can be added to the **Catalog owner** or **Catalog reader** roles of a catalog to be authorized to update or read within that catalog.
+You can also manage access packages, catalogs, policies, requests and assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the [entitlement management API](/graph/tutorial-access-package-api). An application with those application permissions can also use many of those API functions, with the exception of managing resources in catalogs and access packages. And an application which only needs to operate within specific catalogs can be added to the **Catalog owner** or **Catalog reader** roles of a catalog to be authorized to update or read within that catalog.
## Next steps
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
Once you've started using these identity governance features, you can easily aut
| Creating, updating and deleting AD and Azure AD user accounts automatically for employees |[Plan cloud HR to Azure AD user provisioning](../app-provisioning/plan-cloud-hr-provision.md)| | Updating the membership of a group, based on changes to the member user's attributes | [Create a dynamic group](../enterprise-users/groups-create-rule.md)| | Assigning licenses | [group-based licensing](../enterprise-users/licensing-groups-assign.md) |
+| Adding and removing a user's group memberships, application roles, and SharePoint site roles, based on changes to the user's attributes | [Configure an automatic assignment policy for an access package in entitlement management](entitlement-management-access-package-auto-assignment-policy.md) (preview)|
| Adding and removing a user's group memberships, application roles, and SharePoint site roles, on a specific date | [Configure lifecycle settings for an access package in entitlement management](entitlement-management-access-package-lifecycle-policy.md)| | Running custom workflows when a user requests or receives access, or access is removed | [Trigger Logic Apps in entitlement management](entitlement-management-logic-apps-integration.md) (preview) | | Regularly having memberships of guests in Microsoft groups and Teams reviewed, and removing guest memberships that are denied |[Create an access review](create-access-review.md) |
active-directory Reference Connect Sync Attributes Synchronized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-attributes-synchronized.md
In this case, start with the list of attributes in this topic and identify those
| targetAddress |X |X | | | | telephoneAssistant |X |X | | | | telephoneNumber |X |X | | |
-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premise. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
+| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
| title |X |X | | | | unauthOrig |X |X |X | | | usageLocation |X | | |mechanical property. The user's country/region. Used for license assignment. |
In this case, start with the list of attributes in this topic and identify those
| targetAddress |X |X | | | | telephoneAssistant |X |X | | | | telephoneNumber |X |X | | |
-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premise. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
+| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
| title |X |X | | | | unauthOrig |X |X |X | | | url |X |X | | |
active-directory Datawiza Azure Ad Sso Oracle Jde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-jde.md
To integrate Oracle JDE with Azure AD:
|:--|:-| | Platform | Web | | App Name | Enter a unique application name.|
- | Public Domain | For example: https:/jde-external.example.com. <br>For testing, you can use localhost DNS. If you aren't deploying DAB behind a load balancer, use the **Public Domain** port. |
+ | Public Domain | For example: `https://jde-external.example.com`. <br>For testing, you can use localhost DNS. If you aren't deploying DAB behind a load balancer, use the **Public Domain** port. |
| Listen Port | The port that DAB listens on.| | Upstream Servers | The Oracle JDE implementation URL and port to be protected.|
active-directory Cisco Umbrella User Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
If your endpoints are running AnyConnect or the Cisco Secure Client version 4.10 MR5 or earlier, you will need to synchronize the ObjectGUID attribute for user identity attribution. You will need to reconfigure any Umbrella policy on groups after importing groups from Azure AD. > [!NOTE]
-> The on-premise Umbrella AD Connector should be turned off before importing the ObjectGUID attribute.
+> The on-premises Umbrella AD Connector should be turned off before importing the ObjectGUID attribute.
When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not synchronized from on-premises AD to Azure AD by default. To synchronize this attribute, enable the optional **Directory Extension attribute sync** and select the objectGUID attributes for users.
active-directory Meta Work Accounts Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-work-accounts-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Meta Work Accounts | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Meta Work Accounts.
++++++++ Last updated : 09/03/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Meta Work Accounts
+
+In this tutorial, you'll learn how to integrate Meta Work Accounts with Azure Active Directory (Azure AD). When you integrate Meta Work Accounts with Azure AD, you can:
+
+* Control in Azure AD who has access to Meta Work Accounts.
+* Enable your users to be automatically signed-in to Meta Work Accounts with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Meta Work Accounts single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Meta Work Accounts supports **SP and IDP** initiated SSO.
+
+## Add Meta Work Accounts from the gallery
+
+To configure the integration of Meta Work Accounts into Azure AD, you need to add Meta Work Accounts from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Meta Work Accounts** in the search box.
+1. Select **Meta Work Accounts** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Meta Work Accounts
+
+Configure and test Azure AD SSO with Meta Work Accounts using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Meta Work Accounts.
+
+To configure and test Azure AD SSO with Meta Work Accounts, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Meta Work Accounts SSO](#configure-meta-work-accounts-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Meta Work Accounts test user](#create-meta-work-accounts-test-user)** - to have a counterpart of B.Simon in Meta Work Accounts that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Meta Work Accounts** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://work.facebook.com/company/<ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://work.facebook.com/work/saml.php?__cid=<ID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://work.facebook.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Engage the [Work Accounts team](https://www.workplace.com/help/work) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Meta Work Accounts** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Meta Work Accounts.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Meta Work Accounts**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Meta Work Accounts SSO
+
+1. Log in to your Meta Work Accounts company site as an administrator.
+
+1. Go to **Security** > **Single Sign-On**.
+
+1. Select the **Single-sign on(SSO)** checkbox and click **+Add new SSO Provider**.
+
+ ![Screenshot shows the SSO Account.](./media/meta-work-accounts-tutorial/security.png "SSO Account")
+
+1. On the **Single Sign-On (SSO) Setup** page, perform the following steps:
+
+ ![Screenshot shows the SSO Configuration.](./media/meta-work-accounts-tutorial/certificate.png "Configuration")
+
+ 1. Enter a valid **Name of the SSO Provider**.
+
+ 1. In the **SAML URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ 1. In the **SAML Issuer URL** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ 1. Select the **Enable SAML logout redirection** checkbox, and in the **SAML Logout URL** textbox, paste the **Logout URL** value which you have copied from the Azure portal.
+
+ 1. Open the **Certificate (Base64)** you downloaded from the Azure portal in Notepad and paste its content into the **SAML Certificate** textbox.
+
+ 1. Copy the **Audience URL** value and paste it into the **Identifier** textbox in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Copy the **ACS (Assertion Consumer Service) URL** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. In the **Test SSO Setup** section, enter a valid email in the textbox and click **Test SSO**.
+
+ 1. Click **Save Changes**.
+
+### Create Meta Work Accounts test user
+
+In this section, you create a user called Britta Simon in Meta Work Accounts. Work with the [Work Accounts team](https://www.workplace.com/help/work) to add the users in the Meta Work Accounts platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Meta Work Accounts Sign on URL where you can initiate the login flow.
+
+* Go to Meta Work Accounts Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Meta Work Accounts for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Meta Work Accounts tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Meta Work Accounts for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Meta Work Accounts you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Tickitlms Learn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tickitlms-learn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type the URL:
- `https:/learn.tickitlms.com/sso/login`
+ `https://learn.tickitlms.com/sso/login`
1. Click **Save**.
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
Don't supply a request body for this method.
example message:
-```
+```json
{
- value:
+ "value":
[ { "id": "ZjViZjJmYzYtNzEzNS00ZDk0LWE2ZmUtYzI2ZTQ1NDNiYzVhPHNjcmlwdD5hbGVydCgneWF5IScpOzwvc2NyaXB0Pg",
example message:
"authorityId": "ffea7eb3-0000-1111-2222-000000000000", "status": "Enabled", "issueNotificationEnabled": false,
- "manifestUrl" : "https:/...",
+ "manifestUrl" : "https://...",
"rules": "<rules JSON>", "displays": [{<display JSON}] },
example message:
"authorityId": "cc55ba22-0000-1111-2222-000000000000", "status": "Enabled", "issueNotificationEnabled": false,
- "manifestUrl" : "https:/...",
+ "manifestUrl" : "https://...",
"rules": "<rules JSON>", "displays": [{<display JSON}] }
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Last updated 5/10/2022
Azure Kubernetes Service (AKS) uses certificates for authentication with many of its components. If you have an RBAC-enabled cluster built after March 2022, it is enabled with certificate auto-rotation. Periodically, you may need to rotate those certificates for security or policy reasons. For example, you may have a policy to rotate all your certificates every 90 days. > [!NOTE]
-> Certificate auto-rotation will not be enabled by default for non-RBAC enabled AKS clusters.
+> Certificate auto-rotation will *only* be enabled by default for RBAC-enabled AKS clusters.
This article shows you how certificate rotation works in your AKS cluster.
az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-
## Certificate Auto Rotation
-For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) which has been enabled by default in all Azure regions.
+For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) which has been enabled by default in all Azure regions.
> [!Note] > If you have an existing cluster, you have to upgrade that cluster to enable Certificate Auto-Rotation.
+> To keep auto-rotation enabled, do not disable TLS bootstrapping.
For any AKS clusters created or upgraded after March 2022 Azure Kubernetes Service will automatically rotate non-CA certificates on both the control plane and agent nodes within 80% of the client certificate valid time, before they expire with no downtime for the cluster.
az aks upgrade -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME
### Limitation
-Auto certificate rotation won't be enabled on a non-RBAC cluster.
+Certificate auto-rotation will only be enabled by default for RBAC-enabled AKS clusters.
## Manually rotate your cluster certificates
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
The following rules are used by AKS for applying updates to installed add-ons:
- Any breaking or behavior changes to the add-on will be announced well before (usually 60 days) a later minor version of Kubernetes is released on AKS. - Add-ons can be patched weekly with every new release of AKS, which will be announced in the release notes. AKS releases can be controlled using [maintenance windows][maintenance-windows] and followed using [release tracker][release-tracker].
+### Exceptions
+
+Add-ons will be upgraded to a new major/minor version (or breaking change) within a Kubernetes minor version if either the cluster's Kubernetes version or the add-on version is in preview.
+
+### Available add-ons
+ The following table shows the available add-ons. | Name | Description | More details |
aks Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-cluster.md
Last updated 06/29/2022
# Scale the node count in an Azure Kubernetes Service (AKS) cluster
-If the resource needs of your applications change, you can manually scale an AKS cluster to run a different number of nodes. When you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications. When you scale up, AKS waits until nodes are marked **Ready** by the Kubernetes cluster before pods are scheduled on them.
+If the resource needs of your applications change, your cluster performance may be impacted due to low CPU, memory, PID space, or disk capacity. To address these changes, you can manually scale your AKS cluster to run a different number of nodes. When you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications. When you scale up, AKS waits until nodes are marked **Ready** by the Kubernetes cluster before pods are scheduled on them.
## Scale the cluster nodes
In this article, you manually scaled an AKS cluster to increase or decrease the
[set-azakscluster]: /powershell/module/az.aks/set-azakscluster [cluster-autoscaler]: cluster-autoscaler.md [az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az_aks_nodepool_scale
-[update-azaksnodepool]: /powershell/module/az.aks/update-azaksnodepool
+[update-azaksnodepool]: /powershell/module/az.aks/update-azaksnodepool
api-management Developer Portal Integrate Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-integrate-application-insights.md
description: Learn how to integrate Application Insights into your managed or self-hosted developer portal. Previously updated : 03/25/2021 Last updated : 08/16/2022
A popular feature of Azure Monitor is Application Insights. It's an extensible A
Follow these steps to plug Application Insights into your managed or self-hosted developer portal. > [!IMPORTANT]
-> Steps 1 and 2 are not required for managed portals. If you have a managed portal, skip to step 4.
+> Steps 1 - 3 are not required for managed portals. If you have a managed portal, skip to step 4.
1. Set up a [local environment](developer-portal-self-host.md#step-1-set-up-local-environment) for the latest release of the developer portal.
Follow these steps to plug Application Insights into your managed or self-hosted
npm install @paperbits/azure --save ```
-1. In the `startup.publish.ts` file in the `src` folder, import and register the Application Insights module:
+1. In the `startup.publish.ts` file in the `src` folder, import and register the Application Insights module. Add the `AppInsightsPublishModule` after the existing modules in the dependency injection container:
```typescript import { AppInsightsPublishModule } from "@paperbits/azure"; ...
+ const injector = new InversifyInjector();
+ injector.bindModule(new CoreModule());
+ ...
injector.bindModule(new AppInsightsPublishModule());
+ injector.resolve("autostart");
```
-1. Retrieve the portal's configuration:
+1. Retrieve the portal's configuration using the [Content Item - Get](/rest/api/apimanagement/current-ga/content-item/get) REST API:
```http
- GET /contentTypes/document/contentItems/configuration
+ GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.ApiManagement/service/{api-management-service-name}/contentTypes/document/contentItems/configuration?api-version=2021-08-01
```
+
+ Output is similar to:
```json {
- "nodes": [
+ "id": "/contentTypes/document/contentItems/configuration",
+ "type": "Microsoft.ApiManagement/service/contentTypes/contentItems",
+ "name": "configuration",
+ "properties": {
+ "nodes": [
{ "site": { "title": "Microsoft Azure API Management - developer portal",
Follow these steps to plug Application Insights into your managed or self-hosted
} } ]
+ }
} ```
-1. Extend the site configuration from the previous step with Application Insights configuration:
+1. Extend the site configuration from the previous step with Application Insights configuration. Update the configuration using the [Content Item - Create or Update](/rest/api/apimanagement/current-ga/content-item/create-or-update) REST API. Pass the Application Insights instrumentation key in an `integration` node in the request body.
+ ```http
- PUT /contentTypes/document/contentItems/configuration
+ PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.ApiManagement/service/{api-management-service-name}/contentTypes/document/contentItems/configuration?api-version=2021-08-01
``` ```json {
+ "id": "/contentTypes/document/contentItems/configuration",
+ "type": "Microsoft.ApiManagement/service/contentTypes/contentItems",
+ "name": "configuration",
+ "properties": {
"nodes": [ { "site": { ... },
Follow these steps to plug Application Insights into your managed or self-hosted
} } ]
+ }
} ```
+1. After you update the configuration, [republish the portal](api-management-howto-developer-portal-customize.md#publish) for the changes to take effect.
+ ## Next steps Learn more about the developer portal:
api-management Developer Portal Integrate Google Tag Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-integrate-google-tag-manager.md
Follow the steps in this article to plug Google Tag Manager into your managed or
Follow these steps to plug Google Tag Manager into your managed or self-hosted developer portal. > [!IMPORTANT]
-> Steps 1 and 2 are not required for managed portals. If you have a managed portal, skip to step 4.
+> Steps 1 - 3 are not required for managed portals. If you have a managed portal, skip to step 4.
1. Set up a [local environment](developer-portal-self-host.md#step-1-set-up-local-environment) for the latest release of the developer portal.
app-service Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-monitoring.md
Azure App Service provides several monitoring options for monitoring resources f
## Diagnostic Settings (via Azure Monitor)
-Azure Monitor is a monitoring service that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premise. The Azure Monitor data platform collects data into logs and metrics where they can be analyzed. App Service monitoring data can be shipped to Azure Monitor through Diagnostic Settings.
+Azure Monitor is a monitoring service that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises. The Azure Monitor data platform collects data into logs and metrics where they can be analyzed. App Service monitoring data can be shipped to Azure Monitor through Diagnostic Settings.
Diagnostic settings let you export logs to other services, such as Log Analytics, a storage account, and Event Hubs. With Log Analytics, you can query large amounts of data using the SQL-like Kusto query language. You can capture platform logs in Azure Monitor Logs as configured via Diagnostic Settings, and instrument your app further with the dedicated application performance management feature (Application Insights) for additional telemetry and logs.
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on custom containers, see [Run a custom container in Azure]
| `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. || | `DOCKER_REGISTRY_SERVER_PASSWORD` | Password to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. || | `DOCKER_ENABLE_CI` | Set to `true` to enable the continuous deployment for custom containers. The default is `false` for custom containers. ||
-| `WEBSITE_PULL_IMAGE_OVER_VNET` | Connect and pull from a registry inside a Virtual Network or on-premise. Your app will need to be connected to a Virtual Network using VNet integration feature. This setting is also needed for Azure Container Registry with Private Endpoint. ||
+| `WEBSITE_PULL_IMAGE_OVER_VNET` | Connect and pull from a registry inside a Virtual Network or on-premises. Your app will need to be connected to a Virtual Network using VNet integration feature. This setting is also needed for Azure Container Registry with Private Endpoint. ||
| `WEBSITES_WEB_CONTAINER_NAME` | In a Docker Compose app, only one of the containers can be internet accessible. Set to the name of the container defined in the configuration file to override the default container selection. By default, the internet accessible container is the first container to define port 80 or 8080, or, when no such container is found, the first container defined in the configuration file. | | | `WEBSITES_PORT` | For a custom container, the custom port number on the container for App Service to route requests to. By default, App Service attempts automatic port detection of ports 80 and 8080. This setting is *not* injected into the container as an environment variable. || | `WEBSITE_CPU_CORES_LIMIT` | By default, a Windows container runs with all available cores for your chosen pricing tier. To reduce the number of cores, set to the number of desired cores limit. For more information, see [Customize the number of compute cores](configure-custom-container.md?pivots=container-windows#customize-the-number-of-compute-cores).||
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Azure App Service provides a web-based diagnostics console named Kudu. Kudu lets
To use Kudu, go to one of the following URLs. You'll need to sign into the Kudu site with your Azure credentials.
-* For apps deployed in Free, Shared, Basic, Standard, and Premium App Service plans - `https:/<app-name>.scm.azurewebsites.net`
+* For apps deployed in Free, Shared, Basic, Standard, and Premium App Service plans - `https://<app-name>.scm.azurewebsites.net`
* For apps deployed in Isolated service plans - `https://<app-name>.scm.<ase-name>.p.azurewebsites.net` From the main page in Kudu, you can find information about the application-hosting environment, app settings, deployments, and browse the files in the wwwroot directory.
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/private-link-security.md
For more information, see [Key Benefits of Private Link](../../private-link/pri
## Limitations -- In the current implementation of Private Link, Automation account cloud jobs cannot access Azure resources that are secured using private endpoint. For example, Azure Key Vault, Azure SQL, Azure Storage account, etc. To workaround this, use a [Hybrid Runbook Worker](../automation-hybrid-runbook-worker.md) instead. Hence, on-premise VMs are supported to run Hybrid Runbook Workers against an Automation Account with Private Link enabled.
+- In the current implementation of Private Link, Automation account cloud jobs cannot access Azure resources that are secured using private endpoint. For example, Azure Key Vault, Azure SQL, Azure Storage account, etc. To workaround this, use a [Hybrid Runbook Worker](../automation-hybrid-runbook-worker.md) instead. Hence, on-premises VMs are supported to run Hybrid Runbook Workers against an Automation Account with Private Link enabled.
- You need to use the latest version of the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) for Windows or Linux. - The [Log Analytics Gateway](../../azure-monitor/agents/gateway.md) does not support Private Link.
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
description: Learn what services are supported by availability zones and underst
Previously updated : 06/21/2022 Last updated : 08/18/2022
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure Backup](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure Cosmos DB](../cosmos-db/high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure DNS: Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure DNS: Azure DNS Private Resolver](../dns/dns-private-resolver-get-started-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Public IP](../virtual-network/ip-services/public-ip-addresses.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure Site Recovery](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
azure-app-configuration Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/cli-samples.md
Title: Azure CLI samples - Azure App Configuration description: Information about sample scripts provided for Azure App Configuration--++ Last updated 02/19/2020
azure-app-configuration Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-customer-managed-keys.md
Title: Use customer-managed keys to encrypt your configuration data description: Encrypt your configuration data using customer-managed keys--++ Last updated 07/28/2020
azure-app-configuration Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-disaster-recovery.md
Title: Azure App Configuration resiliency and disaster recovery description: Lean how to implement resiliency and disaster recovery with Azure App Configuration.--++ Last updated 07/09/2020
azure-app-configuration Concept Enable Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-enable-rbac.md
Title: Authorize access to Azure App Configuration using Azure Active Directory description: Enable Azure RBAC to authorize access to your Azure App Configuration instance--++ Last updated 05/26/2020
azure-app-configuration Concept Feature Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-feature-management.md
Title: Understand feature management using Azure App Configuration description: Turn features on and off using Azure App Configuration --++
azure-app-configuration Concept Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md
Title: Geo-replication in Azure App Configuration (Preview) description: Details of the geo-replication feature in Azure App Configuration. --++
azure-app-configuration Concept Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-github-action.md
Title: Sync your GitHub repository to App Configuration description: Use GitHub Actions to automatically update your App Configuration instance when you update your GitHub repository.--++ Last updated 05/28/2020
azure-app-configuration Concept Key Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-key-value.md
Title: Understand Azure App Configuration key-value store description: Understand key-value storage in Azure App Configuration, which stores configuration data as key-values. Key-values are a representation of application settings.--++ Last updated 08/04/2020
azure-app-configuration Concept Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-private-endpoint.md
Title: Using private endpoints for Azure App Configuration description: Secure your App Configuration store using private endpoints --++ Last updated 07/15/2020
azure-app-configuration Enable Dynamic Configuration Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md
description: In this tutorial, you learn how to dynamically update the configuration data for ASP.NET Core apps documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Last updated 09/1/2020-+ #Customer intent: I want to dynamically update my app to use the latest configuration data in App Configuration.
azure-app-configuration Enable Dynamic Configuration Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet.md
Title: '.NET Framework Tutorial: dynamic configuration in Azure App Configuration' description: In this tutorial, you learn how to dynamically update the configuration data for .NET Framework apps using Azure App Configuration. -+ ms.devlang: csharp Last updated 07/24/2020-+ #Customer intent: I want to dynamically update my .NET Framework app to use the latest configuration data in App Configuration.
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md
Title: Use Event Grid for App Configuration data change notifications description: Learn how to use Azure App Configuration event subscriptions to send key-value modification events to a web endpoint -+ ms.assetid: ms.devlang: csharp Last updated 03/04/2020-+
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
Title: Azure App Configuration best practices | Microsoft Docs
description: Learn best practices while using Azure App Configuration. Topics covered include key groupings, key-value compositions, App Configuration bootstrap, and more. documentationcenter: ''-+ editor: '' ms.assetid: Last updated 05/02/2019-+
azure-app-configuration Howto Feature Filters Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-feature-filters-aspnet-core.md
description: Learn how to use feature filters to enable conditional feature flag
ms.devlang: csharp --++ Last updated 3/9/2020
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
Title: Use managed identities to access App Configuration description: Authenticate to Azure App Configuration using managed identities--++
azure-app-configuration Howto Labels Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-labels-aspnet-core.md
description: This article describes how to use labels to retrieve app configuration values for the environment in which the app is currently running. ms.devlang: csharp-+ Last updated 3/12/2020-+ # Use labels to provide per-environment configuration values.
azure-app-configuration Howto Move Resource Between Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-move-resource-between-regions.md
Title: Move an App Configuration store to another region description: Learn how to move an App Configuration store to a different region. --++ Last updated 8/23/2021
azure-app-configuration Howto Targetingfilter Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-targetingfilter-aspnet-core.md
description: Learn how to enable staged rollout of features for targeted audiences ms.devlang: csharp--++ Last updated 11/20/2020
azure-app-configuration Integrate Ci Cd Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-ci-cd-pipeline.md
Title: Integrate Azure App Configuration using a continuous integration and delivery pipeline description: Learn to implement continuous integration and delivery using Azure App Configuration -+ Last updated 04/19/2020-+ # Customer intent: I want to use Azure App Configuration data in my CI/CD pipeline.
azure-app-configuration Monitor App Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md
Title: Monitoring Azure App Configuration data reference description: Important Reference material needed when you monitor App Configuration --++ Last updated 05/05/2021
azure-app-configuration Monitor App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md
Title: Monitor Azure App Configuration description: Start here to learn how to monitor App Configuration --++ Last updated 05/05/2021
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration
description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 08/16/2022 --++
azure-app-configuration Push Kv Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/push-kv-devops-pipeline.md
Title: Push settings to App Configuration with Azure Pipelines description: Learn to use Azure Pipelines to push key-values to an App Configuration Store -+ Last updated 02/23/2021-+ # Push settings to App Configuration with Azure Pipelines
azure-app-configuration Quickstart Aspnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-aspnet-core-app.md
Title: Quickstart for Azure App Configuration with ASP.NET Core | Microsoft Docs description: Create an ASP.NET Core app with Azure App Configuration to centralize storage and management of application settings for an ASP.NET Core application. -+ ms.devlang: csharp Last updated 1/3/2022-+ #Customer intent: As an ASP.NET Core developer, I want to learn how to manage all my app settings in one place. # Quickstart: Create an ASP.NET Core app with Azure App Configuration
azure-app-configuration Quickstart Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-app.md
Title: Quickstart for Azure App Configuration with .NET Framework | Microsoft Do
description: In this article, create a .NET Framework app with Azure App Configuration to centralize storage and management of application settings separate from your code. documentationcenter: ''-+ ms.devlang: csharp Last updated 09/28/2020-+ #Customer intent: As a .NET Framework developer, I want to manage all my app settings in one place. # Quickstart: Create a .NET Framework app with Azure App Configuration
azure-app-configuration Quickstart Feature Flag Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md
Title: Quickstart for adding feature flags to ASP.NET Core description: Add feature flags to ASP.NET Core apps and manage them using Azure App Configuration-+ ms.devlang: csharp Last updated 09/28/2020-+ #Customer intent: As an ASP.NET Core developer, I want to use feature flags to control feature availability quickly and confidently.
azure-app-configuration Quickstart Feature Flag Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp.md
Title: Quickstart for adding feature flags to Azure Functions | Microsoft Docs description: In this quickstart, use Azure Functions with feature flags from Azure App Configuration and test the function locally. -+ ms.devlang: csharp Last updated 8/26/2020-+ # Quickstart: Add feature flags to an Azure Functions app
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
Title: Quickstart for adding feature flags to .NET Framework apps | Microsoft Do
description: A quickstart for adding feature flags to .NET Framework apps and managing them in Azure App Configuration documentationcenter: ''-+ editor: '' ms.assetid:
.NET Last updated 10/19/2020-+ #Customer intent: As a .NET Framework developer, I want to use feature flags to control feature availability quickly and confidently. # Quickstart: Add feature flags to a .NET Framework app
azure-app-configuration Rest Api Authentication Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-azure-ad.md
Title: Azure Active Directory REST API - authentication description: Use Azure Active Directory to authenticate to Azure App Configuration by using the REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authentication Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-hmac.md
Title: Azure App Configuration REST API - HMAC authentication description: Use HMAC to authenticate to Azure App Configuration by using the REST API--++ ms.devlang: csharp, golang, java, javascript, powershell, python
azure-app-configuration Rest Api Authentication Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-index.md
 Title: Azure App Configuration REST API - Authentication description: Reference pages for authentication using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authorization Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-azure-ad.md
Title: Azure App Configuration REST API - Azure Active Directory authorization description: Use Azure Active Directory for authorization against Azure App Configuration by using the REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authorization Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-hmac.md
Title: Azure App Configuration REST API - HMAC authorization description: Use HMAC for authorization against Azure App Configuration using the REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authorization Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-index.md
 Title: Azure App Configuration REST API - Authorization description: Reference pages for authorization using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-consistency.md
 Title: Azure App Configuration REST API - consistency description: Reference pages for ensuring real-time consistency by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Fiddler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-fiddler.md
 Title: Azure Active Directory REST API - Test Using Fiddler description: Use Fiddler to test the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-headers.md
Title: Azure App Configuration REST API - Headers description: Reference pages for headers used with the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Key Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-key-value.md
 Title: Azure App Configuration REST API - key-value description: Reference pages for working with key-values by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-keys.md
Title: Azure App Configuration REST API - Keys description: Reference pages for working with keys using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-labels.md
Title: Azure App Configuration REST API - Labels description: Reference pages for working with labels using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-locks.md
Title: Azure App Configuration REST API - locks description: Reference pages for working with key-value locks by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-postman.md
 Title: Azure Active Directory REST API - Test by using Postman description: Use Postman to test the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-revisions.md
Title: Azure App Configuration REST API - key-value revisions description: Reference pages for working with key-value revisions by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-throttling.md
Title: Azure App Configuration REST API - Throttling description: Reference pages for understanding throttling when using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-versioning.md
Title: Azure App Configuration REST API - versioning description: Reference pages for versioning by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api.md
Title: Azure App Configuration REST API description: Reference pages for the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-create-service.md
Title: Azure CLI Script Sample - Create an Azure App Configuration Store
description: Create an Azure App Configuration store using a sample Azure CLI script. See reference article links to commands used in the script. -+ Last updated 01/24/2020-+
azure-app-configuration Cli Delete Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-delete-service.md
Title: Azure CLI Script Sample - Delete an Azure App Configuration Store
description: Delete an Azure App Configuration store using a sample Azure CLI script. See reference article links to commands used in the script. -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Cli Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-export.md
Title: Azure CLI Script Sample - Export from an Azure App Configuration Store
description: Use Azure CLI script to export configuration from Azure App Configuration -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Cli Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-import.md
Title: Azure CLI script sample - Import to an App Configuration store
description: Use Azure CLI script - Importing configuration to Azure App Configuration -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Cli Work With Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-work-with-keys.md
Title: Azure CLI Script Sample - Work with key-values in App Configuration Store
description: Use Azure CLI script to create, view, update and delete key values from App Configuration store -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration
description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 08/17/2022 --++
azure-app-configuration Use Feature Flags Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
Title: Tutorial for using feature flags in a .NET Core app | Microsoft Docs
description: In this tutorial, you learn how to implement feature flags in .NET Core apps. documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Last updated 09/17/2020-+ #Customer intent: I want to control feature availability in my app by using the .NET Core Feature Manager library.
azure-app-configuration Use Key Vault References Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
Title: Tutorial for using Azure App Configuration Key Vault references in an ASP
description: In this tutorial, you learn how to use Azure App Configuration's Key Vault references from an ASP.NET Core app documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Last updated 04/08/2020-+ #Customer intent: I want to update my ASP.NET Core application to reference values stored in Key Vault through App Configuration.
azure-fluid-relay Container Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-recovery.md
Fluid framework periodically saves state, called summary, without any explicit b
We've added following methods to AzureClient that will enable developers to recover data from corrupted containers.
-[`getContainerVersions(ID, options)`](https://fluidframework.com/docs/apis/azure-client/azureclient/#azure-client-azureclient-getcontainerversions-Method)
+[`getContainerVersions(ID, options)`](https://fluidframework.com/docs/apis/azure-client/azureclient#getcontainerversions-Method)
`getContainerVersions` allows developers to view the previously generated versions of the container.
-[copyContainer(ID, containerSchema)](https://fluidframework.com/docs/apis/azure-client/azureclient/#azure-client-azureclient-copycontainer-Method)
+[`copyContainer(ID, containerSchema)`](https://fluidframework.com/docs/apis/azure-client/azureclient#copycontainer-Method)
`copyContainer` allows developers to generate a new detached container from a specific version of another container.
azure-fluid-relay Fluid Json Web Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/fluid-json-web-token.md
Each part is separated by a period (.) and separately Base64 encoded.
| Claim | Format | Description | ||--|-|
-| documentId | string | Generated by FRS, identifies the document for which the token is being generated. |
+| documentId | string | Generated by Azure Fluid Relay (AFR) service. Identifies the document for which the token is being generated. |
| scope | string[] | Identifies the permissions required by the client on the document or summary. For every scope, you can define the permissions you want to give to the client. | | tenantId | string | Identifies the tenant. | | user | JSON | *Optional* `{ displayName: <display_name>, id: <user_id>, name: <user_name>, }` Identifies users of your application. This is sent back to your application by Alfred, the ordering service. It can be used by your application to identify your users from the response it gets from Alfred. Azure Fluid Relay doesn't validate this information. |
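For illustration, the following sketch assembles such a token with the `jsonwebtoken` package, using only the claims listed in the table above. The claim names follow the table; the scope strings, signing key, expiry, and any additional claims the service may require are assumptions. In practice, the `generateToken` helper in the `@fluidframework/azure-service-utils` package is the typical way to produce these tokens.
```typescript
// Illustrative only: build a token containing the claims from the table above.
// Claim names follow the table; scope values and signing details are assumptions.
import jwt from "jsonwebtoken";

const tenantId = "<azure-fluid-relay-tenant-id>";
const tenantKey = "<azure-fluid-relay-tenant-key>"; // keep this secret on the server

export function buildToken(documentId: string, userId: string, userName: string): string {
  return jwt.sign(
    {
      documentId,                                        // the document this token grants access to
      scope: ["doc:read", "doc:write", "summary:write"], // permissions granted to the client
      tenantId,
      user: { id: userId, name: userName },
    },
    tenantKey,
    { algorithm: "HS256", expiresIn: "1h" }
  );
}
```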
azure-fluid-relay Test Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/test-automation.md
fluid.url: https://fluidframework.com/docs/testing/testing/
Testing and automation are crucial to maintaining the quality and longevity of your code. Internally, Fluid uses a range of unit and integration tests powered by [Mocha](https://mochajs.org/), [Jest](https://jestjs.io/), [Puppeteer](https://github.com/puppeteer/puppeteer), and [Webpack](https://webpack.js.org/).
-You can run tests using the local **@fluidframework/azure-local-service** or using a test tenant in Azure Fluid Relay service. **AzureClient** can be configured to connect to both a remote service and a local service, which enables you to use a single client type between tests against live and local service instances. The only difference is the configuration used to create the client.
+You can run tests using the local [@fluidframework/azure-local-service](https://www.npmjs.com/package/@fluidframework/azure-local-service) or using a test tenant in Azure Fluid Relay service. [AzureClient](https://fluidframework.com/docs/apis/azure-client/azureclient) can be configured to connect to both a remote service and a local service, which enables you to use a single client type between tests against live and local service instances. The only difference is the configuration used to create the client.
## Automation against Azure Fluid Relay
azure-fluid-relay Validate Document Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/validate-document-creator.md
fluid.url: https://fluidframework.com/docs/apis/azure-client/itokenprovider/
# How to: Validate a User Created a Document
-When you create a document in Azure Fluid Relay, the JWT provided by the `ITokenProvider` for the creation request can only be used once. After creating a document, the client must generate a new JWT that contains the document ID provided by the service at creation time. If an application has an authorization service that manages document access control, it will need to know who created a document with a given ID in order to authorize the generation of a new JWT for access to that document.
+When you create a document in Azure Fluid Relay, the JWT provided by the [ITokenProvider](https://fluidframework.com/docs/apis/azure-client/itokenprovider/) for the creation request can only be used once. After creating a document, the client must generate a new JWT that contains the document ID provided by the service at creation time. If an application has an authorization service that manages document access control, it will need to know who created a document with a given ID in order to authorize the generation of a new JWT for access to that document.
## Inform an Authorization Service when a document is Created
-An application can tie into the document creation lifecycle by implementing a public `documentPostCreateCallback()` property in its `TokenProvider`. This callback will be triggered directly after creating the document, before a client requests the new JWT it needs to gain read/write permissions to the document that was created.
+An application can tie into the document creation lifecycle by implementing a public [documentPostCreateCallback()](https://fluidframework.com/docs/apis/azure-client/itokenprovider#documentpostcreatecallback-MethodSignature) method in its `TokenProvider`. This callback will be triggered directly after creating the document, before a client requests the new JWT it needs to gain read/write permissions to the document that was created.
The `documentPostCreateCallback()` receives two parameters: 1) the ID of the document that was created and 2) a JWT signed by the service with no permission scopes. The authorization service can verify the given JWT and use the information in the JWT to grant the correct user permissions for the newly created document.
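As a rough sketch (not the library's prescribed implementation), a token provider could forward that information to an authorization service as shown below. The endpoint and payload shape are assumptions; in a real app, the method would live on your `ITokenProvider` implementation alongside `fetchOrdererToken` and `fetchStorageToken`.
```typescript
// Sketch: notify a (hypothetical) authorization service when a document is created.
// The URL and request shape are assumptions for illustration.
export class AppTokenProvider {
  constructor(private readonly authServiceUrl: string) {}

  // Triggered directly after the document is created, before the client requests
  // the new JWT that grants read/write access to that document.
  public async documentPostCreateCallback(documentId: string, creationToken: string): Promise<void> {
    const response = await fetch(`${this.authServiceUrl}/documents/created`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ documentId, creationToken }),
    });
    if (!response.ok) {
      throw new Error(`Failed to register document ${documentId}: ${response.status}`);
    }
  }
}
```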
azure-functions Durable Functions Azure Storage Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-azure-storage-provider.md
+
+ Title: Azure Storage provider for Durable Functions
+description: Learn about the characteristics of the Durable Functions Azure Storage provider.
++ Last updated : 07/18/2022+++
+# Azure Storage provider (Azure Functions)
+
+This document describes the characteristics of the Durable Functions Azure Storage provider, with a focus on performance and scalability aspects. The Azure Storage provider is the default provider. It stores instance states and queues in an Azure Storage (classic) account.
+
+> [!NOTE]
+> For more information on the supported storage providers for Durable Functions and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+
+In the Azure Storage provider, all function execution is driven by Azure Storage queues. Orchestration and entity status and history are stored in Azure Tables. Azure Blobs and blob leases are used to distribute orchestration instances and entities across multiple app instances (also known as *workers* or simply *VMs*). This section goes into more detail on the various Azure Storage artifacts and how they impact performance and scalability.
+
+## Storage representation
+
+A [task hub](durable-functions-task-hubs.md) durably persists all instance states and all messages. For a quick overview of how these are used to track the progress of an orchestration, see the [task hub execution example](durable-functions-task-hubs.md#execution-example).
+
+The Azure Storage provider represents the task hub in storage using the following components:
+
+* Two Azure Tables store the instance states.
+* One Azure Queue stores the activity messages.
+* One or more Azure Queues store the instance messages. Each of these so-called *control queues* represents a [partition](durable-functions-perf-and-scale.md#partition-count) that is assigned a subset of all instance messages, based on the hash of the instance ID.
+* A few extra blob containers store lease blobs and/or large messages.
+
+For example, a task hub named `xyz` with `PartitionCount = 4` contains the following queues and tables:
+
+![Diagram showing Azure Storage provider storage organization for 4 control queues.](./media/durable-functions-task-hubs/azure-storage.png)
+
+Next, we describe these components and the role they play in more detail.
+
+### History table
+
+The **History** table is an Azure Storage table that contains the history events for all orchestration instances within a task hub. The name of this table is in the form *TaskHubName*History. As instances run, new rows are added to this table. The partition key of this table is derived from the instance ID of the orchestration. Instance IDs are random by default, ensuring optimal distribution of internal partitions in Azure Storage. The row key for this table is a sequence number used for ordering the history events.
+
+When an orchestration instance needs to run, the corresponding rows of the History table are loaded into memory using a range query within a single table partition. These *history events* are then replayed into the orchestrator function code to get it back into its previously checkpointed state. The use of execution history to rebuild state in this way is influenced by the [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing).
+
+> [!TIP]
+> Orchestration data stored in the History table includes output payloads from activity and sub-orchestrator functions. Payloads from external events are also stored in the History table. Because the full history is loaded into memory every time an orchestrator needs to execute, a large enough history can result in significant memory pressure on a given VM. The length and size of the orchestration history can be reduced by splitting large orchestrations into multiple sub-orchestrations or by reducing the size of outputs returned by the activity and sub-orchestrator functions it calls. Alternatively, you can reduce memory usage by lowering per-VM [concurrency throttles](durable-functions-perf-and-scale.md#concurrency-throttles) to limit how many orchestrations are loaded into memory concurrently.
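+
+As an illustration, the history rows for a single instance can be inspected with the `@azure/data-tables` package. This sketch assumes the partition key equals the instance ID and that an `EventType` column exists; treat both as assumptions rather than a documented contract.
+
+```typescript
+// Sketch: read one orchestration's history rows (a single-partition range query).
+import { TableClient } from "@azure/data-tables";
+
+async function readHistory(connectionString: string, taskHub: string, instanceId: string): Promise<void> {
+  const client = TableClient.fromConnectionString(connectionString, `${taskHub}History`);
+  const rows = client.listEntities({
+    queryOptions: { filter: `PartitionKey eq '${instanceId}'` }, // partition key assumed to be the instance ID
+  });
+  for await (const row of rows) {
+    console.log(row.rowKey, row["EventType"]); // "EventType" column name is an assumption
+  }
+}
+```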
+
+### Instances table
+
+The **Instances** table contains the statuses of all orchestration and entity instances within a task hub. As instances are created, new rows are added to this table. The partition key of this table is the orchestration instance ID or entity key and the row key is an empty string. There is one row per orchestration or entity instance.
+
+This table is used to satisfy [instance query requests from code](durable-functions-instance-management.md#query-instances) as well as [status query HTTP API](durable-functions-http-api.md#get-instance-status) calls. It is kept eventually consistent with the contents of the **History** table mentioned previously. The use of a separate Azure Storage table to efficiently satisfy instance query operations in this way is influenced by the [Command and Query Responsibility Segregation (CQRS) pattern](/azure/architecture/patterns/cqrs).
+
+> [!TIP]
+> The partitioning of the *Instances* table allows it to store millions of orchestration instances without any noticeable impact on runtime performance or scale. However, the number of instances can have a significant impact on [multi-instance query](durable-functions-instance-management.md#query-all-instances) performance. To control the amount of data stored in these tables, consider periodically [purging old instance data](durable-functions-instance-management.md#purge-instance-history).
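+
+One hedged way to purge old instance data is through the Durable Functions HTTP API. In the sketch below, the host name, task hub, time window, and system key are placeholders.
+
+```typescript
+// Sketch: purge completed instance histories older than a cutoff via the HTTP API.
+async function purgeOldInstances(): Promise<void> {
+  const baseUrl = "https://<function-app>.azurewebsites.net/runtime/webhooks/durabletask";
+  const query = new URLSearchParams({
+    taskHub: "xyz",
+    createdTimeFrom: "2021-01-01T00:00:00Z",
+    createdTimeTo: "2022-01-01T00:00:00Z",
+    runtimeStatus: "Completed,Failed,Terminated",
+    code: "<durabletask-extension-system-key>",
+  });
+
+  const response = await fetch(`${baseUrl}/instances?${query}`, { method: "DELETE" });
+  console.log(await response.json()); // e.g. a count of deleted instances
+}
+```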
+
+### Queues
+
+Orchestrator, entity, and activity functions are all triggered by internal queues in the function app's task hub. Using queues in this way provides reliable "at-least-once" message delivery guarantees. There are two types of queues in Durable Functions: the **control queue** and the **work-item queue**.
+
+#### The work-item queue
+
+There is one work-item queue per task hub in Durable Functions. It's a basic queue and behaves similarly to any other `queueTrigger` queue in Azure Functions. This queue is used to trigger stateless *activity functions* by dequeueing a single message at a time. Each of these messages contains activity function inputs and additional metadata, such as which function to execute. When a Durable Functions application scales out to multiple VMs, these VMs all compete to acquire tasks from the work-item queue.
+
+#### Control queue(s)
+
+There are multiple *control queues* per task hub in Durable Functions. A *control queue* is more sophisticated than the simpler work-item queue. Control queues are used to trigger the stateful orchestrator and entity functions. Because the orchestrator and entity function instances are stateful singletons, it's important that each orchestration or entity is only processed by one worker at a time. To achieve this constraint, each orchestration instance or entity is assigned to a single control queue. These control queues are load balanced across workers to ensure that each queue is only processed by one worker at a time. More details on this behavior can be found in subsequent sections.
+
+Control queues contain a variety of orchestration lifecycle message types. Examples include [orchestrator control messages](durable-functions-instance-management.md), activity function *response* messages, and timer messages. As many as 32 messages will be dequeued from a control queue in a single poll. These messages contain payload data as well as metadata, including which orchestration instance they're intended for. If multiple dequeued messages are intended for the same orchestration instance, they will be processed as a batch.
+
+Control queue messages are constantly polled using a background thread. The batch size of each queue poll is controlled by the `controlQueueBatchSize` setting in host.json and has a default of 32 (the maximum value supported by Azure Queues). The maximum number of prefetched control-queue messages that are buffered in memory is controlled by the `controlQueueBufferThreshold` setting in host.json. The default value for `controlQueueBufferThreshold` varies depending on a variety of factors, including the type of hosting plan. For more information on these settings, see the [host.json schema](../functions-host-json.md#durabletask) documentation.
+
+> [!TIP]
+> Increasing the value for `controlQueueBufferThreshold` allows a single orchestration or entity to process events faster. However, increasing this value can also result in higher memory usage. The higher memory usage is partly due to pulling more messages off the queue and partly due to fetching more orchestration histories into memory. Reducing the value for `controlQueueBufferThreshold` can therefore be an effective way to reduce memory usage.
+
+#### Queue polling
+
+The durable task extension implements a random exponential back-off algorithm to reduce the effect of idle-queue polling on storage transaction costs. When a message is found, the runtime immediately checks for another message. When no message is found, it waits for a period of time before trying again. After subsequent failed attempts to get a queue message, the wait time continues to increase until it reaches the maximum wait time, which defaults to 30 seconds.
+
+The maximum polling delay is configurable via the `maxQueuePollingInterval` property in the [host.json file](../functions-host-json.md#durabletask). Setting this property to a higher value could result in higher message processing latencies. Higher latencies would be expected only after periods of inactivity. Setting this property to a lower value could result in [higher storage costs](durable-functions-billing.md#azure-storage-transactions) due to increased storage transactions.
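+
+The following sketch illustrates the general shape of random exponential back-off polling. It is not the Durable Task Framework's actual implementation; the base delay and jitter are assumptions.
+
+```typescript
+// Illustrative back-off loop: poll immediately while messages are found, otherwise
+// wait increasingly longer (with jitter) up to the configured maximum interval.
+const maxQueuePollingIntervalMs = 30_000; // mirrors the 30-second default described above
+
+async function pollLoop(tryDequeue: () => Promise<boolean>): Promise<void> {
+  let delayMs = 0;
+  while (true) {
+    if (await tryDequeue()) {
+      delayMs = 0; // a message was found: check again immediately
+      continue;
+    }
+    delayMs = Math.min(maxQueuePollingIntervalMs, Math.max(100, delayMs * 2) * (1 + Math.random()));
+    await new Promise((resolve) => setTimeout(resolve, delayMs));
+  }
+}
+```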
+
+> [!NOTE]
+> When running in the Azure Functions Consumption and Premium plans, the [Azure Functions Scale Controller](../event-driven-scaling.md) will poll each control and work-item queue once every 10 seconds. This additional polling is necessary to determine when to activate function app instances and to make scale decisions. At the time of writing, this 10 second interval is constant and cannot be configured.
+
+#### Orchestration start delays
+
+Orchestration instances are started by putting an `ExecutionStarted` message in one of the task hub's control queues. Under certain conditions, you may observe multi-second delays between when an orchestration is scheduled to run and when it actually starts running. During this time interval, the orchestration instance remains in the `Pending` state. There are two potential causes of this delay:
+
+* **Backlogged control queues**: If the control queue for this instance contains a large number of messages, it may take time before the `ExecutionStarted` message is received and processed by the runtime. Message backlogs can happen when orchestrations are processing lots of events concurrently. Events that go into the control queue include orchestration start events, activity completions, durable timers, termination, and external events. If this delay happens under normal circumstances, consider creating a new task hub with a larger number of partitions. Configuring more partitions will cause the runtime to create more control queues for load distribution. Partitions correspond 1:1 with control queues, with a maximum of 16 partitions. (A sketch for checking queue backlogs follows this list.)
+
+* **Back-off polling delays**: Another common cause of orchestration delays is the [previously described back-off polling behavior for control queues](#queue-polling). However, this delay is only expected when an app is scaled out to two or more instances. If there is only one app instance or if the app instance that starts the orchestration is also the same instance that is polling the target control queue, then there will not be a queue polling delay. Back-off polling delays can be reduced by updating the **host.json** settings, as described previously.
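+
+To help diagnose the first cause, you can check the approximate depth of each queue. This sketch uses `@azure/storage-queue` and assumes the `<taskhub>-workitems` and `<taskhub>-control-NN` naming convention shown in the storage layout earlier.
+
+```typescript
+// Sketch: print approximate message counts for the work-item and control queues.
+import { QueueServiceClient } from "@azure/storage-queue";
+
+async function printQueueDepths(connectionString: string, taskHub: string, partitionCount: number): Promise<void> {
+  const service = QueueServiceClient.fromConnectionString(connectionString);
+  const names = [
+    `${taskHub}-workitems`,
+    ...Array.from({ length: partitionCount }, (_, i) => `${taskHub}-control-${String(i).padStart(2, "0")}`),
+  ];
+  for (const name of names) {
+    const props = await service.getQueueClient(name).getProperties();
+    console.log(`${name}: ~${props.approximateMessagesCount} messages`);
+  }
+}
+```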
+
+### Blobs
+
+In most cases, Durable Functions doesn't use Azure Storage Blobs to persist data. However, queues and tables have [size limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-queue-storage-limits) that can prevent Durable Functions from persisting all of the required data into a storage row or queue message. For example, when a piece of data that needs to be persisted to a queue is greater than 45 KB when serialized, Durable Functions will compress the data and store it in a blob instead. When persisting data to blob storage in this way, Durable Functions stores a reference to that blob in the table row or queue message. When Durable Functions needs to retrieve the data, it will automatically fetch it from the blob. These blobs are stored in the blob container `<taskhub>-largemessages`.
+
+#### Performance considerations
+
+The extra compression and blob operation steps for large messages can be expensive in terms of CPU and I/O latency costs. Additionally, Durable Functions needs to load persisted data in memory, and may do so for many different function executions at the same time. As a result, persisting large data payloads can cause high memory usage as well. To minimize memory overhead, consider persisting large data payloads manually (for example, in blob storage) and instead pass around references to this data. This way, your code can load the data only when needed, avoiding redundant loads during [orchestrator function replays](durable-functions-orchestrations.md#reliability). However, storing payloads on local disks is *not* recommended, because on-disk state isn't guaranteed to be available when functions execute on different VMs throughout their lifetimes.
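+
+For example, here's a minimal sketch of that pattern with `@azure/storage-blob`: persist the payload yourself and pass only a small blob reference through the orchestration. The container and blob names are illustrative.
+
+```typescript
+// Sketch: store a large payload in Blob storage and pass a reference instead.
+import { BlobServiceClient } from "@azure/storage-blob";
+
+async function storePayload(connectionString: string, orderId: string, payload: object): Promise<string> {
+  const container = BlobServiceClient.fromConnectionString(connectionString).getContainerClient("large-payloads");
+  await container.createIfNotExists();
+  const blobName = `orders/${orderId}.json`;
+  const body = JSON.stringify(payload);
+  await container.getBlockBlobClient(blobName).upload(body, Buffer.byteLength(body));
+  return blobName; // pass this reference to activities instead of the payload itself
+}
+
+async function loadPayload(connectionString: string, blobName: string): Promise<object> {
+  const container = BlobServiceClient.fromConnectionString(connectionString).getContainerClient("large-payloads");
+  const buffer = await container.getBlockBlobClient(blobName).downloadToBuffer();
+  return JSON.parse(buffer.toString());
+}
+```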
+
+### Storage account selection
+
+The queues, tables, and blobs used by Durable Functions are created in a configured Azure Storage account. The account to use can be specified using the `durableTask/storageProvider/connectionStringName` setting (or `durableTask/azureStorageConnectionStringName` setting in Durable Functions 1.x) in the **host.json** file.
+
+#### Durable Functions 2.x
+
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "storageProvider": {
+ "connectionStringName": "MyStorageAccountAppSetting"
+ }
+ }
+ }
+}
+```
+
+#### Durable Functions 1.x
+
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "azureStorageConnectionStringName": "MyStorageAccountAppSetting"
+ }
+ }
+}
+```
+
+If not specified, the default `AzureWebJobsStorage` storage account is used. For performance-sensitive workloads, however, configuring a non-default storage account is recommended. Durable Functions uses Azure Storage heavily, and using a dedicated storage account isolates Durable Functions storage usage from the internal usage by the Azure Functions host.
+
+> [!NOTE]
+> Standard general purpose Azure Storage accounts are required when using the Azure Storage provider. All other storage account types are not supported. We highly recommend using legacy v1 general purpose storage accounts for Durable Functions. The newer v2 storage accounts can be significantly more expensive for Durable Functions workloads. For more information on Azure Storage account types, see the [Storage account overview](../../storage/common/storage-account-overview.md) documentation.
+
+### Orchestrator scale-out
+
+While activity functions can be scaled out infinitely by adding more VMs elastically, individual orchestrator instances and entities are constrained to inhabit a single partition and the maximum number of partitions is bounded by the `partitionCount` setting in your `host.json`.
+
+> [!NOTE]
+> Generally speaking, orchestrator functions are intended to be lightweight and should not require large amounts of computing power. It is therefore not necessary to create a large number of control-queue partitions to get great throughput for orchestrations. Most of the heavy work should be done in stateless activity functions, which can be scaled out infinitely.
+
+The number of control queues is defined in the **host.json** file. The following example host.json snippet sets the `durableTask/storageProvider/partitionCount` property (or `durableTask/partitionCount` in Durable Functions 1.x) to `3`. Note that there are as many control queues as there are partitions.
+
+#### Durable Functions 2.x
+
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "storageProvider": {
+ "partitionCount": 3
+ }
+ }
+ }
+}
+```
+
+#### Durable Functions 1.x
+
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "partitionCount": 3
+ }
+ }
+}
+```
+
+A task hub can be configured with between 1 and 16 partitions. If not specified, the default partition count is **4**.
+
+During low traffic scenarios, your application will be scaled-in, so partitions will be managed by a small number of workers. As an example, consider the diagram below.
+
+![Scale-in orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-1.png)
+
+In the previous diagram, we see that orchestrators 1 through 6 are load balanced across partitions. Similarly, partitions, like activities, are load balanced across workers. Partitions are load-balanced across workers regardless of the number of orchestrators that get started.
+
+If you're running on the Azure Functions Consumption or Elastic Premium plans, or if you have load-based auto-scaling configured, more workers will get allocated as traffic increases and partitions will eventually load balance across all workers. If we continue to scale out, each partition will eventually be managed by a single worker. Activities, on the other hand, will continue to be load-balanced across all workers. This is shown in the image below.
+
+![First scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-2.png)
+
+The upper-bound of the maximum number of concurrent _active_ orchestrations at *any given time* is equal to the number of workers allocated to your application _times_ your value for `maxConcurrentOrchestratorFunctions`. This upper-bound can be made more precise when your partitions are fully scaled-out across workers. When fully scaled-out, and since each worker will have only a single Functions host instance, the maximum number of _active_ concurrent orchestrator instances will be equal to your number of partitions _times_ your value for `maxConcurrentOrchestratorFunctions`.
+
+> [!NOTE]
+> In this context, *active* means that an orchestration or entity is loaded into memory and processing *new events*. If the orchestration or entity is waiting for more events, such as the return value of an activity function, it gets unloaded from memory and is no longer considered *active*. Orchestrations and entities will be subsequently reloaded into memory only when there are new events to process. There's no practical maximum number of *total* orchestrations or entities that can run on a single VM, even if they're all in the "Running" state. The only limitation is the number of *concurrently active* orchestration or entity instances.
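+
+As a quick worked example of the bounds described above (all numbers illustrative):
+
+```typescript
+// Illustrative arithmetic for the active-orchestration upper bounds described above.
+const partitionCount = 4;                      // host.json partition count
+const maxConcurrentOrchestratorFunctions = 10; // per-worker concurrency throttle
+const workerCount = 12;                        // instances currently allocated to the app
+
+// General upper bound: workers x per-worker throttle.
+const generalUpperBound = workerCount * maxConcurrentOrchestratorFunctions;                             // 120
+
+// Fully scaled out, at most one worker owns each partition, so the tighter bound applies.
+const fullyScaledOutBound = Math.min(workerCount, partitionCount) * maxConcurrentOrchestratorFunctions; // 40
+
+console.log({ generalUpperBound, fullyScaledOutBound });
+```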
+
+The image below illustrates a fully scaled-out scenario where more orchestrators are added but some are inactive, shown in grey.
+
+![Second scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-3.png)
+
+During scale-out, control queue leases may be redistributed across Functions host instances to ensure that partitions are evenly distributed. These leases are internally implemented as Azure Blob storage leases and ensure that any individual orchestration instance or entity only runs on a single host instance at a time. If a task hub is configured with three partitions (and therefore three control queues), orchestration instances and entities can be load-balanced across all three lease-holding host instances. Additional VMs can be added to increase capacity for activity function execution.
+
+The following diagram illustrates how the Azure Functions host interacts with the storage entities in a scaled out environment.
+
+![Scale diagram](./media/durable-functions-perf-and-scale/scale-interactions-diagram.png)
+
+As shown in the previous diagram, all VMs compete for messages on the work-item queue. However, only three VMs can acquire messages from control queues, and each VM locks a single control queue.
+
+Orchestration instances and entities are distributed across all control queue instances. The distribution is done by hashing the instance ID of the orchestration or the entity name and key pair. Orchestration instance IDs by default are random GUIDs, ensuring that instances are equally distributed across all control queues.
+
+Generally speaking, orchestrator functions are intended to be lightweight and should not require large amounts of computing power. It is therefore not necessary to create a large number of control queue partitions to get great throughput for orchestrations. Most of the heavy work should be done in stateless activity functions, which can be scaled out infinitely.
+
+## Extended sessions
+
+Extended sessions is a [caching mechanism](durable-functions-perf-and-scale.md#instance-caching) that keeps orchestrations and entities in memory even after they finish processing messages. The typical effect of enabling extended sessions is reduced I/O against the underlying durable store and overall improved throughput.
+
+You can enable extended sessions by setting `durableTask/extendedSessionsEnabled` to `true` in the **host.json** file. The `durableTask/extendedSessionIdleTimeoutInSeconds` setting can be used to control how long an idle session will be held in memory:
+
+**Functions 2.0**
+```json
+{
+ "extensions": {
+ "durableTask": {
+ "extendedSessionsEnabled": true,
+ "extendedSessionIdleTimeoutInSeconds": 30
+ }
+ }
+}
+```
+
+**Functions 1.0**
+```json
+{
+ "durableTask": {
+ "extendedSessionsEnabled": true,
+ "extendedSessionIdleTimeoutInSeconds": 30
+ }
+}
+```
+
+There are two potential downsides of this setting to be aware of:
+
+1. There's an overall increase in function app memory usage because idle instances are not unloaded from memory as quickly.
+2. There can be an overall decrease in throughput if there are many concurrent, distinct, short-lived orchestrator or entity function executions.
+
+As an example, if `durableTask/extendedSessionIdleTimeoutInSeconds` is set to 30 seconds, then a short-lived orchestrator or entity function episode that executes in less than 1 second still occupies memory for 30 seconds. It also counts against the `durableTask/maxConcurrentOrchestratorFunctions` quota mentioned previously, potentially preventing other orchestrator or entity functions from running.
+
+The specific effects of extended sessions on orchestrator and entity functions are described in the next sections.
+
+> [!NOTE]
+> Extended sessions are currently only supported in .NET languages, like C# or F#. Setting `extendedSessionsEnabled` to `true` for other platforms can lead to runtime issues, such as silently failing to execute activity and orchestration-triggered functions.
+
+### Orchestrator function replay
+
+As mentioned previously, orchestrator functions are replayed using the contents of the **History** table. By default, the orchestrator function code is replayed every time a batch of messages is dequeued from a control queue. Even if you are using the fan-out, fan-in pattern and are waiting for all tasks to complete (for example, using `Task.WhenAll()` in .NET, `context.df.Task.all()` in JavaScript, or `context.task_all()` in Python), there will be replays that occur as batches of task responses are processed over time. When extended sessions are enabled, orchestrator function instances are held in memory longer and new messages can be processed without a full history replay.
+
+The performance improvement of extended sessions is most often observed in the following situations:
+
+* When there are a limited number of orchestration instances running concurrently.
+* When orchestrations have a large number of sequential actions (for example, hundreds of activity function calls) that complete quickly.
+* When orchestrations fan out and fan in a large number of actions that complete around the same time.
+* When orchestrator functions need to process large messages or do any CPU-intensive data processing.
+
+In all other situations, there is typically no observable performance improvement for orchestrator functions.
+
+> [!NOTE]
+> These settings should only be used after an orchestrator function has been fully developed and tested. The default aggressive replay behavior can be useful for detecting [orchestrator function code constraints](durable-functions-code-constraints.md) violations at development time, which is why extended sessions are disabled by default.
+
+### Performance targets
+
+The following table shows the expected *maximum* throughput numbers for the scenarios described in the [Performance Targets](durable-functions-perf-and-scale.md#performance-targets) section of the [Performance and Scale](durable-functions-perf-and-scale.md) article.
+
+"Instance" refers to a single instance of an orchestrator function running on a single small ([A1](../../virtual-machines/sizes-previous-gen.md)) VM in Azure App Service. In all cases, it is assumed that [extended sessions](#orchestrator-function-replay) are enabled. Actual results may vary depending on the CPU or I/O work performed by the function code.
+
+| Scenario | Maximum throughput |
+|-|-|
+| Sequential activity execution | 5 activities per second, per instance |
+| Parallel activity execution (fan-out) | 100 activities per second, per instance |
+| Parallel response processing (fan-in) | 150 responses per second, per instance |
+| External event processing | 50 events per second, per instance |
+| Entity operation processing | 64 operations per second |
+
+If you are not seeing the throughput numbers you expect and your CPU and memory usage appears healthy, check to see whether the cause is related to [the health of your storage account](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md#troubleshooting-guidance). The Durable Functions extension can put significant load on an Azure Storage account and sufficiently high loads may result in storage account throttling.
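+
+One way to reduce contention on the storage account is to point the Durable Functions extension at a dedicated storage account instead of the default `AzureWebJobsStorage` account used by the Functions host. The following **host.json** snippet is a minimal sketch for Durable Functions 2.x; it assumes you have created an app setting named `MyStorageAccountAppSetting` that holds the connection string of the dedicated account:
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "storageProvider": {
+        "connectionStringName": "MyStorageAccountAppSetting"
+      }
+    }
+  }
+}
+```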
+
+> [!TIP]
+> In some cases you can significantly increase the throughput of external events, activity fan-in, and entity operations by increasing the value of the `controlQueueBufferThreshold` setting in **host.json**. Increasing this value beyond its default causes the Durable Task Framework storage provider to use more memory to prefetch these events more aggressively, reducing delays associated with dequeueing messages from the Azure Storage control queues. For more information, see the [host.json](durable-functions-bindings.md#host-json) reference documentation.
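+
+For illustration, a **host.json** sketch that raises this threshold for Durable Functions 2.x might look like the following. The value `512` is only an example; the appropriate number depends on your workload and available memory, and for Durable Functions 2.x the Azure Storage settings nest under `storageProvider`:
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "storageProvider": {
+        "controlQueueBufferThreshold": 512
+      }
+    }
+  }
+}
+```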
+
+### High throughput processing
+
+The architecture of the Azure Storage backend puts certain limitations on the maximum theoretical performance and scalability of Durable Functions. If your testing shows that Durable Functions on Azure Storage won't meet your throughput requirements, you should instead consider using the [Netherite storage provider for Durable Functions](durable-functions-storage-providers.md#netherite).
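+
+Switching backends is primarily a configuration change, in addition to installing the Netherite extension package. As a rough sketch (the exact property names and required app settings are described in the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation and may differ), a Netherite **host.json** configuration looks similar to the following, where `EventHubsConnection` is an app setting you define that holds an Azure Event Hubs connection string:
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "storageProvider": {
+        "type": "Netherite",
+        "storageConnectionName": "AzureWebJobsStorage",
+        "eventHubsConnectionName": "EventHubsConnection"
+      }
+    }
+  }
+}
+```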
+
+To compare the achievable throughput for various basic scenarios, see the section [Basic Scenarios](https://microsoft.github.io/durabletask-netherite/#/scenarios) of the Netherite storage provider documentation.
+
+The Netherite storage backend was designed and developed by [Microsoft Research](https://www.microsoft.com/research). It uses [Azure Event Hubs](../../event-hubs/event-hubs-about.md) and the [FASTER](https://www.microsoft.com/research/project/faster/) database technology on top of [Azure Page Blobs](../../storage/blobs/storage-blob-pageblob-overview.md). The design of Netherite enables significantly higher-throughput processing of orchestrations and entities compared to other providers. In some benchmark scenarios, throughput was shown to increase by more than an order of magnitude when compared to the default Azure Storage provider.
+
+For more information on the supported storage providers for Durable Functions and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md)
azure-functions Durable Functions Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-billing.md
Durable Functions uses Azure Storage by default to keep state persistent, proces
Several factors contribute to the actual Azure Storage costs incurred by your Durable Functions app: * A single function app is associated with a single task hub, which shares a set of Azure Storage resources. These resources are used by all durable functions in a function app. The actual number of functions in the function app has no effect on Azure Storage transaction costs.
-* Each function app instance internally polls multiple queues in the storage account by using an exponential-backoff polling algorithm. An idle app instance polls the queues less often than does an active app, which results in fewer transaction costs. For more information about Durable Functions queue-polling behavior, see the [queue-polling section of the Performance and Scale article](durable-functions-perf-and-scale.md#queue-polling).
+* Each function app instance internally polls multiple queues in the storage account by using an exponential-backoff polling algorithm. An idle app instance polls the queues less often than does an active app, which results in fewer transaction costs. For more information about Durable Functions queue-polling behavior when using the Azure Storage provider, see the [queue-polling section](durable-functions-azure-storage-provider.md#queue-polling) of the Azure Storage provider documentation.
* When running in the Azure Functions Consumption or Premium plans, the [Azure Functions scale controller](../event-driven-scaling.md) regularly polls all task-hub queues in the background. If a function app is under light to moderate scale, only a single scale controller instance will poll these queues. If the function app scales out to a large number of instances, more scale controller instances might be added. These additional scale controller instances can increase the total queue-transaction costs. * Each function app instance competes for a set of blob leases. These instances will periodically make calls to the Azure Blob service either to renew held leases or to attempt to acquire new leases. The task hub's configured partition count determines the number of blob leases. Scaling out to a larger number of function app instances likely increases the Azure Storage transaction costs associated with these lease operations.
azure-functions Durable Functions Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-diagnostics.md
Azure Functions supports debugging function code directly, and that same support
* **Replay**: Orchestrator functions regularly [replay](durable-functions-orchestrations.md#reliability) when new inputs are received. This behavior means a single *logical* execution of an orchestrator function can result in hitting the same breakpoint multiple times, especially if it is set early in the function code. * **Await**: Whenever an `await` is encountered in an orchestrator function, it yields control back to the Durable Task Framework dispatcher. If it is the first time a particular `await` has been encountered, the associated task is *never* resumed. Because the task never resumes, stepping *over* the await (F10 in Visual Studio) is not possible. Stepping over only works when a task is being replayed. * **Messaging timeouts**: Durable Functions internally uses queue messages to drive execution of orchestrator, activity, and entity functions. In a multi-VM environment, breaking into the debugging for extended periods of time could cause another VM to pick up the message, resulting in duplicate execution. This behavior exists for regular queue-trigger functions as well, but is important to point out in this context since the queues are an implementation detail.
-* **Stopping and starting**: Messages in Durable functions persist between debug sessions. If you stop debugging and terminate the local host process while a durable function is executing, that function may re-execute automatically in a future debug session. This behavior can be confusing when not expected. Clearing all messages from the [internal storage queues](durable-functions-perf-and-scale.md#internal-queue-triggers) between debug sessions is one technique to avoid this behavior.
+* **Stopping and starting**: Messages in Durable Functions persist between debug sessions. If you stop debugging and terminate the local host process while a durable function is executing, that function may re-execute automatically in a future debug session. This behavior can be confusing when not expected. Using a [fresh task hub](durable-functions-task-hubs.md#task-hub-management) or clearing the task hub contents between debug sessions is one technique to avoid this behavior.
> [!TIP] > When setting breakpoints in orchestrator functions, if you want to only break on non-replay execution, you can set a conditional breakpoint that breaks only if the "is replaying" value is `false`.
azure-functions Durable Functions Http Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-http-api.md
Request parameters for this API include the default set mentioned previously as
| **`createdTimeFrom`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or after the given ISO8601 timestamp.| | **`createdTimeTo`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or before the given ISO8601 timestamp.| | **`runtimeStatus`** | Query string | Optional parameter. When specified, filters the list of returned instances based on their runtime status. To see the list of possible runtime status values, see the [Querying instances](durable-functions-instance-management.md) article. |
-| **`instanceIdPrefix`** | Query string | Optional parameter. When specified, filters the list of returned instances to include only instances whose instance id starts with the specified prefix string. Available starting with [version 2.7.2](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask/2.7.2) of the extension. |
+| **`instanceIdPrefix`** | Query string | Optional parameter. When specified, filters the list of returned instances to include only instances whose instance ID starts with the specified prefix string. Available starting with [version 2.7.2](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask/2.7.2) of the extension. |
| **`top`** | Query string | Optional parameter. When specified, limits the number of instances returned by the query. | ### Response
Here is an example of response payloads including the orchestration status (form
``` > [!NOTE]
-> This operation can be very expensive in terms of Azure Storage I/O if you are using the [default Azure Storage provider](durable-functions-storage-providers.md#azure-storage) and if there are a lot of rows in the Instances table. More details on Instance table can be found in the [Performance and scale in Durable Functions (Azure Functions)](durable-functions-perf-and-scale.md#instances-table) documentation.
+> This operation can be very expensive in terms of Azure Storage I/O if you are using the [default Azure Storage provider](durable-functions-storage-providers.md#azure-storage) and if there are a lot of rows in the Instances table. More details on the Instances table can be found in the [Azure Storage provider](durable-functions-azure-storage-provider.md#instances-table) documentation.
If more results exist, a continuation token is returned in the response header. The name of the header is `x-ms-continuation-token`.
+> [!CAUTION]
+> The query result may return fewer items than the limit specified by `top`. When receiving results, you should therefore *always* check to see if there is a continuation token.
+ If you set the continuation token value in the next request header, you can get the next page of results. The name of this request header is also `x-ms-continuation-token`. ## Purge single instance history
Request parameters for this API include the default set mentioned previously as
| **`runtimeStatus`** | Query string | Optional parameter. When specified, filters the list of purged instances based on their runtime status. To see the list of possible runtime status values, see the [Querying instances](durable-functions-instance-management.md) article. | > [!NOTE]
-> This operation can be very expensive in terms of Azure Storage I/O if you are using the [default Azure Storage provider](durable-functions-storage-providers.md#azure-storage) and if there are many rows in the Instances and/or History tables. More details on these tables can be found in the [Performance and scale in Durable Functions (Azure Functions)](durable-functions-perf-and-scale.md#instances-table) documentation.
+> This operation can be very expensive in terms of Azure Storage I/O if you are using the [default Azure Storage provider](durable-functions-storage-providers.md#azure-storage) and if there are many rows in the Instances and/or History tables. More details on these tables can be found in the [Azure Storage provider](durable-functions-azure-storage-provider.md#instances-table) documentation.
### Response
azure-functions Durable Functions Perf And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-perf-and-scale.md
Title: Performance and scale in Durable Functions - Azure
description: Learn about the unique scaling characteristics of the Durable Functions extension for Azure Functions. Previously updated : 05/13/2021 Last updated : 07/18/2022 # Performance and scale in Durable Functions (Azure Functions)
-To optimize performance and scalability, it's important to understand the unique scaling characteristics of [Durable Functions](durable-functions-overview.md).
+To optimize performance and scalability, it's important to understand the unique scaling characteristics of [Durable Functions](durable-functions-overview.md). In this article, we explain how workers are scaled based on load and how to tune the relevant parameters.
-## Azure Storage provider
+## Worker scaling
-The default configuration for Durable Functions stores this runtime state in an Azure Storage (classic) account. All function execution is driven by Azure Storage queues. Orchestration and entity status and history is stored in Azure Tables. Azure Blobs and blob leases are used to distribute orchestration instances and entities across multiple app instances (also known as *workers* or simply *VMs*). This section goes into more detail on the various Azure Storage artifacts and how they impact performance and scalability.
+A fundamental benefit of the [task hub concept](durable-functions-task-hubs.md) is that the number of workers that process task hub work items can be continuously adjusted. In particular, applications can add more workers (*scale out*) if the work needs to be processed more quickly, and can remove workers (*scale in*) if there is not enough work to keep the workers busy.
+It is even possible to *scale to zero* if the task hub is completely idle. When scaled to zero, there are no workers at all; only the scale controller and the storage need to remain active.
-> [!NOTE]
-> This document primarily focuses on the performance and scalability characteristics of Durable Functions using the default Azure Storage provider. However, other storage providers are also available. For more information on the supported storage providers for Durable Functions and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
-
-### History table
-
-The **History** table is an Azure Storage table that contains the history events for all orchestration instances within a task hub. The name of this table is in the form *TaskHubName*History. As instances run, new rows are added to this table. The partition key of this table is derived from the instance ID of the orchestration. Instance IDs are random by default, ensuring optimal distribution of internal partitions in Azure Storage. The row key for this table is a sequence number used for ordering the history events.
-
-When an orchestration instance needs to run, the corresponding rows of the History table are loaded into memory using a range query within a single table partition. These *history events* are then replayed into the orchestrator function code to get it back into its previously checkpointed state. The use of execution history to rebuild state in this way is influenced by the [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing).
-
-> [!TIP]
-> Orchestration data stored in the History table includes output payloads from activity and sub-orchestrator functions. Payloads from external events are also stored in the History table. Because the full history is loaded into memory every time an orchestrator needs to execute, a large enough history can result in significant memory pressure on a given VM. The length and size of the orchestration history can be reduced by splitting large orchestrations into multiple sub-orchestrations or by reducing the size of outputs returned by the activity and sub-orchestrator functions it calls. Alternatively, you can reduce memory usage by lowering per-VM [concurrency throttles](#concurrency-throttles) to limit how many orchestrations are loaded into memory concurrently.
-
-### Instances table
-
-The **Instances** table contains the statuses of all orchestration and entity instances within a task hub. As instances are created, new rows are added to this table. The partition key of this table is the orchestration instance ID or entity key and the row key is an empty string. There is one row per orchestration or entity instance.
-
-This table is used to satisfy [instance query requests from code](durable-functions-instance-management.md#query-instances) as well as [status query HTTP API](durable-functions-http-api.md#get-instance-status) calls. It is kept eventually consistent with the contents of the **History** table mentioned previously. The use of a separate Azure Storage table to efficiently satisfy instance query operations in this way is influenced by the [Command and Query Responsibility Segregation (CQRS) pattern](/azure/architecture/patterns/cqrs).
-
-> [!TIP]
-> The partitioning of the *Instances* table allows it to store millions of orchestration instances without any noticeable impact on runtime performance or scale. However, the number of instances can have a significant impact on [multi-instance query](durable-functions-instance-management.md#query-all-instances) performance. To control the amount of data stored in these tables, consider periodically [purging old instance data](durable-functions-instance-management.md#purge-instance-history).
-
-### Internal queue triggers
-
-Orchestrator, entity, and activity functions are all triggered by internal queues in the function app's task hub. Using queues in this way provides reliable "at-least-once" message delivery guarantees. There are two types of queues in Durable Functions: the **control queue** and the **work-item queue**.
-
-#### The work-item queue
-
-There is one work-item queue per task hub in Durable Functions. It's a basic queue and behaves similarly to any other `queueTrigger` queue in Azure Functions. This queue is used to trigger stateless *activity functions* by dequeueing a single message at a time. Each of these messages contains activity function inputs and additional metadata, such as which function to execute. When a Durable Functions application scales out to multiple VMs, these VMs all compete to acquire tasks from the work-item queue.
-
-#### Control queue(s)
-
-There are multiple *control queues* per task hub in Durable Functions. A *control queue* is more sophisticated than the simpler work-item queue. Control queues are used to trigger the stateful orchestrator and entity functions. Because the orchestrator and entity function instances are stateful singletons, it's important that each orchestration or entity is only processed by one worker at a time. To achieve this constraint, each orchestration instance or entity is assigned to a single control queue. These control queues are load balanced across workers to ensure that each queue is only processed by one worker at a time. More details on this behavior can be found in subsequent sections.
-
-Control queues contain a variety of orchestration lifecycle message types. Examples include [orchestrator control messages](durable-functions-instance-management.md), activity function *response* messages, and timer messages. As many as 32 messages will be dequeued from a control queue in a single poll. These messages contain payload data as well as metadata including which orchestration instance it is intended for. If multiple dequeued messages are intended for the same orchestration instance, they will be processed as a batch.
+The following diagram illustrates this concept:
-Control queue messages are constantly polled using a background thread. The batch size of each queue poll is controlled by the `controlQueueBatchSize` setting in host.json and has a default of 32 (the maximum value supported by Azure Queues). The maximum number of prefetched control-queue messages that are buffered in memory is controlled by the `controlQueueBufferThreshold` setting in host.json. The default value for `controlQueueBufferThreshold` varies depending on a variety of factors, including the type of hosting plan. For more information on these settings, see the [host.json schema](../functions-host-json.md#durabletask) documentation.
+![Worker scaling diagram](./media/durable-functions-perf-and-scale/worker-scaling.png)
-> [!TIP]
-> Increasing the value for `controlQueueBufferThreshold` allows a single orchestration or entity to process events faster. However, increasing this value can also result in higher memory usage. The higher memory usage is partly due to pulling more messages off the queue and partly due to fetching more orchestration histories into memory. Reducing the value for `controlQueueBufferThreshold` can therefore be an effective way to reduce memory usage.
-
-#### Queue polling
-
-The durable task extension implements a random exponential back-off algorithm to reduce the effect of idle-queue polling on storage transaction costs. When a message is found, the runtime immediately checks for another message. When no message is found, it waits for a period of time before trying again. After subsequent failed attempts to get a queue message, the wait time continues to increase until it reaches the maximum wait time, which defaults to 30 seconds.
+### Automatic scaling
-The maximum polling delay is configurable via the `maxQueuePollingInterval` property in the [host.json file](../functions-host-json.md#durabletask). Setting this property to a higher value could result in higher message processing latencies. Higher latencies would be expected only after periods of inactivity. Setting this property to a lower value could result in [higher storage costs](durable-functions-billing.md#azure-storage-transactions) due to increased storage transactions.
+As with all Azure Functions running in the Consumption and Elastic Premium plans, Durable Functions supports auto-scale via the [Azure Functions scale controller](../event-driven-scaling.md#runtime-scaling). The Scale Controller monitors how long messages and tasks have to wait before they are processed. Based on these latencies it can decide whether to add or remove workers.
> [!NOTE]
-> When running in the Azure Functions Consumption and Premium plans, the [Azure Functions Scale Controller](../event-driven-scaling.md) will poll each control and work-item queue once every 10 seconds. This additional polling is necessary to determine when to activate function app instances and to make scale decisions. At the time of writing, this 10 second interval is constant and cannot be configured.
+> Starting with Durable Functions 2.0, function apps can be configured to run within VNET-protected service endpoints in the Elastic Premium plan. In this configuration, the Durable Functions triggers initiate scale requests instead of the Scale Controller. For more information, see [Runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers).
-#### Orchestration start delays
-Orchestrations instances are started by putting an `ExecutionStarted` message in one of the task hub's control queues. Under certain conditions, you may observe multi-second delays between when an orchestration is scheduled to run and when it actually starts running. During this time interval, the orchestration instance remains in the `Pending` state. There are two potential causes of this delay:
+On a premium plan, automatic scaling can help to keep the number of workers (and therefore the operating cost) roughly proportional to the load that the application is experiencing.
-1. **Backlogged control queues**: If the control queue for this instance contains a large number of messages, it may take time before the `ExecutionStarted` message is received and processed by the runtime. Message backlogs can happen when orchestrations are processing lots of events concurrently. Events that go into the control queue include orchestration start events, activity completions, durable timers, termination, and external events. If this delay happens under normal circumstances, consider creating a new task hub with a larger number of partitions. Configuring more partitions will cause the runtime to create more control queues for load distribution. Each partition corresponds to 1:1 with a control queue, with a maximum of 16 partitions.
+### CPU usage
-2. **Back off polling delays**: Another common cause of orchestration delays is the [previously described back-off polling behavior for control queues](#queue-polling). However, this delay is only expected when an app is scaled out to two or more instances. If there is only one app instance or if the app instance that starts the orchestration is also the same instance that is polling the target control queue, then there will not be a queue polling delay. Back off polling delays can be reduced by updating the **host.json** settings, as described previously.
+**Orchestrator functions** are executed on a single thread to ensure that execution can be deterministic across many replays. Because of this single-threaded execution, it's important that orchestrator function threads do not perform CPU-intensive tasks, do I/O, or block for any reason. Any work that may require I/O, blocking, or multiple threads should be moved into activity functions.
-### Storage account selection
+**Activity functions** have all the same behaviors as regular queue-triggered functions. They can safely do I/O, execute CPU intensive operations, and use multiple threads. Because activity triggers are stateless, they can freely scale out to an unbounded number of VMs.
-The queues, tables, and blobs used by Durable Functions are created in a configured Azure Storage account. The account to use can be specified using the `durableTask/storageProvider/connectionStringName` setting (or `durableTask/azureStorageConnectionStringName` setting in Durable Functions 1.x) in the **host.json** file.
+**Entity functions** are also executed on a single thread and operations are processed one-at-a-time. However, entity functions do not have any restrictions on the type of code that can be executed.
-#### Durable Functions 2.x
+### Function timeouts
-```json
-{
- "extensions": {
- "durableTask": {
- "storageProvider": {
- "connectionStringName": "MyStorageAccountAppSetting"
- }
- }
- }
-}
-```
+Activity, orchestrator, and entity functions are subject to the same [function timeouts](../functions-scale.md#timeout) as all Azure Functions. As a general rule, Durable Functions treats function timeouts the same way as unhandled exceptions thrown by the application code.
-#### Durable Functions 1.x
+For example, if an activity times out, the function execution is recorded as a failure, and the orchestrator is notified and handles the timeout just like any other exception: retries take place if specified by the call, or an exception handler may be executed.
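+
+The timeout itself is controlled by the standard Azure Functions `functionTimeout` setting at the root of **host.json**, not by a Durable Functions-specific setting. As a sketch, the following raises the timeout to 10 minutes on a hosting plan that permits it (the allowed maximum depends on your plan):
+
+```json
+{
+  "functionTimeout": "00:10:00"
+}
+```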
-```json
-{
- "extensions": {
- "durableTask": {
- "azureStorageConnectionStringName": "MyStorageAccountAppSetting"
- }
- }
-}
-```
-
-If not specified, the default `AzureWebJobsStorage` storage account is used. For performance-sensitive workloads, however, configuring a non-default storage account is recommended. Durable Functions uses Azure Storage heavily, and using a dedicated storage account isolates Durable Functions storage usage from the internal usage by the Azure Functions host.
-
-> [!NOTE]
-> Standard general purpose Azure Storage accounts are required when using the Azure Storage provider. All other storage account types are not supported. We highly recommend using legacy v1 general purpose storage accounts for Durable Functions. The newer v2 storage accounts can be significantly more expensive for Durable Functions workloads. For more information on Azure Storage account types, see the [Storage account overview](../../storage/common/storage-account-overview.md) documentation.
+### Entity operation batching
-### Orchestrator scale-out
+To improve performance and reduce cost, a single work item may execute an entire batch of entity operations. On consumption plans, each batch is then billed as a single function execution.
-While activity functions can be scaled out infinitely by adding more VMs elastically, individual orchestrator instances and entities are constrained to inhabit a single partition and the maximum number of partitions is bounded by the `partitionCount` setting in your `host.json`.
+By default, the maximum batch size is 50 for consumption plans and 5000 for all other plans. The maximum batch size can also be configured in the [host.json](durable-functions-bindings.md#host-json) file. If the maximum batch size is 1, batching is effectively disabled.
> [!NOTE]
-> Generally speaking, orchestrator functions are intended to be lightweight and should not require large amounts of computing power. It is therefore not necessary to create a large number of control-queue partitions to get great throughput for orchestrations. Most of the heavy work should be done in stateless activity functions, which can be scaled out infinitely.
+> If individual entity operations take a long time to execute, it may be beneficial to limit the maximum batch size to reduce the risk of [function timeouts](#function-timeouts), in particular on consumption plans.
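+
+As a hedged illustration, the snippet below limits the batch size to 10 by using the `maxEntityOperationBatchSize` setting. Treat the setting name and placement as assumptions to verify against the [host.json](durable-functions-bindings.md#host-json) reference for the extension version you are using:
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "maxEntityOperationBatchSize": 10
+    }
+  }
+}
+```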
-The number of control queues is defined in the **host.json** file. The following example host.json snippet sets the `durableTask/storageProvider/partitionCount` property (or `durableTask/partitionCount` in Durable Functions 1.x) to `3`. Note that there are as many control queues as there are partitions.
+## Instance caching
-#### Durable Functions 2.x
+Generally, to process an [orchestration work item](durable-functions-task-hubs.md#work-items), a worker has to both
-```json
-{
- "extensions": {
- "durableTask": {
- "storageProvider": {
- "partitionCount": 3
- }
- }
- }
-}
-```
+1. Fetch the orchestration history.
+1. Replay the orchestrator code using the history.
-#### Durable Functions 1.x
+If the same worker is processing multiple work items for the same orchestration, the storage provider can optimize this process by caching the history in the worker's memory, which eliminates the first step. Moreover, it can cache the mid-execution orchestrator, which eliminates the second step, the history replay, as well.
-```json
-{
- "extensions": {
- "durableTask": {
- "partitionCount": 3
- }
- }
-}
-```
-
-A task hub can be configured with between 1 and 16 partitions. If not specified, the default partition count is **4**.
+The typical effect of caching is reduced I/O against the underlying storage service, and overall improved throughput and latency. On the other hand, caching increases the memory consumption on the worker.
-During low traffic scenarios, your application will be scaled-in, so partitions will be managed by a small number of workers. As an example, consider the diagram below.
+Instance caching is currently supported by the Azure Storage provider and by the Netherite storage provider. The table below provides a comparison.
-![Scale-in orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-1.png)
+|| Azure Storage provider | Netherite storage provider | MSSQL storage provider |
+|-|-|-|-|
+| **Instance caching** | Supported<br/>(.NET in-process worker only) | Supported | Not supported |
+| **Default setting** | Disabled | Enabled | n/a |
+| **Mechanism** | Extended Sessions | Instance Cache | n/a |
+| **Documentation** | See [Extended sessions](durable-functions-azure-storage-provider.md#extended-sessions) | See [Instance cache](https://microsoft.github.io/durabletask-netherite/#/caching) | n/a |
-In the previous diagram, we see that orchestrators 1 through 6 are load balanced across partitions. Similarly, partitions, like activities, are load balanced across workers. Partitions are load-balanced across workers regardless of the number of orchestrators that get started.
+> [!TIP]
+> Caching can reduce how often histories are replayed, but it cannot eliminate replay altogether. When developing orchestrators, we highly recommend testing them on a configuration that disables caching. This forced-replay behavior can be useful for detecting [orchestrator function code constraints](durable-functions-code-constraints.md) violations at development time.
-If you're running on the Azure Functions Consumption or Elastic Premium plans, or if you have load-based auto-scaling configured, more workers will get allocated as traffic increases and partitions will eventually load balance across all workers. If we continue to scale out, eventually each partition will eventually be managed by a single worker. Activities, on the other hand, will continue to be load-balanced across all workers. This is shown in the image below.
+### Comparison of caching mechanisms
-![First scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-2.png)
+The providers use different mechanisms to implement caching, and offer different parameters to configure the caching behavior.
-The upper-bound of the maximum number of concurrent _active_ orchestrations at *any given time* is equal to the number of workers allocated to your application _times_ your value for `maxConcurrentOrchestratorFunctions`. This upper-bound can be made more precise when your partitions are fully scaled-out across workers. When fully scaled-out, and since each worker will have only a single Functions host instance, the maximum number of _active_ concurrent orchestrator instances will be equal to your number of partitions _times_ your value for `maxConcurrentOrchestratorFunctions`.
+* **Extended sessions**, as used by the Azure Storage provider, keep mid-execution orchestrators in memory until they are idle for some time. The parameters to control this mechanism are `extendedSessionsEnabled` and `extendedSessionIdleTimeoutInSeconds`. For more details, see the section [Extended sessions](durable-functions-azure-storage-provider.md#extended-sessions) of the Azure Storage provider documentation.
> [!NOTE]
-> In this context, *active* means that an orchestration or entity is loaded into memory and processing *new events*. If the orchestration or entity is waiting for more events, such as the return value of an activity function, it gets unloaded from memory and is no longer considered *active*. Orchestrations and entities will be subsequently reloaded into memory only when there are new events to process. There's no practical maximum number of *total* orchestrations or entities that can run on a single VM, even if they're all in the "Running" state. The only limitation is the number of *concurrently active* orchestration or entity instances.
-
-The image below illustrates a fully scaled-out scenario where more orchestrators are added but some are inactive, shown in grey.
+> Extended sessions are supported only in the .NET in-process worker.
-![Second scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-3.png)
-
-During scale-out, control queue leases may be redistributed across Functions host instances to ensure that partitions are evenly distributed. These leases are internally implemented as Azure Blob storage leases and ensure that any individual orchestration instance or entity only runs on a single host instance at a time. If a task hub is configured with three partitions (and therefore three control queues), orchestration instances and entities can be load-balanced across all three lease-holding host instances. Additional VMs can be added to increase capacity for activity function execution.
-
-The following diagram illustrates how the Azure Functions host interacts with the storage entities in a scaled out environment.
-
-![Scale diagram](./media/durable-functions-perf-and-scale/scale-interactions-diagram.png)
-
-As shown in the previous diagram, all VMs compete for messages on the work-item queue. However, only three VMs can acquire messages from control queues, and each VM locks a single control queue.
-
-Orchestration instances and entities are distributed across all control queue instances. The distribution is done by hashing the instance ID of the orchestration or the entity name and key pair. Orchestration instance IDs by default are random GUIDs, ensuring that instances are equally distributed across all control queues.
-
-Generally speaking, orchestrator functions are intended to be lightweight and should not require large amounts of computing power. It is therefore not necessary to create a large number of control queue partitions to get great throughput for orchestrations. Most of the heavy work should be done in stateless activity functions, which can be scaled out infinitely.
-
-### Auto-scale
-
-As with all Azure Functions running in the Consumption and Elastic Premium plans, Durable Functions supports auto-scale via the [Azure Functions scale controller](../event-driven-scaling.md#runtime-scaling). The Scale Controller monitors the latency of all queues by periodically issuing _peek_ commands. Based on the latencies of the peeked messages, the Scale Controller will decide whether to add or remove VMs.
-
-If the Scale Controller determines that control queue message latencies are too high, it will add VM instances until either the message latency decreases to an acceptable level or it reaches the control queue partition count. Similarly, the Scale Controller will continually add VM instances if work-item queue latencies are high, regardless of the partition count.
+* The **Instance cache**, as used by the Netherite storage provider, keeps the state of all instances, including their histories, in the worker's memory, while keeping track of the total memory used. If the cache size exceeds the limit configured by `InstanceCacheSizeMB`, the least recently used instance data is evicted. If `CacheOrchestrationCursors` is set to true, the cache also stores the mid-execution orchestrators along with the instance state.
+ For more details, see the section [Instance cache](https://microsoft.github.io/durabletask-netherite/#/caching) of the Netherite storage provider documentation.
> [!NOTE]
-> Starting with Durable Functions 2.0, function apps can be configured to run within VNET-protected service endpoints in the Elastic Premium plan. In this configuration, the Durable Functions triggers initiate scale requests instead of the Scale Controller. For more information, see [Runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers).
-
-## Thread usage
-
-Orchestrator functions are executed on a single thread to ensure that execution can be deterministic across many replays. Because of this single-threaded execution, it's important that orchestrator function threads do not perform CPU-intensive tasks, do I/O, or block for any reason. Any work that may require I/O, blocking, or multiple threads should be moved into activity functions.
+> Instance caches work for all language SDKs, but the `CacheOrchestrationCursors` option is available only for the .NET in-process worker.
-Activity functions have all the same behaviors as regular queue-triggered functions. They can safely do I/O, execute CPU intensive operations, and use multiple threads. Because activity triggers are stateless, they can freely scale out to an unbounded number of VMs.
+## Concurrency throttles
-Entity functions are also executed on a single thread and operations are processed one-at-a-time. However, entity functions do not have any restrictions on the type of code that can be executed.
+A single worker instance can execute multiple [work items](durable-functions-task-hubs.md#work-items) concurrently. This helps to increase parallelism and utilize the workers more efficiently.
+However, if a worker attempts to process too many work items at the same time, it may exhaust its available resources, such as CPU, network connections, or memory.
-## Function timeouts
+To ensure that an individual worker does not overcommit, it may be necessary to throttle the per-instance concurrency. By limiting the number of functions that are concurrently running on each worker, we can avoid exhausting the resource limits on that worker.
-Activity, orchestrator, and entity functions are subject to the same [function timeouts](../functions-scale.md#timeout) as all Azure Functions. As a general rule, Durable Functions treats function timeouts the same way as unhandled exceptions thrown by the application code. For example, if an activity times out, the function execution is recorded as a failure, and the orchestrator is notified and handles the timeout just like any other exception: retries take place if specified by the call, or an exception handler may be executed.
+> [!NOTE]
+> The concurrency throttles only apply locally, to limit what is currently being processed **per worker**. Thus, these throttles do not limit the total throughput of the system.
-## Concurrency throttles
+> [!TIP]
+> In some cases, throttling the per-worker concurrency can actually *increase* the total throughput of the system. This can occur when each worker takes less work, causing the scale controller to add more workers to keep up with the queues, which then increases the total throughput.
-Azure Functions supports executing multiple functions concurrently within a single app instance. This concurrent execution helps increase parallelism and minimizes the number of "cold starts" that a typical app will experience over time. However, high concurrency can exhaust per-VM system resources such network connections or available memory. Depending on the needs of the function app, it may be necessary to throttle the per-instance concurrency to avoid the possibility of running out of memory in high-load situations.
+### Configuration of throttles
-Activity, orchestrator, and entity function concurrency limits can be configured in the **host.json** file. The relevant settings are `durableTask/maxConcurrentActivityFunctions` for activity functions and `durableTask/maxConcurrentOrchestratorFunctions` for both orchestrator and entity functions. These settings control the maximum number of orchestrator, entity, or activity functions that can be loaded into memory concurrently.
+Activity, orchestrator, and entity function concurrency limits can be configured in the **host.json** file. The relevant settings are `durableTask/maxConcurrentActivityFunctions` for activity functions and `durableTask/maxConcurrentOrchestratorFunctions` for both orchestrator and entity functions. These settings control the maximum number of orchestrator, entity, or activity functions that are loaded into memory on a single worker.
> [!NOTE]
-> The concurrency throttles only apply locally, to limit what is currently being processed on one individual machine. Thus, these throttles do not limit the total throughput of the system. Quite to the contrary, they can actually support proper scale out, as they prevent individual machines from taking on too much work at once. If this leads to unprocessed work accumulating in the queues, the autoscaler adds more machines. The total throughput of the system thus scales out as needed.
+> Orchestrations and entities are only loaded into memory when they are actively processing events or operations, or if [instance caching](durable-functions-perf-and-scale.md#instance-caching) is enabled. After executing their logic and awaiting (i.e. hitting an `await` (C#) or `yield` (JavaScript, Python) statement in the orchestrator function code), they may be unloaded from memory. Orchestrations and entities that are unloaded from memory don't count towards the `maxConcurrentOrchestratorFunctions` throttle. Even if millions of orchestrations or entities are in the "Running" state, they only count towards the throttle limit when they are loaded into active memory. An orchestration that schedules an activity function similarly doesn't count towards the throttle if the orchestration is waiting for the activity to finish executing.
-> [!NOTE]
-> The `durableTask/maxConcurrentOrchestratorFunctions` limit applies only to the act of processing new events or operations. Orchestrations or entities that are idle waiting for events or operations do not count towards the limit.
-
-### Functions 2.0
+#### Functions 2.0
```json {
Activity, orchestrator, and entity function concurrency limits can be configured
} ```
-### Functions 1.x
+#### Functions 1.x
```json {
Activity, orchestrator, and entity function concurrency limits can be configured
} ```
-In the previous example, a maximum of 10 orchestrator or entity functions and 10 activity functions can run on a single VM concurrently. If not specified, the number of concurrent activity and orchestrator or entity function executions is capped at 10X the number of cores on the VM.
+### Language runtime considerations
+
+The language runtime you select may impose strict concurrency restrictions on your functions. For example, Durable Functions apps written in Python or PowerShell may only support running a single function at a time on a single VM. This can result in significant performance problems if not carefully accounted for. For example, if an orchestrator fans out to 10 activities but the language runtime restricts concurrency to just one function, then 9 of the 10 activity functions will be stuck waiting for a chance to run. Furthermore, these 9 stuck activities cannot be load balanced to any other workers because the Durable Functions runtime has already loaded them into memory. This becomes especially problematic if the activity functions are long-running.
-If the maximum number of activities or orchestrations/entities on a worker VM is reached, the Durable trigger will wait for any executing functions to finish or unload before starting up new function executions.
+If the language runtime you are using places a restriction on concurrency, you should update the Durable Functions concurrency settings to match the concurrency settings of your language runtime. This ensures that the Durable Functions runtime will not attempt to run more functions concurrently than is allowed by the language runtime, allowing any pending activities to be load balanced to other VMs. For example, if you have a Python app that restricts concurrency to 4 functions (perhaps it's only configured with 4 threads on a single language worker process or 1 thread on 4 language worker processes) then you should configure both `maxConcurrentOrchestratorFunctions` and `maxConcurrentActivityFunctions` to 4.
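+
+For the Python example above, the matching **host.json** configuration is sketched below (Durable Functions 2.x; both settings sit directly under `durableTask`):
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "maxConcurrentOrchestratorFunctions": 4,
+      "maxConcurrentActivityFunctions": 4
+    }
+  }
+}
+```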
-> [!NOTE]
-> These settings are useful to help manage memory and CPU usage on a single VM. However, when scaled out across multiple VMs, each VM has its own set of limits. These settings can't be used to control concurrency at a global level.
+For more information and performance recommendations for Python, see [Improve throughput performance of Python apps in Azure Functions](../python-scale-performance-reference.md). The techniques mentioned in this Python developer reference documentation can have a substantial impact on Durable Functions performance and scalability.
-> [!NOTE]
-> Orchestrations and entities are only loaded into memory when they are actively processing events or operations. After executing their logic and awaiting (i.e. hitting an `await` (C#) or `yield` (JavaScript, Python) statement in the orchestrator function code), they are unloaded from memory. Orchestrations and entities that are unloaded from memory don't count towards the `maxConcurrentOrchestratorFunctions` throttle. Even if millions of orchestrations or entities are in the "Running" state, they only count towards the throttle limit when they are loaded into active memory. An orchestration that schedules an activity function similarly doesn't count towards the throttle if the orchestration is waiting for the activity to finish executing.
+## Partition count
-### Language runtime considerations
+Some of the storage providers use a *partitioning* mechanism and allow specifying a `partitionCount` parameter.
-The language runtime you select may impose strict concurrency restrictions or your functions. For example, Durable Function apps written in Python or PowerShell may only support running a single function at a time on a single VM. This can result in significant performance problems if not carefully accounted for. For example, if an orchestrator fans-out to 10 activities but the language runtime restricts concurrency to just one function, then 9 of the 10 activity functions will be stuck waiting for a chance to run. Furthermore, these 9 stuck activities will not be able to be load balanced to any other workers because the Durable Functions runtime will have already loaded them into memory. This becomes especially problematic if the activity functions are long-running.
+When using partitioning, workers do not directly compete for individual work items. Instead, the work items are first grouped into `partitionCount` partitions. These partitions are then assigned to workers. This partitioned approach to load distribution can help to reduce the total number of storage accesses required. Also, it can enable [instance caching](durable-functions-perf-and-scale.md#instance-caching) and improve locality because it creates *affinity*: all work items for the same instance are processed by the same worker.
-If the language runtime you are using places a restriction on concurrency, you should update the Durable Functions concurrency settings to match the concurrency settings of your language runtime. This ensures that the Durable Functions runtime will not attempt to run more functions concurrently than is allowed by the language runtime, allowing any pending activities to be load balanced to other VMs. For example, if you have a Python app that restricts concurrency to 4 functions (perhaps its only configured with 4 threads on a single language worker process or 1 thread on 4 language worker processes) then you should configure both `maxConcurrentOrchestratorFunctions` and `maxConcurrentActivityFunctions` to 4.
+> [!NOTE]
+> Partitioning limits scale out because at most `partitionCount` workers can process work items from a partitioned queue.
-For more information and performance recommendations for Python, see [Improve throughput performance of Python apps in Azure Functions](../python-scale-performance-reference.md). The techniques mentioned in this Python developer reference documentation can have a substantial impact on Durable Functions performance and scalability.
+The following table shows, for each storage provider, which queues are partitioned, and the allowable range and default values for the `partitionCount` parameter.
-## Extended sessions
+|| Azure Storage provider | Netherite storage provider | MSSQL storage provider |
+|-|-|-|-|
+| **Instance messages**| Partitioned | Partitioned | Not partitioned |
+| **Activity messages** | Not partitioned | Partitioned | Not partitioned |
+| **Default `partitionCount`** | 4 | 12 | n/a |
+| **Maximum `partitionCount`** | 16 | 32 | n/a |
+| **Documentation** | See [Orchestrator scale-out](durable-functions-azure-storage-provider.md#orchestrator-scale-out) | See [Partition count considerations](https://microsoft.github.io/durabletask-netherite/#/settings?id=partition-count-considerations) | n/a |
-Extended sessions is a setting that keeps orchestrations and entities in memory even after they finish processing messages. The typical effect of enabling extended sessions is reduced I/O against the underlying durable store and overall improved throughput.
+> [!WARNING]
+> The partition count can't be changed after a task hub has been created. Thus, it's advisable to set it to a value large enough to accommodate future scale-out requirements for the task hub instance.
-You can enable extended sessions by setting `durableTask/extendedSessionsEnabled` to `true` in the **host.json** file. The `durableTask/extendedSessionIdleTimeoutInSeconds` setting can be used to control how long an idle session will be held in memory:
+### Configuration of partition count
+
+The `partitionCount` parameter can be specified in the **host.json** file. The following example host.json snippet sets the `durableTask/storageProvider/partitionCount` property (or `durableTask/partitionCount` in Durable Functions 1.x) to `3`.
+
+#### Durable Functions 2.x
-**Functions 2.0**
```json
{
  "extensions": {
    "durableTask": {
- "extendedSessionsEnabled": true,
- "extendedSessionIdleTimeoutInSeconds": 30
+ "storageProvider": {
+ "partitionCount": 3
+ }
    }
  }
}
```
-**Functions 1.0**
+#### Durable Functions 1.x
+
```json
{
- "durableTask": {
- "extendedSessionsEnabled": true,
- "extendedSessionIdleTimeoutInSeconds": 30
+ "extensions": {
+ "durableTask": {
+ "partitionCount": 3
+ }
  }
}
```
-There are two potential downsides of this setting to be aware of:
-
-1. There's an overall increase in function app memory usage because idle instances are not unloaded from memory as quickly.
-2. There can be an overall decrease in throughput if there are many concurrent, distinct, short-lived orchestrator or entity function executions.
-
-As an example, if `durableTask/extendedSessionIdleTimeoutInSeconds` is set to 30 seconds, then a short-lived orchestrator or entity function episode that executes in less than 1 second still occupies memory for 30 seconds. It also counts against the `durableTask/maxConcurrentOrchestratorFunctions` quota mentioned previously, potentially preventing other orchestrator or entity functions from running.
-
-The specific effects of extended sessions on orchestrator and entity functions are described in the next sections.
-
-> [!NOTE]
-> Extended sessions are currently only supported in .NET languages, like C# or F#. Setting `extendedSessionsEnabled` to `true` for other platforms can lead to runtime issues, such as silently failing to execute activity and orchestration-triggered functions.
-
-> [!NOTE]
-> Support for extended sessions may vary depend on the [Durable Functions storage provider you are using](durable-functions-storage-providers.md). See the storage provider documentation to learn whether it supports extended sessions.
-
-### Orchestrator function replay
-
-As mentioned previously, orchestrator functions are replayed using the contents of the **History** table. By default, the orchestrator function code is replayed every time a batch of messages are dequeued from a control queue. Even if you are using the fan-out, fan-in pattern and are awaiting for all tasks to complete (for example, using `Task.WhenAll()` in .NET, `context.df.Task.all()` in JavaScript, or `context.task_all()` in Python), there will be replays that occur as batches of task responses are processed over time. When extended sessions are enabled, orchestrator function instances are held in memory longer and new messages can be processed without a full history replay.
-
-The performance improvement of extended sessions is most often observed in the following situations:
-
-* When there are a limited number of orchestration instances running concurrently.
-* When orchestrations have large number of sequential actions (for example, hundreds of activity function calls) that complete quickly.
-* When orchestrations fan-out and fan-in a large number of actions that complete around the same time.
-* When orchestrator functions need to process large messages or do any CPU-intensive data processing.
-
-In all other situations, there is typically no observable performance improvement for orchestrator functions.
-
-> [!NOTE]
-> These settings should only be used after an orchestrator function has been fully developed and tested. The default aggressive replay behavior can be useful for detecting [orchestrator function code constraints](durable-functions-code-constraints.md) violations at development time, which is why this setting is disabled by default.
-
-## Entity operation batching
-
-To improve performance and cost, entity operations are executed in batches. Each batch is billed as a single function execution.
-
-By default, the maximum batch size is 50 (for consumption plans) and 5000 (for all other plans). The maximum batch size can also be configured in the [host.json](durable-functions-bindings.md#host-json) file. If the maximum batch size is 1, batching is effectively disabled.
-
-> [!NOTE]
-> If individual entity operations take a long time to execute, it may be beneficial to limit the maximum batch size to reduce the risk of [function timeouts](#function-timeouts), in particular on consumption plans.
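
As a sketch of how such a cap might be expressed in **host.json**, assuming the `maxEntityOperationBatchSize` property from the current host.json reference (both the property name and the value shown here should be treated as assumptions):

```json
{
  "extensions": {
    "durableTask": {
      "maxEntityOperationBatchSize": 25
    }
  }
}
```
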
- ## Performance targets
-When planning to use Durable Functions for a production application, it is important to consider the performance requirements early in the planning process. This section covers some basic usage scenarios and the expected maximum throughput numbers.
+When planning to use Durable Functions for a production application, it is important to consider the performance requirements early in the planning process. Some basic usage scenarios include:
* **Sequential activity execution**: This scenario describes an orchestrator function that runs a series of activity functions one after the other. It most closely resembles the [Function Chaining](durable-functions-sequence.md) sample. * **Parallel activity execution**: This scenario describes an orchestrator function that executes many activity functions in parallel using the [Fan-out, Fan-in](durable-functions-cloud-backup.md) pattern.
When planning to use Durable Functions for a production application, it is impor
* **External event processing**: This scenario represents a single orchestrator function instance that waits on [external events](durable-functions-external-events.md), one at a time. * **Entity operation processing**: This scenario tests how quickly a _single_ [Counter entity](durable-functions-entities.md) can process a constant stream of operations.
-> [!TIP]
-> Unlike fan-out, fan-in operations are limited to a single VM. If your application uses the fan-out, fan-in pattern and you are concerned about fan-in performance, consider sub-dividing the activity function fan-out across multiple [sub-orchestrations](durable-functions-sub-orchestrations.md).
+We provide throughput numbers for these scenarios in the respective documentation for the storage providers. In particular:
-### Azure Storage performance targets
-
-The following table shows the expected *maximum* throughput numbers for the previously described scenarios when using the default [Azure Storage provider for Durable Functions](durable-functions-storage-providers.md#azure-storage). "Instance" refers to a single instance of an orchestrator function running on a single small ([A1](../../virtual-machines/sizes-previous-gen.md)) VM in Azure App Service. In all cases, it is assumed that [extended sessions](#orchestrator-function-replay) are enabled. Actual results may vary depending on the CPU or I/O work performed by the function code.
-
-| Scenario | Maximum throughput |
-|-|-|
-| Sequential activity execution | 5 activities per second, per instance |
-| Parallel activity execution (fan-out) | 100 activities per second, per instance |
-| Parallel response processing (fan-in) | 150 responses per second, per instance |
-| External event processing | 50 events per second, per instance |
-| Entity operation processing | 64 operations per second |
-
-If you are not seeing the throughput numbers you expect and your CPU and memory usage appears healthy, check to see whether the cause is related to [the health of your storage account](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md#troubleshooting-guidance). The Durable Functions extension can put significant load on an Azure Storage account and sufficiently high loads may result in storage account throttling.
+* For the Azure Storage provider, see [Performance Targets](durable-functions-azure-storage-provider.md#performance-targets).
+* For the Netherite storage provider, see [Basic Scenarios](https://microsoft.github.io/durabletask-netherite/#/scenarios).
+* For the MSSQL storage provider, see [Orchestration Throughput Benchmarks](https://microsoft.github.io/durabletask-mssql/#/scaling?id=orchestration-throughput-benchmarks).
> [!TIP]
-> In some cases you can significantly increase the throughput of external events, activity fan-in, and entity operations by increasing the value of the `controlQueueBufferThreshold` setting in **host.json**. Increasing this value beyond its default causes the Durable Task Framework storage provider to use more memory to prefetch these events more aggressively, reducing delays associated with dequeueing messages from the Azure Storage control queues. For more information, see the [host.json](durable-functions-bindings.md#host-json) reference documentation.
-
-### High throughput processing
-
-The architecture of the Azure Storage backend puts certain limitations on the maximum theoretical performance and scalability of Durable Functions. If your testing shows that Durable Functions on Azure Storage won't meet your throughput requirements, you should consider instead using the [Netherite storage provider for Durable Functions](durable-functions-storage-providers.md#netherite).
-
-The Netherite storage backend was designed and developed by [Microsoft Research](https://www.microsoft.com/research). It uses [Azure Event Hubs](../../event-hubs/event-hubs-about.md) and the [FASTER](https://www.microsoft.com/research/project/faster/) database technology on top of [Azure Page Blobs](../../storage/blobs/storage-blob-pageblob-overview.md). The design of Netherite enables significantly higher-throughput processing of orchestrations and entities compared to other providers. In some benchmark scenarios, throughput was shown to increase by more than an order of magnitude when compared to the default Azure Storage provider.
-
-For more information on the supported storage providers for Durable Functions and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+> Unlike fan-out, fan-in operations are limited to a single VM. If your application uses the fan-out, fan-in pattern and you are concerned about fan-in performance, consider sub-dividing the activity function fan-out across multiple [sub-orchestrations](durable-functions-sub-orchestrations.md).
## Next steps > [!div class="nextstepaction"]
-> [Learn about disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md)
+> [Learn about the Azure Storage provider](durable-functions-azure-storage-provider.md)
azure-functions Durable Functions Serialization And Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-serialization-and-persistence.md
Title: Data persistence and serialization in Durable Functions - Azure
description: Learn how the Durable Functions extension for Azure Functions persists data Previously updated : 05/26/2022 Last updated : 07/18/2022 ms.devlang: csharp, java, javascript, python #Customer intent: As a developer, I want to understand what data is persisted to durable storage, how that data is serialized, and how I can customize it when it doesn't work the way my app needs it to.
ms.devlang: csharp, java, javascript, python
# Data persistence and serialization in Durable Functions (Azure Functions)
-Durable Functions automatically persists function parameters, return values, and other state to a durable backend in order to provide reliable execution. However, the amount and frequency of data persisted to durable storage can impact application performance and storage transaction costs. Depending on the type of data your application stores, data retention and privacy policies may also need to be considered.
+The Durable Functions runtime automatically persists function parameters, return values, and other state to the [task hub](durable-functions-task-hubs.md) in order to provide reliable execution. However, the amount and frequency of data persisted to durable storage can impact application performance and storage transaction costs. Depending on the type of data your application stores, data retention and privacy policies may also need to be considered.
-## Azure Storage
+## Task Hub Contents
-By default, Durable Functions persists data to queues, tables, and blobs in an [Azure Storage](https://azure.microsoft.com/services/storage/) account that you specify.
+Task hubs store the current state of instances, and any pending messages:
-### Queues
+* *Instance states* store the current status and history of an instance. For orchestration instances, this includes the runtime state, the orchestration history, inputs, outputs, and custom status. For entity instances, it includes the entity state.
+* *Messages* store function inputs or outputs, event payloads, and metadata that Durable Functions uses for internal purposes, like routing and end-to-end correlation.
-Durable Functions uses Azure Storage queues to reliably schedule all function executions. These queue messages contain function inputs or outputs, depending on whether the message is being used to schedule an execution or return a value back to a calling function. These queue messages also include additional metadata that Durable Functions uses for internal purposes, like routing and end-to-end correlation. After a function has finished executing in response to a received message, that message is deleted and the result of the execution may also be persisted to either Azure Storage Tables or Azure Storage Blobs.
+Messages are deleted after being processed, but instance states persist unless they're explicitly deleted by the application or an operator. In particular, an orchestration history remains in storage even after the orchestration completes.
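
If the application itself needs to remove a completed instance's state and history, it can do so through the Durable client API. The following is a minimal, hedged C# sketch; the function name and HTTP route are illustrative only:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class PurgeExample
{
    // Deletes the persisted state and history of a single orchestration instance.
    [FunctionName("PurgeInstance")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "delete", Route = "instances/{instanceId}/purge")] HttpRequest req,
        [DurableClient] IDurableOrchestrationClient client,
        string instanceId)
    {
        PurgeHistoryResult result = await client.PurgeInstanceHistoryAsync(instanceId);
        return new OkObjectResult(result);
    }
}
```
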
-Within a single [task hub](durable-functions-task-hubs.md), Durable Functions creates and adds messages to a *work-item* queue named `<taskhub>-workitem` for scheduling activity functions and one or more *control queues* named `<taskhub>-control-##` to schedule or resume orchestrator and entity functions. The number of control queues is equal to the number of partitions configured for your application. For more information about queues and partitions, see the [Performance and Scalability documentation](durable-functions-perf-and-scale.md).
+For an example of how states and messages represent the progress of an orchestration, see the [task hub execution example](durable-functions-task-hubs.md#execution-example).
-### Tables
-
-Once orchestrations process messages successfully, records of their resulting actions are persisted to the *History* table named `<taskhub>History`. Orchestration inputs, outputs, and custom status data is also persisted to the *Instances* table named `<taskhub>Instances`.
-
-### Blobs
-
-In most cases, Durable Functions doesn't use Azure Storage Blobs to persist data. However, queues and tables have [size limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-queue-storage-limits) that can prevent Durable Functions from persisting all of the required data into a storage row or queue message. For example, when a piece of data that needs to be persisted to a queue is greater than 45 KB when serialized, Durable Functions will compress the data and store it in a blob instead. When persisting data to blob storage in this way, Durable Function stores a reference to that blob in the table row or queue message. When Durable Functions needs to retrieve the data it will automatically fetch it from the blob. These blobs are stored in the blob container `<taskhub>-largemessages`.
-
-> [!NOTE]
-> The extra compression and blob operation steps for large messages can be expensive in terms of CPU and I/O latency costs. Additionally, Durable Functions needs to load persisted data in memory, and may do so for many different function executions at the same time. As a result, persisting large data payloads can cause high memory usage as well. To minimize memory overhead, consider persisting large data payloads manually (for example, in blob storage) and instead pass around references to this data. This way your code can load the data only when needed to avoid redundant loads during [orchestrator function replays](durable-functions-orchestrations.md#reliability). However, storing payloads to disk is *not* recommended since on-disk state is not guaranteed to be available since functions may execute on different VMs throughout their lifetimes.
+Where and how states and messages are represented in storage [depends on the storage provider](durable-functions-task-hubs.md#representation-in-storage). By default, Durable Functions uses the [Azure Storage provider](durable-functions-azure-storage-provider.md) which persists data to queues, tables, and blobs in an [Azure Storage](https://azure.microsoft.com/services/storage/) account that you specify.
### Types of data that is serialized and persisted The following is a list of the different types of data that will be serialized and persisted when using features of Durable Functions:
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md
Title: Durable Functions storage providers - Azure
description: Learn about the different storage providers for Durable Functions and how they compare Previously updated : 05/05/2021 Last updated : 07/18/2022 #Customer intent: As a developer, I want to understand what storage providers are available Durable Functions and which one I should choose. # Durable Functions storage providers
-Durable Functions automatically persists function parameters, return values, and other state to durable storage to guarantee reliable execution. The default configuration for Durable Functions stores this runtime state in an Azure Storage (classic) account. However, it's possible to configure Durable Functions v2.0 and above to use an alternate durable storage provider.
- Durable Functions is a set of Azure Functions triggers and bindings that are internally powered by the [Durable Task Framework](https://github.com/Azure/durabletask) (DTFx). DTFx supports various backend storage providers, including the Azure Storage provider used by Durable Functions. Starting in Durable Functions **v2.5.0**, users can configure their function apps to use DTFx storage providers other than the Azure Storage provider. > [!NOTE]
-> The choice to use storage providers other than Azure Storage should be made carefully. Most function apps running in Azure should use the default Azure Storage provider for Durable Functions. However, there are important cost, scalability, and data management tradeoffs that should be considered when deciding whether to use an alternate storage provider. This article describes many of these tradeoffs in detail.
->
-> Also note that it's not currently possible to migrate data from one storage provider to another. If you want to use a new storage provider, you should create a new app configured with the new storage provider.
+> For many function apps, the default Azure Storage provider for Durable Functions is likely to suffice, and is the easiest to use since it requires no extra configuration. However, there are cost, scalability, and data management tradeoffs that may favor the use of an alternate storage provider.
+
+Two alternate storage providers were developed for use with Durable Functions and the Durable Task Framework, namely the _Netherite_ storage provider and the _Microsoft SQL Server (MSSQL)_ storage provider. This article describes all three supported providers, compares them against each other, and provides basic information about how to get started using them.
-Two alternate DTFx storage providers were developed for use with Durable Functions, the _Netherite_ storage provider and the _Microsoft SQL Server (MSSQL)_ storage provider. This article describes all three supported providers, compares them against each other, and provides basic information about how to get started using them.
+> [!NOTE]
+> It's not currently possible to migrate data from one storage provider to another. If you want to use a new storage provider, you should create a new app configured with the new storage provider.
## Azure Storage
Additional properties may be set to customize the connection. See [Common proper
To use the Netherite storage provider, you must first add a reference to the [Microsoft.Azure.DurableTask.Netherite.AzureFunctions](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions) NuGet package in your **csproj** file (.NET apps) or your **extensions.proj** file (JavaScript, Python, and PowerShell apps).
-> [!NOTE]
-> The Netherite storage provider is not yet supported in apps that use [extension bundles](../functions-bindings-register.md#extension-bundles).
- The following host.json example shows the minimum configuration required to enable the Netherite storage provider. ```json
There are many significant tradeoffs between the various supported storage provi
|- |- |- |- | | Official support status | ✅ Generally available (GA) | ⚠ Public preview | ⚠ Public preview | | External dependencies | Azure Storage account (general purpose v1) | Azure Event Hubs<br/>Azure Storage account (general purpose) | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) or Azure SQL Database |
-| Local development and emulation options | [Azurite v3.12+](../../storage/common/storage-use-azurite.md) (cross platform)<br/>[Azure Storage Emulator](../../storage/common/storage-use-emulator.md) (Windows only) | In-memory emulation ([more information](https://microsoft.github.io/durabletask-netherite/#/emulation)) | SQL Server Developer Edition (supports [Windows](/sql/database-engine/install-windows/install-sql-server), [Linux](/sql/linux/sql-server-linux-setup), and [Docker containers](/sql/linux/sql-server-linux-docker-container-deployment)) |
+| Local development and emulation options | [Azurite v3.12+](../../storage/common/storage-use-azurite.md) (cross platform)<br/>[Azure Storage Emulator](../../storage/common/storage-use-emulator.md) (Windows only) | Supports in-memory emulation of task hubs ([more information](https://microsoft.github.io/durabletask-netherite/#/emulation)) | SQL Server Developer Edition (supports [Windows](/sql/database-engine/install-windows/install-sql-server), [Linux](/sql/linux/sql-server-linux-setup), and [Docker containers](/sql/linux/sql-server-linux-docker-container-deployment)) |
| Task hub configuration | Explicit | Explicit | Implicit by default ([more information](https://microsoft.github.io/durabletask-mssql/#/taskhubs)) | | Maximum throughput | Moderate | Very high | Moderate | | Maximum orchestration/entity scale-out (nodes) | 16 | 32 | N/A | | Maximum activity scale-out (nodes) | N/A | 32 | N/A | | Consumption plan support | ✅ Fully supported | ❌ Not supported | ❌ Not supported |
-| Elastic Premium plan support | ✅ Fully supported | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) |
+| Elastic Premium plan support | ✅ Fully supported | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) |
| [KEDA 2.0](https://keda.sh/) scaling support<br/>([more information](../functions-kubernetes-keda.md)) | ❌ Not supported | ❌ Not supported | ✅ Supported using the [MSSQL scaler](https://keda.sh/docs/scalers/mssql/) ([more information](https://microsoft.github.io/durabletask-mssql/#/scaling)) | | Support for [extension bundles](../functions-bindings-register.md#extension-bundles) (recommended for non-.NET apps) | ✅ Fully supported | ❌ Not supported | ❌ Not supported | | Price-performance configurable? | ❌ No | ✅ Yes (Event Hubs TUs and CUs) | ✅ Yes (SQL vCPUs) |
+| Managed Identity Support | ✅ Fully supported | ❌ Not supported | ⚠️ Requires runtime-driven scaling |
| Disconnected environment support | ❌ Azure connectivity required | ❌ Azure connectivity required | ✅ Fully supported | | Identity-based connections | ✅ Yes (preview) |❌ No | ❌ No |
azure-functions Durable Functions Task Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-task-hubs.md
# Task hubs in Durable Functions (Azure Functions)
-A *task hub* in [Durable Functions](durable-functions-overview.md) is a logical container for durable storage resources that are used for orchestrations and entities. Orchestrator, activity, and entity functions can only directly interact with each other when they belong to the same task hub.
+A *task hub* in [Durable Functions](durable-functions-overview.md) is a representation of the current state of the application in storage, including all the pending work. While a function app is running, the progress of orchestration, activity, and entity functions is continually stored in the task hub. This ensures that the application can resume processing where it left off if it needs to be restarted after being temporarily stopped or interrupted for some reason. It also allows the function app to scale its compute workers dynamically.
+
+![Diagram showing concept of function app and task hub concept.](./media/durable-functions-task-hubs/taskhub.png)
+
+Conceptually, a task hub stores the following information:
+
+* The **instance states** of all orchestration and entity instances.
+* The messages to be processed, including
+ * any **activity messages** that represent activities waiting to be run.
+ * any **instance messages** that are waiting to be delivered to instances.
+
+The difference between activity and instance messages is that activity messages are stateless, and can thus be processed anywhere, while instance messages need to be delivered to a particular stateful instance (orchestration or entity), identified by its instance ID.
+
+Internally, each storage provider may use a different organization to represent instance states and messages. For example, messages are stored in Azure Storage Queues by the Azure Storage provider, but in relational tables by the MSSQL provider. These differences don't matter as far as the design of the application is concerned, but some of them may influence the performance characteristics. We discuss them in the section [Representation in storage](durable-functions-task-hubs.md#representation-in-storage) below.
+
+## Work items
+
+The activity messages and instance messages in the task hub represent the work that the function app needs to process. While the function app is running, it continuously fetches *work items* from the task hub. Each work item processes one or more messages. We distinguish two types of work items:
+
+* **Activity work items**: Run an activity function to process an activity message.
+* **Orchestrator work items**: Run an orchestrator or entity function to process one or more instance messages.
+
+Workers can process multiple work items at the same time, subject to the [configured per-worker concurrency limits](durable-functions-perf-and-scale.md#concurrency-throttles).
+
+Once a worker completes a work item, it commits the effects back to the task hub. These effects vary by the type of function that was executed:
+
+* A completed activity function creates an instance message containing the result, addressed to the parent orchestrator instance.
+* A completed orchestrator function updates the orchestration state and history, and may create new messages.
+* A completed entity function updates the entity state, and may also create new instance messages.
+
+For orchestrations, each work item represents one **episode** of that orchestration's execution. An episode starts when there are new messages for the orchestrator to process. Such a message may indicate that the orchestration should start; or it may indicate that an activity, entity call, timer, or suborchestration has completed; or it can represent an external event. The message triggers a work item that allows the orchestrator to process the result and to continue with the next episode. That episode ends when the orchestrator either completes, or reaches a point where it must wait for new messages.
+
+### Execution example
+
+Consider a fan-out-fan-in orchestration that starts two activities in parallel, and waits for both of them to complete:
+
+# [C#](#tab/csharp)
+
+```csharp
+[FunctionName("Example")]
+public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
+{
+ Task t1 = context.CallActivityAsync<int>("MyActivity", 1);
+ Task t2 = context.CallActivityAsync<int>("MyActivity", 2);
+ await Task.WhenAll(t1, t2);
+}
+```
+
+# [JavaScript](#tab/javascript)
+
+```JavaScript
+const df = require("durable-functions");
+
+module.exports = df.orchestrator(function*(context){
+ const tasks = [];
+ tasks.push(context.df.callActivity("MyActivity", 1));
+ tasks.push(context.df.callActivity("MyActivity", 2));
+ yield context.df.Task.all(tasks);
+});
+```
+
+# [Python](#tab/python)
+
+```python
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+    tasks = []
+    tasks.append(context.call_activity("MyActivity", 1))
+    tasks.append(context.call_activity("MyActivity", 2))
+    yield context.task_all(tasks)
+
+main = df.Orchestrator.create(orchestrator_function)
+```
+
+# [Java](#tab/java)
+
+```java
+@FunctionName("Example")
+public String exampleOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ Task<Void> t1 = ctx.callActivity("MyActivity", 1);
+ Task<Void> t2 = ctx.callActivity("MyActivity", 2);
+ ctx.allOf(List.of(t1, t2)).await();
+ });
+}
+```
+++
+After this orchestration is initiated by a client, it's processed by the function app as a sequence of work items. Each completed work item updates the task hub state when it commits. These are the steps:
+
+1. A client requests to start a new orchestration with instance-id "123". After the client completes this request, the task hub contains a placeholder for the orchestration state and an instance message:
+
+ ![workitems-illustration-step-1](./media/durable-functions-task-hubs/work-items-1.png)
+
+ The label `ExecutionStarted` is one of many [history event types](https://github.com/Azure/durabletask/tree/main/src/DurableTask.Core/History#readme) that identify the various types of messages and events participating in an orchestration's history.
+
+2. A worker executes an *orchestrator work item* to process the `ExecutionStarted` message. It calls the orchestrator function, which starts executing the orchestration code. This code schedules two activities and then stops executing while it waits for the results. After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-2](./media/durable-functions-task-hubs/work-items-2.png)
+
+ The runtime state is now `Running`, two new `TaskScheduled` messages were added, and the history now contains the five events `OrchestratorStarted`, `ExecutionStarted`, `TaskScheduled`, `TaskScheduled`, `OrchestratorCompleted`. These events represent the first episode of this orchestration's execution.
+
+3. A worker executes an *activity work item* to process one of the `TaskScheduled` messages. It calls the activity function with input "2". When the activity function completes, it creates a `TaskCompleted` message containing the result. After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-3](./media/durable-functions-task-hubs/work-items-3.png)
+
+4. A worker executes an *orchestrator work item* to process the `TaskCompleted` message. If the orchestration is still cached in memory, it can just resume execution. Otherwise, the worker first [replays the history to recover the current state of the orchestration](durable-functions-orchestrations.md#reliability). Then it continues the orchestration, delivering the result of the activity. After receiving this result, the orchestration is still waiting for the result of the other activity, so it once more stops executing. After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-4](./media/durable-functions-task-hubs/work-items-4.png)
+
+ The orchestration history now contains three more events `OrchestratorStarted`, `TaskCompleted`, `OrchestratorCompleted`. These events represent the second episode of this orchestration's execution.
+
+5. A worker executes an *activity work item* to process the remaining `TaskScheduled` message. It calls the activity function with input "1". After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-5](./media/durable-functions-task-hubs/work-items-5.png)
+
+6. A worker executes another *orchestrator work item* to process the `TaskCompleted` message. After receiving this second result, the orchestration completes. After the worker commits this work item, the task hub contains
+
+ ![workitems-illustration-step-6](./media/durable-functions-task-hubs/work-items-6.png)
+
+ The runtime state is now `Completed`, and the orchestration history now contains four more events `OrchestratorStarted`, `TaskCompleted`, `ExecutionCompleted`, `OrchestratorCompleted`. These events represent the third and final episode of this orchestration's execution.
+
+The final history for this orchestration's execution then contains the 12 events `OrchestratorStarted`, `ExecutionStarted`, `TaskScheduled`, `TaskScheduled`, `OrchestratorCompleted`, `OrchestratorStarted`, `TaskCompleted`, `OrchestratorCompleted`, `OrchestratorStarted`, `TaskCompleted`, `ExecutionCompleted`, `OrchestratorCompleted`.
> [!NOTE]
-> This document describes the details of task hubs in a way that is specific to the default [Azure Storage provider for Durable Functions](durable-functions-storage-providers.md#azure-storage). If you are using a non-default storage provider for your Durable Functions app, you can find detailed task hub documentation in the provider-specific documentation:
->
-> * [Task Hub information for the Netherite storage provider](https://microsoft.github.io/durabletask-netherite/#/storage)
-> * [Task Hub information for the Microsoft SQL (MSSQL) storage provider](https://microsoft.github.io/durabletask-mssql/#/taskhubs)
->
-> For more information on the various storage provider options and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+> The schedule shown isn't the only one: there are many slightly different possible schedules. For example, if the second activity completes earlier, both `TaskCompleted` instance messages may be processed by a single work item. In that case, the execution history is a bit shorter, because there are only two episodes, and it contains the following 10 events: `OrchestratorStarted`, `ExecutionStarted`, `TaskScheduled`, `TaskScheduled`, `OrchestratorCompleted`, `OrchestratorStarted`, `TaskCompleted`, `TaskCompleted`, `ExecutionCompleted`, `OrchestratorCompleted`.
+
+## Task hub management
+
+Next, let's take a closer look at how task hubs are created or deleted, how to use task hubs correctly when running multiple function apps, and how the content of task hubs can be inspected.
+
+### Creation and deletion
-If multiple function apps share a storage account, each function app *must* be configured with a separate task hub name. This requirement also applies to staging slots: each staging slot must be configured with a unique task hub name. A single storage account can contain multiple task hubs. This restriction generally applies to other storage providers as well.
+An empty task hub with all the required resources is automatically created in storage when a function app is started for the first time.
+
+If using the default Azure Storage provider, no extra configuration is required. Otherwise, follow the [instructions for configuring storage providers](durable-functions-storage-providers.md#configuring-alternate-storage-providers) to ensure that the storage provider can properly provision and access the storage resources required for the task hub.
> [!NOTE]
-> The exception to the task hub sharing rule is if you are configuring your app for regional disaster recovery. See the [disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md) article for more information.
+> The task hub is *not* automatically deleted when you stop or delete the function app. You must delete the task hub, its contents, or the containing storage account manually if you no longer want to keep that data.
+
+> [!TIP]
+> In a development scenario, you may need to restart from a clean state often. To do so quickly, you can just [change the configured task hub name](durable-functions-task-hubs.md#task-hub-names). This will force the creation of a new, empty task hub when you restart the application. Be aware that the old data is not deleted in this case.
+
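For example, a **host.json** sketch that switches the app to a fresh task hub simply by changing the configured name (the name shown is arbitrary; it only needs to follow the naming rules described later in this article):

```json
{
  "extensions": {
    "durableTask": {
      "hubName": "MyTaskHubV2"
    }
  }
}
```
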
+### Multiple function apps
+
+If multiple function apps share a storage account, each function app *must* be configured with a separate [task hub name](durable-functions-task-hubs.md#task-hub-names). This requirement also applies to staging slots: each staging slot must be configured with a unique task hub name. A single storage account can contain multiple task hubs. This restriction generally applies to other storage providers as well.
The following diagram illustrates one task hub per function app in shared and dedicated Azure Storage accounts.
-![Diagram showing shared and dedicated storage accounts.](./media/durable-functions-task-hubs/task-hubs-storage.png)
+![Diagram showing shared and dedicated storage accounts.](./media/durable-functions-task-hubs/multiple-apps.png)
+
+> [!NOTE]
+> The exception to the task hub sharing rule is if you are configuring your app for regional disaster recovery. See the [disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md) article for more information.
+
+### Content inspection
+
+There are several common ways to inspect the contents of a task hub:
+
+1. Within a function app, the client object provides methods to query the instance store; a hedged code sketch follows this list. To learn more about what types of queries are supported, see the [Instance Management](durable-functions-instance-management.md) article.
+2. Similarly, the [HTTP API](durable-functions-http-features.md) offers REST requests to query the state of orchestrations and entities. See the [HTTP API Reference](durable-functions-http-api.md) for more details.
+3. The [Durable Functions Monitor](https://github.com/microsoft/DurableFunctionsMonitor) tool can inspect task hubs and offers various options for visual display.
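
As a rough sketch of the first option, the .NET in-process client API can be used to page through recent instances; the function name and filter values below are illustrative assumptions:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class TaskHubInspection
{
    // Lists orchestration instances created during the last 24 hours.
    [FunctionName("ListRecentInstances")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        [DurableClient] IDurableOrchestrationClient client)
    {
        var condition = new OrchestrationStatusQueryCondition
        {
            CreatedTimeFrom = DateTime.UtcNow.AddDays(-1),
            PageSize = 100
        };

        // Returns one page of matching instances; use the continuation token for more pages.
        OrchestrationStatusQueryResult page =
            await client.ListInstancesAsync(condition, CancellationToken.None);

        return new OkObjectResult(page.DurableOrchestrationState);
    }
}
```
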
+
+For some of the storage providers, it is also possible to inspect the task hub by going directly to the underlying storage:
+
+* If using the Azure Storage provider, the instance states are stored in the [Instance Table](durable-functions-azure-storage-provider.md#instances-table) and the [History Table](durable-functions-azure-storage-provider.md#history-table) that can be inspected using tools such as Azure Storage Explorer.
+* If using the MSSQL storage provider, SQL queries and tools can be used to inspect the task hub contents inside the database.
+
+## Representation in storage
+
+Each storage provider uses a different internal organization to represent task hubs in storage. Understanding this organization, while not required, can be helpful when troubleshooting a function app or when trying to ensure performance, scalability, or cost targets. We thus briefly explain, for each storage provider, how the data is organized in storage. For more information on the various storage provider options and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
+
+### Azure Storage provider
+
+The Azure Storage provider represents the task hub in storage using the following components:
+
+* Two Azure Tables store the instance states.
+* One Azure Queue stores the activity messages.
+* One or more Azure Queues store the instance messages. Each of these so-called *control queues* represents a [partition](durable-functions-perf-and-scale.md#partition-count) that is assigned a subset of all instance messages, based on the hash of the instance ID (see the sketch after this list).
+* A few extra blob containers used for lease blobs and/or large messages.
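
The routing of instance messages to control queues can be pictured with the following conceptual C# sketch; it only illustrates the idea of hashing the instance ID and is not the provider's actual hash function:

```csharp
public static class ControlQueueRouting
{
    // Illustrative only: maps an instance ID to one of the control-queue partitions.
    // The real provider uses its own stable hash; GetHashCode is shown purely for illustration.
    public static int GetPartitionIndex(string instanceId, int partitionCount)
    {
        uint hash = (uint)instanceId.GetHashCode();
        return (int)(hash % (uint)partitionCount);
    }
}
```
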
+
+For example, a task hub named `xyz` with `PartitionCount = 4` contains the following queues and tables:
+
+![Diagram showing the Azure Storage provider storage organization for 4 control queues.](./media/durable-functions-task-hubs/azure-storage.png)
+
+For more information about these components and how task hubs are represented by the Azure Storage provider, see the [Azure Storage provider](durable-functions-azure-storage-provider.md) documentation.
+
+### Netherite storage provider
+
+Netherite partitions all of the task hub state into a specified number of partitions.
+In storage, the following resources are used:
+
+* One Azure Storage blob container that contains all the blobs, grouped by partition.
+* One Azure Table that contains published metrics about the partitions.
+* An Azure Event Hubs namespace for delivering messages between partitions.
+
+For example, a task hub named `mytaskhub` with `PartitionCount = 32` is represented in storage as follows:
+
+![Diagram showing Netherite storage organization for 32 partitions.](./media/durable-functions-task-hubs/netherite-storage.png)
+
+> [!NOTE]
+> All of the task hub state is stored inside the `x-storage` blob container. The `DurableTaskPartitions` table and the EventHubs namespace contain redundant data: if their contents are lost, they can be automatically recovered. Therefore it is not necessary to configure the Azure Event Hubs namespace to retain messages past the default expiration time.
+
+Netherite uses an event-sourcing mechanism, based on a log and checkpoints, to represent the current state of a partition. Both block blobs and page blobs are used. It is not possible to read this format from storage directly, so the function app has to be running when querying the instance store.
+
+For more information on task hubs for the Netherite storage provider, see [Task Hub information for the Netherite storage provider](https://microsoft.github.io/durabletask-netherite/#/storage).
+
+### MSSQL storage provider
+
+All task hub data is stored in a single relational database, using several tables:
+
+* The `dt.Instances` and `dt.History` tables store the instance states.
+* The `dt.NewEvents` table stores the instance messages.
+* The `dt.NewTasks` table stores the activity messages.
-## Azure Storage resources
-A task hub in Azure Storage consists of the following resources:
+![Diagram showing MSSQL storage organization.](./media/durable-functions-task-hubs/mssql-storage.png)
-* One or more control queues.
-* One work-item queue.
-* One history table.
-* One instances table.
-* One storage container containing one or more lease blobs.
-* A storage container containing large message payloads, if applicable.
+To enable multiple task hubs to coexist independently in the same database, each table includes a `TaskHub` column as part of its primary key. Unlike the other two providers, the MSSQL provider doesn't have a concept of partitions.
-All of these resources are created automatically in the configured Azure Storage account when orchestrator, entity, or activity functions run or are scheduled to run. The [Performance and Scale](durable-functions-perf-and-scale.md) article explains how these resources are used.
+For more information on task hubs for the MSSQL storage provider, see [Task Hub information for the Microsoft SQL (MSSQL) storage provider](https://microsoft.github.io/durabletask-mssql/#/taskhubs).
## Task hub names
-Task hubs in Azure Storage are identified by a name that conforms to these rules:
+Task hubs are identified by a name that must conform to these rules:
* Contains only alphanumeric characters * Starts with a letter
public HttpResponseMessage httpStart(
> [!NOTE] > Configuring task hub names in client binding metadata is only necessary when you use one function app to access orchestrations and entities in another function app. If the client functions are defined in the same function app as the orchestrations and entities, you should avoid specifying task hub names in the binding metadata. By default, all client bindings get their task hub metadata from the **host.json** settings.
-Task hub names in Azure Storage must start with a letter and consist of only letters and numbers. If not specified, a default task hub name will be used as shown in the following table:
+Task hub names must start with a letter and consist of only letters and numbers. If not specified, a default task hub name will be used as shown in the following table:
| Durable extension version | Default task hub name | | - | - |
-| 2.x | When deployed in Azure, the task hub name is derived from the name of the _function app_. When running outside of Azure, the default task hub name is `TestHubName`. |
+| 2.x | When deployed in Azure, the task hub name is derived from the name of the *function app*. When running outside of Azure, the default task hub name is `TestHubName`. |
| 1.x | The default task hub name for all environments is `DurableFunctionsHub`. | For more information about the differences between extension versions, see the [Durable Functions versions](durable-functions-versions.md) article.
azure-functions Durable Functions Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-versioning.md
When doing side-by-side deployments in Azure Functions or Azure App Service, we
## Next steps > [!div class="nextstepaction"]
-> [Learn how to handle performance and scale issues](durable-functions-perf-and-scale.md)
+> [Learn about using and choosing storage providers](durable-functions-storage-providers.md)
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Functions uses Blob storage to persist important information, such as [function
### In-region data residency
-When all customer data must remain within a single region, the storage account associated with the function app must be one with [in-region redundancy](../storage/common/storage-redundancy.md). An in-region redundant storage account also must be used with [Azure Durable Functions](./durable/durable-functions-perf-and-scale.md#storage-account-selection).
+When all customer data must remain within a single region, the storage account associated with the function app must be one with [in-region redundancy](../storage/common/storage-redundancy.md). An in-region redundant storage account also must be used with [Azure Durable Functions](./durable/durable-functions-azure-storage-provider.md#storage-account-selection).
Other platform-managed customer data is only stored within the region when hosting in an internally load-balanced App Service Environment (ASE). To learn more, see [ASE zone redundancy](../app-service/environment/zone-redundancy.md#in-region-data-residency).
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
description: Capture exceptions from ASP.NET apps along with request telemetry.
ms.devlang: csharp Previously updated : 05/19/2021 Last updated : 08/19/2022
namespace MVC2App.Controllers
{ if (filterContext != null && filterContext.HttpContext != null && filterContext.Exception != null) {
- //If customError is Off, then AI HTTPModule will report the exception
+ //The attribute should track exceptions only when CustomErrors setting is On
+ //if CustomErrors is Off, exceptions will be caught by AI HTTP Module
if (filterContext.HttpContext.IsCustomErrorEnabled) { //or reuse instance (recommended!). see note above var ai = new TelemetryClient();
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Azure Application Insights description: Application performance monitoring for Azure VM and Azure virtual machine scale sets. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/26/2019 Last updated : 08/19/2022 ms.devlang: csharp, java, javascript, python
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor | Microsoft Docs description: This article discusses server firewall exceptions that are required by Azure Monitor Previously updated : 01/27/2020 Last updated : 08/19/2022
[Azure Monitor](../overview.md) uses several IP addresses. Azure Monitor is made up of core platform metrics and logs in addition to Log Analytics and Application Insights. You might need to know IP addresses if the app or infrastructure that you're monitoring is hosted behind a firewall. > [!NOTE]
-> Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic with the exception of availability monitoring and webhooks, which require inbound firewall rules.
+> Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic with the exception of availability monitoring and webhook action groups, which also require inbound firewall rules.
You can use Azure [network service tags](../../virtual-network/service-tags-overview.md) to manage access if you're using Azure network security groups. If you're managing access for hybrid/on-premises resources, you can download the equivalent IP address lists as [JSON files](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files), which are updated each week. To cover all the exceptions in this article, use the service tags `ActionGroup`, `ApplicationInsightsAvailability`, and `AzureMonitor`.
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
To use the programmatic configuration and attach the Application Insights agent
And invoke the `attach()` method of the `com.microsoft.applicationinsights.attach.ApplicationInsights` class.
-> [!TIP]
-> ⚠ JRE is not supported.
+> [!WARNING]
+>
+> JRE is not supported.
-> [!TIP]
-> ⚠ Read-only file system is not supported.
+> [!WARNING]
+>
+> Read-only file system is not supported.
-> [!TIP]
-> ⚠ The invocation must be requested at the beginning of the `main` method.
+> [!WARNING]
+>
+> The invocation must be requested at the beginning of the `main` method.
Example:
public class SpringBootApp {
If you want to use a JSON configuration: * The `applicationinsights.json` file has to be in the classpath
-* Or you can use an environmental variable or a system property, more in the _Configuration file path_ part on [this page](../app/java-standalone-config.md).
+* Or you can use an environment variable or a system property; for more information, see the _Configuration file path_ section on [this page](../app/java-standalone-config.md). Spring properties defined in a Spring _.properties_ file are not supported.
> [!TIP]
azure-monitor Opencensus Python Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-dependency.md
Title: Dependency Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs description: Monitor dependency calls for your Python apps via OpenCensus Python. Previously updated : 10/15/2019 Last updated : 8/19/2022 ms.devlang: python + # Track dependencies with OpenCensus Python
OPENCENSUS = {
} ```
+You can find a Django sample application that uses dependencies in the Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
+ ## Dependencies with "mysql" integration Track your MYSQL dependencies with the OpenCensus `mysql` integration. This integration supports the [mysql-connector](https://pypi.org/project/mysql-connector-python/) library.
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
Title: Incoming Request Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs description: Monitor request calls for your Python apps via OpenCensus Python. Previously updated : 10/15/2019 Last updated : 8/19/2022 ms.devlang: python
First, instrument your Python application with latest [OpenCensus Python SDK](./
} ```
+You can find a Django sample application in the sample Azure Monitor OpenCensus Python samples repository located [here](https://github.com/givenscj/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
## Tracking Flask applications 1. Download and install `opencensus-ext-flask` from [PyPI](https://pypi.org/project/opencensus-ext-flask/) and instrument your application with the `flask` middleware. Incoming requests sent to your `flask` application will be tracked.
First, instrument your Python application with latest [OpenCensus Python SDK](./
> [!NOTE] > To run Flask under uWSGI in a Docker environment, you must first add `lazy-apps = true` to the uWSGI configuration file (uwsgi.ini). For more information, see the [issue description](https://github.com/census-instrumentation/opencensus-python/issues/660).
+You can find a Flask sample application that tracks requests in the Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/flask_sample).
## Tracking Pyramid applications 1. Download and install `opencensus-ext-pyramid` from [PyPI](https://pypi.org/project/opencensus-ext-pyramid/) and instrument your application with the `pyramid` tween. Incoming requests sent to your `pyramid` application will be tracked.
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs description: Provides instructions to wire up OpenCensus Python with Azure Monitor Previously updated : 10/12/2021 Last updated : 8/19/2022 ms.devlang: python
You may have noted that OpenCensus is converging into [OpenTelemetry](https://op
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.-- Python installation. This article uses [Python 3.7.0](https://www.python.org/downloads/release/python-370/), although other versions will likely work with minor changes. The Opencensus Python SDK only supports Python v2.7 and v3.4+.-- Create an Application Insights [resource](./create-new-resource.md). You'll be assigned your own instrumentation key (ikey) for your resource. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
Install the OpenCensus Azure Monitor exporters:
python -m pip install opencensus-ext-azure ```
-> [!NOTE]
-> The `python -m pip install opencensus-ext-azure` command assumes that you have a `PATH` environment variable set for your Python installation. If you haven't configured this variable, you need to give the full directory path to where your Python executable is located. The result is a command like this: `C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\python.exe -m pip install opencensus-ext-azure`.
-
-The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They're trace, metrics, and logs. For more information on these telemetry types, see [the data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
+The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They are `trace`, `metrics`, and `logs`. For more information on these telemetry types, see [the data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
## Telemetry type mappings
Here are the exporters that OpenCensus provides mapped to the types of telemetry
main() ```
-1. The exporter sends log data to Azure Monitor. You can find the data under `traces`.
+1. The exporter sends log data to Azure Monitor. You can find the data under `traces`.
> [!NOTE] > In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you'll see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md).
For more detailed information about how to use queries and logs, see [Logs in Az
* [Customization](https://github.com/census-instrumentation/opencensus-python/blob/master/README.rst#customization) * [Azure Monitor Exporters on GitHub](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure) * [OpenCensus Integrations](https://github.com/census-instrumentation/opencensus-python#extensions)
-* [Azure Monitor Sample Applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python)
+* [Azure Monitor Sample Applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor)
## Next steps
azure-monitor Status Monitor V2 Detailed Instructions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-detailed-instructions.md
Install-Module : The 'Install-Module' command was found in the module 'PowerShel
loaded. For more information, run 'Import-Module PowerShellGet'. Import-Module : File C:\Program Files\WindowsPowerShell\Modules\PackageManagement\1.3.1\PackageManagement.psm1 cannot
-be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at
-https:/go.microsoft.com/fwlink/?LinkID=135170.
+be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https://go.microsoft.com/fwlink/?LinkID=135170.
``` - ## Prerequisites for PowerShell Audit your instance of PowerShell by running the `$PSVersionTable` command.
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
Predictive autoscale adheres to the scaling boundaries you've set for your virtu
*Forecast only* allows you to view your predicted CPU forecast without triggering the scaling action based on the prediction. You can then compare the forecast with your actual workload patterns to build confidence in the prediction models before you enable the predictive autoscale feature.
-## Public preview support, availability, and limitations
+## Public preview support and limitations
>[!NOTE] > This release is a public preview. We're testing and gathering feedback for future releases. As such, we do not provide production-level support for this feature. Support is best effort. Send feature suggestions or feedback on predictive autoscale to predautoscalesupport@microsoft.com.
-During public preview, predictive autoscale is only available in the following regions:
--- West Central US-- West US2-- UK South-- UK West-- Southeast Asia-- East Asia-- Australia East-- Australia South east-- Canada Central-- Canada East- The following limitations apply during public preview. Predictive autoscale: - Only works for workloads exhibiting cyclical CPU usage patterns.
For more information on Azure Resource Manager templates, see [Resource Manager
This section answers common questions.
+### Why is CPU percentage over 100 percent on predictive charts?
+The predictive chart shows the cumulative load for all machines in the scale set. If you have 5 VMs in a scale set, the maximum cumulative load for all VMs will be 500%, that is, five times the 100% maximum CPU load of each VM.
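As a quick arithmetic illustration of the cumulative-load statement above (the function name and VM count below are made up for the example):

```python
# Illustrative arithmetic only: the predictive chart sums the CPU capacity of
# every VM in the scale set, so each VM contributes up to 100%.
def max_cumulative_cpu_percent(vm_count: int) -> int:
    return vm_count * 100

print(max_cumulative_cpu_percent(5))  # 500
```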
+ ### What happens over time when you turn on predictive autoscale for a virtual machine scale set? Predictive autoscale uses the history of a running virtual machine scale set. If your scale set has been running for less than 7 days, you'll receive a message that the model is being trained. For more information, see the [no predictive data message](#errors-and-warnings). Predictions improve as time goes by and achieve maximum accuracy 15 days after the virtual machine scale set is created.
The modeling works best with workloads that exhibit periodicity. We recommend th
Standard autoscaling is a necessary fallback if the predictive model doesn't work well for your scenario. Standard autoscale will cover unexpected load spikes, which aren't part of your typical CPU load pattern. It also provides a fallback if an error occurs in retrieving the predictive data.
+### Which rule will take effect if both predictive and standard autoscale rules are set?
+Standard autoscale rules are used if there's an unexpected spike in the CPU load or if an error occurs when retrieving predictive data.
+
+We use the threshold set in the standard autoscale rules to understand when you'd like to scale out and by how many instances. If you want your VM scale set to scale out when the CPU usage exceeds 70%, and actual or predicted data shows that CPU usage is or will be over 70%, then a scale-out will occur.
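The sketch below is only a conceptual summary of that decision, assuming the 70% threshold from the example; it isn't how the autoscale service is implemented, and the function name is hypothetical.

```python
# Conceptual sketch of the scale-out decision described above (illustrative only).
def should_scale_out(actual_cpu_percent: float,
                     predicted_cpu_percent: float,
                     threshold_percent: float = 70.0) -> bool:
    """Scale out when either the observed or the forecast CPU load
    exceeds the threshold set in the standard autoscale rule."""
    return (actual_cpu_percent > threshold_percent
            or predicted_cpu_percent > threshold_percent)

print(should_scale_out(actual_cpu_percent=55.0, predicted_cpu_percent=82.0))  # True
```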
+ ## Errors and warnings This section addresses common errors and warnings.
Learn more about autoscale in the following articles:
- [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md) - [Best practices for Azure Monitor autoscale](./autoscale-best-practices.md) - [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)-- [Autoscale REST API](/rest/api/monitor/autoscalesettings)
+- [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md
# Monitor your Kubernetes cluster performance with Container insights
-With Container insights, you can use the performance charts and health status to monitor the workload of Kubernetes clusters hosted on Azure Kubernetes Service (AKS), Azure Stack, or other environment from two perspectives. You can monitor directly from the cluster, or you can view all clusters in a subscription from Azure Monitor. Viewing Azure Container Instances is also possible when monitoring a specific AKS cluster.
+With Container insights, you can use the performance charts and health status to monitor the workload of Kubernetes clusters hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment from two perspectives. You can monitor directly from the cluster. You can also view all clusters in a subscription from Azure Monitor. Viewing Azure Container Instances is also possible when you're monitoring a specific AKS cluster.
-This article helps you understand the two perspectives, and how Azure Monitor helps you quickly assess, investigate, and resolve detected issues.
+This article helps you understand the two perspectives and how Azure Monitor helps you quickly assess, investigate, and resolve detected issues.
For information about how to enable Container insights, see [Onboard Container insights](container-insights-onboard.md).
-Azure Monitor provides a multi-cluster view that shows the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019 deployed across resource groups in your subscriptions. It shows clusters discovered across all environments that aren't monitored by the solution. You can immediately understand cluster health, and from here, you can drill down to the node and controller performance page or navigate to see performance charts for the cluster. For AKS clusters that were discovered and identified as unmonitored, you can enable monitoring for them at any time.
+Azure Monitor provides a multi-cluster view that shows the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019 deployed across resource groups in your subscriptions. It shows clusters discovered across all environments that aren't monitored by the solution.
-The main differences in monitoring a Windows Server cluster with Container insights compared to a Linux cluster are described in [Feature of Container insights](container-insights-overview.md#features-of-container-insights) in the overview article.
+With this view, you can immediately understand cluster health. From here, you can drill down to the node and controller performance page or navigate to see performance charts for the cluster. For AKS clusters that were discovered and identified as unmonitored, you can enable monitoring for them at any time.
+The main differences in monitoring a Windows Server cluster with Container insights compared to a Linux cluster are described in [Features of Container insights](container-insights-overview.md#features-of-container-insights) in the overview article.
## Multi-cluster view from Azure Monitor To view the health status of all Kubernetes clusters deployed, select **Monitor** from the left pane in the Azure portal. Under the **Insights** section, select **Containers**.
-![Azure Monitor multi-cluster dashboard example](./media/container-insights-analyze/azmon-containers-multiview.png)
+![Screenshot that shows an Azure Monitor multi-cluster dashboard example.](./media/container-insights-analyze/azmon-containers-multiview.png)
You can scope the results presented in the grid to show clusters that are:
-* **Azure** - AKS and AKS-Engine clusters hosted in Azure Kubernetes Service
-* **Azure Stack (Preview)** - AKS-Engine clusters hosted on Azure Stack
-* **Non-Azure (Preview)** - Kubernetes clusters hosted on-premises
-* **All** - View all the Kubernetes clusters hosted in Azure, Azure Stack, and on-premises environments that are onboarded to Container insights
+* **Azure**: AKS and AKS Engine clusters hosted in Azure Kubernetes Service.
+* **Azure Stack (Preview)**: AKS Engine clusters hosted on Azure Stack.
+* **Non-Azure (Preview)**: Kubernetes clusters hosted on-premises.
+* **All**: View all the Kubernetes clusters hosted in Azure, Azure Stack, and on-premises environments that are onboarded to Container insights.
-To view clusters from a specific environment, select it from the **Environments** pill on the top-left corner of the page.
+To view clusters from a specific environment, select it from **Environment** in the upper-left corner.
-![Environment pill selector example](./media/container-insights-analyze/clusters-multiview-environment-pill.png)
+![Screenshot that shows an Environment selector example.](./media/container-insights-analyze/clusters-multiview-environment-pill.png)
On the **Monitored clusters** tab, you learn the following: -- How many clusters are in a critical or unhealthy state, versus how many are healthy or not reporting (referred to as an Unknown state).-- Whether all of the [Azure Kubernetes Engine (AKS-engine)](https://github.com/Azure/aks-engine) deployments are healthy.
+- How many clusters are in a critical or unhealthy state versus how many are healthy or not reporting (referred to as an Unknown state).
+- Whether all of the [Azure Kubernetes Engine (AKS Engine)](https://github.com/Azure/aks-engine) deployments are healthy.
- How many nodes and user and system pods are deployed per cluster. - How much disk space is available and if there's a capacity issue.
The health statuses included are:
* **Misconfigured**: Container insights wasn't configured correctly in the specified workspace. * **No data**: Data hasn't reported to the workspace for the last 30 minutes.
-Health state calculates overall cluster status as the *worst of* the three states with one exception. If any of the three states is Unknown, the overall cluster state shows **Unknown**.
+Health state calculates the overall cluster status as the *worst of* the three states with one exception. If any of the three states is Unknown, the overall cluster state shows **Unknown**.
The following table provides a breakdown of the calculation that controls the health states for a monitored cluster on the multi-cluster view.
Access to Container insights is available directly from an AKS cluster by select
- Containers >[!NOTE]
->The experience described in the remainder of this article are also applicable for viewing performance and health status of your Kubernetes clusters hosted on Azure Stack or other environment when selected from the multi-cluster view.
+>The experiences described in the remainder of this article are also applicable for viewing performance and health status of your Kubernetes clusters hosted on Azure Stack or another environment when selected from the multi-cluster view.
The default page opens and displays four line performance charts that show key performance metrics of your cluster.
-![Example performance charts on the Cluster tab](./media/container-insights-analyze/containers-cluster-perfview.png)
+![Screenshot that shows example performance charts on the Cluster tab.](./media/container-insights-analyze/containers-cluster-perfview.png)
The performance charts display four performance metrics: - **Node CPU utilization&nbsp;%**: An aggregated perspective of CPU utilization for the entire cluster. To filter the results for the time range, select **Avg**, **Min**, **50th**, **90th**, **95th**, or **Max** in the percentiles selector above the chart. The filters can be used either individually or combined. - **Node memory utilization&nbsp;%**: An aggregated perspective of memory utilization for the entire cluster. To filter the results for the time range, select **Avg**, **Min**, **50th**, **90th**, **95th**, or **Max** in the percentiles selector above the chart. The filters can be used either individually or combined.-- **Node count**: A node count and status from Kubernetes. Statuses of the cluster nodes represented are Total, Ready, and Not Ready. They can be filtered individually or combined in the selector above the chart.-- **Active pod count**: A pod count and status from Kubernetes. Statuses of the pods represented are Total, Pending, Running, Unknown, Succeeded, or Failed. They can be filtered individually or combined in the selector above the chart.
+- **Node count**: A node count and status from Kubernetes. Statuses of the cluster nodes represented are **Total**, **Ready**, and **Not Ready**. They can be filtered individually or combined in the selector above the chart.
+- **Active pod count**: A pod count and status from Kubernetes. Statuses of the pods represented are **Total**, **Pending**, **Running**, **Unknown**, **Succeeded**, or **Failed**. They can be filtered individually or combined in the selector above the chart.
Use the Left and Right arrow keys to cycle through each data point on the chart. Use the Up and Down arrow keys to cycle through the percentile lines. Select the pin icon in the upper-right corner of any one of the charts to pin the selected chart to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to Container insights and loads the correct scope and view.
-Container insights also supports Azure Monitor [metrics explorer](../essentials/metrics-getting-started.md), where you can create your own plot charts, correlate and investigate trends, and pin to dashboards. From metrics explorer, you also can use the criteria that you set to visualize your metrics as the basis of a [metric-based alert rule](../alerts/alerts-metric.md).
+Container insights also supports Azure Monitor [Metrics Explorer](../essentials/metrics-getting-started.md), where you can create your own plot charts, correlate and investigate trends, and pin to dashboards. From Metrics Explorer, you also can use the criteria that you set to visualize your metrics as the basis of a [metric-based alert rule](../alerts/alerts-metric.md).
-## View container metrics in metrics explorer
+## View container metrics in Metrics Explorer
-In metrics explorer, you can view aggregated node and pod utilization metrics from Container insights. The following table summarizes the details to help you understand how to use the metric charts to visualize container metrics.
+In Metrics Explorer, you can view aggregated node and pod utilization metrics from Container insights. The following table summarizes the details to help you understand how to use the metric charts to visualize container metrics.
|Namespace | Metric | Description | |-|--|-| | insights.container/nodes | |
-| | cpuUsageMillicores | Aggregated measurement of CPU utilization across the cluster. It is a CPU core split into 1000 units (milli = 1000). Used to determine the usage of cores in a container where many applications might be using one core.|
+| | cpuUsageMillicores | Aggregated measurement of CPU utilization across the cluster. It's a CPU core split into 1,000 units (milli = 1000). Used to determine the usage of cores in a container where many applications might be using one core. (See the conversion sketch after this table.)|
| | cpuUsagePercentage | Aggregated average CPU utilization measured in percentage across the cluster.| | | memoryRssBytes | Container RSS memory used in bytes.| | | memoryRssPercentage | Container RSS memory used in percent.|
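As a hypothetical illustration of the millicore unit used by `cpuUsageMillicores` (the function name and node size are made up for the example):

```python
# Illustrative only: converting a millicore reading into a percentage of the
# node's total CPU capacity. 1 CPU core = 1,000 millicores.
def millicores_to_percentage(usage_millicores: float, node_cores: int) -> float:
    return usage_millicores / (node_cores * 1000) * 100

print(millicores_to_percentage(250, 4))  # 6.25 (% of a 4-core node)
```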
You can [split](../essentials/metrics-charts.md#apply-splitting) a metric to vie
When you switch to the **Nodes**, **Controllers**, and **Containers** tabs, a property pane automatically displays on the right side of the page. It shows the properties of the item selected, which includes the labels you defined to organize Kubernetes objects. When a Linux node is selected, the **Local Disk Capacity** section also shows the available disk space and the percentage used for each disk presented to the node. Select the **>>** link in the pane to view or hide the pane.
-As you expand the objects in the hierarchy, the properties pane updates based on the object selected. From the pane, you also can view Kubernetes container logs (stdout/stderror), events, and pod metrics by selecting the **View live data (preview)** link at the top of the pane. For more information about the configuration required to grant and control access to view this data, see [Setup the Live Data (preview)](container-insights-livedata-setup.md). While you review cluster resources, you can see this data from the container in real-time. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real time](container-insights-livedata-overview.md). To view Kubernetes log data stored in your workspace based on pre-defined log searches, select **View container logs** from the **View in analytics** drop-down list. For additional information about this topic, see [How to query logs from Container insights](container-insights-log-query.md).
+As you expand the objects in the hierarchy, the properties pane updates based on the object selected. From the pane, you also can view Kubernetes container logs (stdout/stderror), events, and pod metrics by selecting the **View live data (preview)** link at the top of the pane. For more information about the configuration required to grant and control access to view this data, see [Set up the Live Data (preview)](container-insights-livedata-setup.md).
-Use the **+ Add Filter** option at the top of the page to filter the results for the view by **Service**, **Node**, **Namespace**, or **Node Pool**. After you select the filter scope, select one of the values shown in the **Select value(s)** field. After the filter is configured, it's applied globally while viewing any perspective of the AKS cluster. The formula only supports the equal sign. You can add additional filters on top of the first one to further narrow your results. For example, if you specify a filter by **Node**, you can only select **Service** or **Namespace** for the second filter.
+While you review cluster resources, you can see this data from the container in real time. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real time](container-insights-livedata-overview.md).
+
+To view Kubernetes log data stored in your workspace based on predefined log searches, select **View container logs** from the **View in analytics** dropdown list. For more information, see [How to query logs from Container insights](container-insights-log-query.md).
+
+Use the **+ Add Filter** option at the top of the page to filter the results for the view by **Service**, **Node**, **Namespace**, or **Node Pool**. After you select the filter scope, select one of the values shown in the **Select value(s)** field. After the filter is configured, it's applied globally while viewing any perspective of the AKS cluster. The formula only supports the equal sign. You can add more filters on top of the first one to further narrow your results. For example, if you specify a filter by **Node**, you can only select **Service** or **Namespace** for the second filter.
Specifying a filter in one tab continues to be applied when you select another. It's deleted after you select the **x** symbol next to the specified filter. Switch to the **Nodes** tab and the row hierarchy follows the Kubernetes object model, which starts with a node in your cluster. Expand the node to view one or more pods running on the node. If more than one container is grouped to a pod, they're displayed as the last row in the hierarchy. You also can view how many non-pod-related workloads are running on the host if the host has processor or memory pressure.
-![Example of the Kubernetes Node hierarchy in the performance view](./media/container-insights-analyze/containers-nodes-view.png)
+![Screenshot that shows an example of the Kubernetes Node hierarchy in the performance view.](./media/container-insights-analyze/containers-nodes-view.png)
-Windows Server containers that run the Windows Server 2019 OS are shown after all of the Linux-based nodes in the list. When you expand a Windows Server node, you can view one or more pods and containers that run on the node. After a node is selected, the properties pane shows version information.
+Windows Server containers that run the Windows Server 2019 OS are shown after all the Linux-based nodes in the list. When you expand a Windows Server node, you can view one or more pods and containers that run on the node. After a node is selected, the properties pane shows version information.
-![Example Node hierarchy with Windows Server nodes listed](./media/container-insights-analyze/nodes-view-windows.png)
+![Screenshot that shows an example Node hierarchy with Windows Server nodes listed.](./media/container-insights-analyze/nodes-view-windows.png)
Azure Container Instances virtual nodes that run the Linux OS are shown after the last AKS cluster node in the list. When you expand a Container Instances virtual node, you can view one or more Container Instances pods and containers that run on the node. Metrics aren't collected and reported for nodes, only for pods.
-![Example Node hierarchy with Container Instances listed](./media/container-insights-analyze/nodes-view-aci.png)
+![Screenshot that shows an example Node hierarchy with Container Instances listed.](./media/container-insights-analyze/nodes-view-aci.png)
From an expanded node, you can drill down from the pod or container that runs on the node to the controller to view performance data filtered for that controller. Select the value under the **Controller** column for the specific node.
-![Screenshot shows the drill-down from node to controller in the performance view](./media/container-insights-analyze/drill-down-node-controller.png)
-
-Select controllers or containers at the top of the page to review the status and resource utilization for those objects. To review memory utilization, in the **Metric** drop-down list, select **Memory RSS** or **Memory working set**. **Memory RSS** is supported only for Kubernetes version 1.8 and later. Otherwise, you view values for **Min&nbsp;%** as *NaN&nbsp;%*, which is a numeric data type value that represents an undefined or unrepresentable value.
+![Screenshot that shows the drill-down from node to controller in the performance view.](./media/container-insights-analyze/drill-down-node-controller.png)
-![Container nodes performance view](./media/container-insights-analyze/containers-node-metric-dropdown.png)
+Select controllers or containers at the top of the page to review the status and resource utilization for those objects. To review memory utilization, in the **Metric** dropdown list, select **Memory RSS** or **Memory working set**. **Memory RSS** is supported only for Kubernetes version 1.8 and later. Otherwise, you view values for **Min&nbsp;%** as *NaN&nbsp;%*, which is a numeric data type value that represents an undefined or unrepresentable value.
-**Memory working set** shows both the resident memory and virtual memory (cache) included and is a total of what the application is using. **Memory RSS** shows only main memory (which is nothing but the resident memory in other words). This metric shows the actual capacity of available memory. What is the difference between resident memory and virtual memory?
+![Screenshot that shows a Container nodes performance view.](./media/container-insights-analyze/containers-node-metric-dropdown.png)
-- Resident memory or main memory, is the actual amount of machine memory available to the nodes of the cluster.
+**Memory working set** shows both the resident memory and virtual memory (cache) included and is a total of what the application is using. **Memory RSS** shows only main memory, which is nothing but the resident memory. This metric shows the actual capacity of available memory. What's the difference between resident memory and virtual memory?
-- Virtual memory is reserved hard disk space (cache) used by the operating system to swap data from memory to disk when under memory pressure, and then fetch it back to memory when needed.
+- **Resident memory**, or main memory, is the actual amount of machine memory available to the nodes of the cluster.
+- **Virtual memory** is reserved hard disk space (cache) used by the operating system to swap data from memory to disk when under memory pressure, and then fetch it back to memory when needed.
By default, performance data is based on the last six hours, but you can change the window by using the **TimeRange** option at the upper left. You also can filter the results within the time range by selecting **Min**, **Avg**, **50th**, **90th**, **95th**, and **Max** in the percentile selector.
-![Percentile selection for data filtering](./media/container-insights-analyze/containers-metric-percentile-filter.png)
+![Screenshot that shows a percentile selection for data filtering.](./media/container-insights-analyze/containers-metric-percentile-filter.png)
When you hover over the bar graph under the **Trend** column, each bar shows either CPU or memory usage, depending on which metric is selected, within a sample period of 15 minutes. After you select the trend chart through a keyboard, use the Alt+Page up key or Alt+Page down key to cycle through each bar individually. You get the same details that you would if you hovered over the bar.
-![Trend bar chart hover-over example](./media/container-insights-analyze/containers-metric-trend-bar-01.png)
+![Screenshot that shows a Trend bar chart hover-over example.](./media/container-insights-analyze/containers-metric-trend-bar-01.png)
In the next example, for the first node in the list, *aks-nodepool1-*, the value for **Containers** is 9. This value is a rollup of the total number of containers deployed.
-![Rollup of containers-per-node example](./media/container-insights-analyze/containers-nodes-containerstotal.png)
+![Screenshot that shows a rollup of containers-per-node example.](./media/container-insights-analyze/containers-nodes-containerstotal.png)
This information can help you quickly identify whether you have a proper balance of containers between nodes in your cluster.
The information that's presented when you view the **Nodes** tab is described in
| Controller | Only for containers and pods. It shows which controller it resides in. Not all pods are in a controller, so some might display **N/A**. | | Trend Min&nbsp;%, Avg&nbsp;%, 50th&nbsp;%, 90th&nbsp;%, 95th&nbsp;%, Max&nbsp;% | Bar graph trend represents the average percentile metric percentage of the controller. |
-You may notice a workload after expanding a node named **Other process**. It represents non-containerized processes that run on your node, and includes:
-
-* Self-managed or managed Kubernetes non-containerized processes
-
-* Container run-time processes
+You might notice a workload after expanding a node named **Other process**. It represents non-containerized processes that run on your node, and includes:
-* Kubelet
+* Self-managed or managed Kubernetes non-containerized processes.
+* Container run-time processes.
+* Kubelet.
+* System processes running on your node.
+* Other non-Kubernetes workloads running on node hardware or a VM.
-* System processes running on your node
-
-* Other non-Kubernetes workloads running on node hardware or VM
-
-It is calculated by: *Total usage from CAdvisor* - *Usage from containerized process*.
+It's calculated by *Total usage from CAdvisor* - *Usage from containerized process*.
In the selector, select **Controllers**.
-![Select Controllers view](./media/container-insights-analyze/containers-controllers-tab.png)
+![Screenshot that shows selecting Controllers.](./media/container-insights-analyze/containers-controllers-tab.png)
Here you can view the performance health of your controllers and Container Instances virtual node controllers or virtual node pods not connected to a controller.
-![\<Name> controllers performance view](./media/container-insights-analyze/containers-controllers-view.png)
+![Screenshot that shows a \<Name> controllers performance view.](./media/container-insights-analyze/containers-controllers-view.png)
The row hierarchy starts with a controller. When you expand a controller, you view one or more pods. Expand a pod, and the last row displays the container grouped to the pod. From an expanded controller, you can drill down to the node it's running on to view performance data filtered for that node. Container Instances pods not connected to a controller are listed last in the list.
-![Example Controllers hierarchy with Container Instances pods listed](./media/container-insights-analyze/controllers-view-aci.png)
+![Screenshot that shows an example Controllers hierarchy with Container Instances pods listed.](./media/container-insights-analyze/controllers-view-aci.png)
Select the value under the **Node** column for the specific controller.
-![Example drill-down from controller to node in the performance view](./media/container-insights-analyze/drill-down-controller-node.png)
+![Screenshot that shows an example drill-down from controller to node in the performance view.](./media/container-insights-analyze/drill-down-controller-node.png)
The information that's displayed when you view controllers is described in the following table. | Column | Description | |--|-| | Name | The name of the controller.|
-| Status | The rollup status of the containers after it's finished running with status such as *OK*, *Terminated*, *Failed*, *Stopped*, or *Paused*. If the container is running but the status either wasn't properly displayed or wasn't picked up by the agent and hasn't responded for more than 30 minutes, the status is *Unknown*. Additional details of the status icon are provided in the following table.|
+| Status | The rollup status of the containers after it's finished running with status such as **OK**, **Terminated**, **Failed**, **Stopped**, or **Paused**. If the container is running but the status either wasn't properly displayed or wasn't picked up by the agent and hasn't responded for more than 30 minutes, the status is **Unknown**. More details of the status icon are provided in the following table.|
| Min&nbsp;%, Avg&nbsp;%, 50th&nbsp;%, 90th&nbsp;%, 95th&nbsp;%, Max&nbsp;%| Rollup average of the average percentage of each entity for the selected metric and percentile. | | Min, Avg, 50th, 90th, 95th, Max | Rollup of the average CPU millicore or memory performance of the container for the selected percentile. The average value is measured from the CPU/Memory limit set for a pod. | | Containers | Total number of containers for the controller or pod. |
The icons in the status field indicate the online status of the containers.
| Icon | Status | |--|-|
-| ![Ready running status icon](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
-| ![Waiting or Paused status icon](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
-| ![Last reported running status icon](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded for more than 30 minutes|
-| ![Successful status icon](./media/container-insights-analyze/containers-green-icon.png) | Successfully stopped or failed to stop|
+| ![Ready running status icon.](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
+| ![Waiting or Paused status icon.](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
+| ![Last reported running status icon.](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded for more than 30 minutes|
+| ![Successful status icon.](./media/container-insights-analyze/containers-green-icon.png) | Successfully stopped or failed to stop|
-The status icon displays a count based on what the pod provides. It shows the worst two states, and when you hover over the status, it displays a rollup status from all pods in the container. If there isn't a ready state, the status value displays **(0)**.
+The status icon displays a count based on what the pod provides. It shows the worst two states. When you hover over the status, it displays a rollup status from all pods in the container. If there isn't a ready state, the status value displays **(0)**.
In the selector, select **Containers**.
-![Select Containers view](./media/container-insights-analyze/containers-containers-tab.png)
+![Screenshot that shows selecting Containers.](./media/container-insights-analyze/containers-containers-tab.png)
-Here you can view the performance health of your Azure Kubernetes and Azure Container Instances containers.
+Here you can view the performance health of your AKS and Container Instances containers.
-![\<Name> containers performance view](./media/container-insights-analyze/containers-containers-view.png)
+![Screenshot that shows a \<Name> containers performance view.](./media/container-insights-analyze/containers-containers-view.png)
From a container, you can drill down to a pod or node to view performance data filtered for that object. Select the value under the **Pod** or **Node** column for the specific container.
-![Example drill-down from node to containers in the performance view](./media/container-insights-analyze/drill-down-controller-node.png)
+![Screenshot that shows an example drill-down from node to containers in the performance view.](./media/container-insights-analyze/drill-down-controller-node.png)
The information that's displayed when you view containers is described in the following table. | Column | Description | |--|-| | Name | The name of the container.|
-| Status | Status of the containers, if any. Additional details of the status icon are provided in the next table.|
+| Status | Status of the containers, if any. More details of the status icon are provided in the next table.|
| Min&nbsp;%, Avg&nbsp;%, 50th&nbsp;%, 90th&nbsp;%, 95th&nbsp;%, Max&nbsp;% | The rollup of the average percentage of each entity for the selected metric and percentile. | | Min, Avg, 50th, 90th, 95th, Max | The rollup of the average CPU millicore or memory performance of the container for the selected percentile. The average value is measured from the CPU/Memory limit set for a pod. | | Pod | Container where the pod resides.|
The icons in the status field indicate the online statuses of pods, as described
| Icon | Status | |--|-|
-| ![Ready running status icon](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
-| ![Waiting or Paused status icon](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
-| ![Last reported running status icon](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded in more than 30 minutes|
-| ![Terminated status icon](./media/container-insights-analyze/containers-terminated-icon.png) | Successfully stopped or failed to stop|
-| ![Failed status icon](./media/container-insights-analyze/containers-failed-icon.png) | Failed state |
+| ![Ready running status icon.](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
+| ![Waiting or Paused status icon.](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
+| ![Last reported running status icon.](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded in more than 30 minutes|
+| ![Terminated status icon.](./media/container-insights-analyze/containers-terminated-icon.png) | Successfully stopped or failed to stop|
+| ![Failed status icon.](./media/container-insights-analyze/containers-failed-icon.png) | Failed state |
## Monitor and visualize network configurations
-Azure Network Policy Manager includes informative Prometheus metrics that allow you to monitor and better understand your network configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. For details, see [Monitor and Visualize Network Configurations with Azure NPM](../../virtual-network/kubernetes-network-policies.md#monitor-and-visualize-network-configurations-with-azure-npm).
+Azure Network Policy Manager includes informative Prometheus metrics that you can use to monitor and better understand your network configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. For more information, see [Monitor and visualize network configurations with Azure NPM](../../virtual-network/kubernetes-network-policies.md#monitor-and-visualize-network-configurations-with-azure-npm).
## Workbooks
-Workbooks combine text, log queries, metrics, and parameters into rich interactive reports that allow you to analyze cluster performance. See [Workbooks in Container insights](container-insights-reports.md) for a description of the workbooks available for Container insights.
-
+Workbooks combine text, log queries, metrics, and parameters into rich interactive reports that you can use to analyze cluster performance. For a description of the workbooks available for Container insights, see [Workbooks in Container insights](container-insights-reports.md).
## Next steps -- Review [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.--- View [log query examples](container-insights-log-query.md) to see predefined queries and examples to evaluate or customize to alert, visualize, or analyze your clusters.--- View [monitor cluster health](./container-insights-overview.md) to learn about viewing the health status your Kubernetes cluster.
+- See [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.
+- See [Log query examples](container-insights-log-query.md) to see predefined queries and examples to evaluate or customize to alert, visualize, or analyze your clusters.
+- See [Monitor cluster health](./container-insights-overview.md) to learn about viewing the health status of your Kubernetes cluster.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
The following are examples of what changes you can apply to your cluster by modi
ttlSecondsAfterFinished: 100 ```
-After applying one or more of these changes to your ConfigMaps, see [Applying updated ConfigMap](container-insights-prometheus-integration.md#applying-updated-configmap) to apply it to your cluster.
+After applying one or more of these changes to your ConfigMaps, see [Apply updated ConfigMap](container-insights-prometheus-integration.md#apply-updated-configmap) to apply it to your cluster.
### Prometheus metrics scraping
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Title: How to query logs from Container insights
-description: Container insights collects metrics and log data and this article describes the records and includes sample queries.
+ Title: Query logs from Container insights
+description: Container insights collects metrics and log data, and this article describes the records and includes sample queries.
Last updated 07/19/2021
-# How to query logs from Container insights
+# Query logs from Container insights
-Container insights collects performance metrics, inventory data, and health state information from container hosts and containers. The data is collected every three minutes and forwarded to the Log Analytics workspace in Azure Monitor where it's available for [log queries](../logs/log-query-overview.md) using [Log Analytics](../logs/log-analytics-overview.md) in Azure Monitor. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting. Azure Monitor Logs can help you look for trends, diagnose bottlenecks, forecast, or correlate data that can help you determine whether the current cluster configuration is performing optimally.
+Container insights collects performance metrics, inventory data, and health state information from container hosts and containers. The data is collected every three minutes and forwarded to the Log Analytics workspace in Azure Monitor where it's available for [log queries](../logs/log-query-overview.md) using [Log Analytics](../logs/log-analytics-overview.md) in Azure Monitor.
-See [Using queries in Azure Monitor Log Analytics](../logs/queries.md) for information on using these queries and [Log Analytics tutorial](../logs/log-analytics-tutorial.md) for a complete tutorial on using Log Analytics to run queries and work with their results.
+You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting. Azure Monitor Logs can help you look for trends, diagnose bottlenecks, forecast, or correlate data that can help you determine whether the current cluster configuration is performing optimally.
+
+For information on using these queries, see [Using queries in Azure Monitor Log Analytics](../logs/queries.md). For a complete tutorial on using Log Analytics to run queries and work with their results, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md).
## Open Log Analytics
-There are multiple options for starting Log Analytics, each starting with a different [scope](../logs/scope.md). For access to all data in the workspace, select **Logs** from the **Monitor** menu. To limit the data to a single Kubernetes cluster, select **Logs** from that cluster's menu.
+There are multiple options for starting Log Analytics. Each option starts with a different [scope](../logs/scope.md). For access to all data in the workspace, on the **Monitoring** menu, select **Logs**. To limit the data to a single Kubernetes cluster, select **Logs** from that cluster's menu.
+ ## Existing log queries
-You don't necessarily need to understand how to write a log query to use Log Analytics. There are multiple prebuilt queries that you can select and either run without modification or use as a start to a custom query. Click **Queries** at the top of the Log Analytics screen and view queries with a **Resource type** of **Kubernetes Services**.
+You don't necessarily need to understand how to write a log query to use Log Analytics. You can select from multiple prebuilt queries. You can either run the queries without modification or use them as a start to a custom query. Select **Queries** at the top of the Log Analytics screen, and view queries with a **Resource type** of **Kubernetes Services**.
+ ## Container tables
-See [Azure Monitor table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services) for a list of tables and their detailed descriptions used by Container insights. All of these tables are available for log queries.
+For a list of tables and their detailed descriptions used by Container insights, see the [Azure Monitor table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services). All these tables are available for log queries.
## Example log queries
-It's often useful to build queries that start with an example or two and then modify them to fit your requirements. To help build more advanced queries, you can experiment with the following sample queries:
+
+It's often useful to build queries that start with an example or two and then modify them to fit your requirements. To help build more advanced queries, you can experiment with the following sample queries.
### List all of a container's lifecycle information
Perf
| summarize AvgUsedRssMemoryBytes = avg(CounterValue) by bin(TimeGenerated, 30m), InstanceName ```
-### Requests Per Minute with Custom Metrics
+### Requests per minute with custom metrics
```kusto InsightsMetrics
InsightsMetrics
| project RequestsPerMinute = Val - prev(Val), TimeGenerated | render barchart ```+ ### Pods by name and namespace ```kusto
on ContainerID
``` ### Pod scale-out (HPA)
-Returns the number of scaled out replicas in each deployment. Calculates the scale-out percentage with the maximum number of replicas configured in HPA.
+This query returns the number of scaled-out replicas in each deployment. It calculates the scale-out percentage with the maximum number of replicas configured in HPA.
```kusto let _minthreshold = 70; // minimum threshold goes here if you want to setup as an alert
KubePodInventory
on deployment_hpa ```
-### Nodepool scale-outs
-Returns the number of active nodes in each node pool. Calculates the number of available active nodes and the max node configuration in the auto-scaler settings to determine the scale-out percentage. See commented lines in query to use it for a **number of results** alert rule.
+### Nodepool scale-outs
+
+This query returns the number of active nodes in each node pool. It calculates the number of available active nodes and the max node configuration in the autoscaler settings to determine the scale-out percentage. See commented lines in the query to use it for a **number of results** alert rule.
```kusto let nodepoolMaxnodeCount = 10; // the maximum number of nodes in your auto scale setting goes here.
KubeNodeInventory
| extend nodepoolType = todynamic(Labels) //Parse the labels to get the list of node pool types | extend nodepoolName = todynamic(nodepoolType[0].agentpool) // parse the label to get the nodepool name or set the specific nodepool name (like nodepoolName = 'agentpool') | summarize nodeCount = count(Computer) by ClusterName, tostring(nodepoolName), TimeGenerated
-//(Uncomment the below two lines to set this as an log search alert)
+//(Uncomment the below two lines to set this as a log search alert)
//| extend scaledpercent = iff(((nodeCount * 100 / nodepoolMaxnodeCount) >= _minthreshold and (nodeCount * 100 / nodepoolMaxnodeCount) < _maxthreshold), "warn", "normal") //| where scaledpercent == 'warn' | summarize arg_max(TimeGenerated, *) by nodeCount, ClusterName, tostring(nodepoolName)
KubeNodeInventory
``` ### System containers (replicaset) availability
-Returns the system containers (replicasets) and report the unavailable percentage. See commented lines in query to use it for a **number of results** alert rule.
+
+This query returns the system containers (replicasets) and reports the unavailable percentage. See commented lines in the query to use it for a **number of results** alert rule.
```kusto let startDateTime = 5m; // the minimum time interval goes here
KubePodInventory
``` ### System containers (daemonsets) availability
-Returns the system containers (daemonsets) and report the unavailable percentage. See commented lines in query to use it for a **number of results** alert rule.
+
+This query returns the system containers (daemonsets) and reports the unavailable percentage. See commented lines in the query to use it for a **number of results** alert rule.
```kusto let startDateTime = 5m; // the minimum time interval goes here
KubePodInventory
``` ## Resource logs
-Resource logs for AKS are stored in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table You can distinguish different logs with the **Category** column. See [AKS reference resource logs](../../aks/monitor-aks-reference.md) for a description of each category. The following examples require a diagnostic extension to send resource logs for an AKS cluster to a Log Analytics workspace. See [Configure monitoring](../../aks/monitor-aks.md#configure-monitoring) for details.
+
+Resource logs for AKS are stored in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. You can distinguish different logs with the **Category** column. For a description of each category, see [AKS reference resource logs](../../aks/monitor-aks-reference.md). The following examples require a diagnostic extension to send resource logs for an AKS cluster to a Log Analytics workspace. For more information, see [Configure monitoring](../../aks/monitor-aks.md#configure-monitoring).
### API server logs
InsightsMetrics
```
-To view Prometheus metrics scraped by Azure Monitor filtered by Namespace, specify "prometheus". Here's a sample query to view Prometheus metrics from the `default` kubernetes namespace.
+To view Prometheus metrics scraped by Azure Monitor and filtered by namespace, specify "prometheus". Here's a sample query to view Prometheus metrics from the `default` Kubernetes namespace.
``` InsightsMetrics
InsightsMetrics
| where Name contains "some_prometheus_metric" ```
-### Query config or scraping errors
+### Query configuration or scraping errors
To investigate any configuration or scraping errors, the following example query returns informational events from the `KubeMonAgentEvents` table.
KubeMonAgentEvents | where Level != "Info"
The output shows results similar to the following example:
-![Log query results of informational events from agent](./media/container-insights-log-query/log-query-example-kubeagent-events.png)
+![Screenshot that shows log query results of informational events from an agent.](./media/container-insights-log-query/log-query-example-kubeagent-events.png)
## Next steps
-Container insights does not include a predefined set of alerts. Review the [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.
+Container insights doesn't include a predefined set of alerts. To learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures, see [Create performance alerts with Container insights](./container-insights-log-alerts.md).
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights | Microsoft Docs
-description: This article describes Container insights that monitors AKS Container Insights solution and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure.
+description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure.
Last updated 09/08/2020
Container insights is a feature designed to monitor the performance of container workloads deployed to: -- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md)-- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine)-- [Azure Container Instances](../../container-instances/container-instances-overview.md)-- Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises-- [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md)
+- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).
+- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine).
+- [Azure Container Instances](../../container-instances/container-instances-overview.md).
+- Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises.
+- [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
-Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Moby and any CRI compatible runtime such as CRI-O and ContainerD. Docker is no longer supported as a container runtime as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].
+Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Moby and any CRI-compatible runtime such as CRI-O and ContainerD. Docker is no longer supported as a container runtime as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].
>[!NOTE]
-> Container insights support for Windows Server 2022 operating system in public preview.
+> Container insights support for Windows Server 2022 operating system is in public preview.
Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications.
-Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md), and log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
+Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md). Log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
## Features of Container insights
-Container insights delivers a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads.
+Container insights delivers a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads. You can:
- Identify resource bottlenecks by identifying AKS containers running on the node and their average processor and memory utilization. - Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances. - View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod. - Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod. - Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.-- Configure alerts to proactively notify you or record it when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.-- Integrate with [Prometheus](https://prometheus.io/docs/introduction/overview/) to view application and workload metrics it collects from nodes and Kubernetes using [queries](container-insights-log-query.md) to create custom alerts, dashboards, and perform detailed analysis.
+- Configure alerts to proactively notify you or record when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
+- Integrate with [Prometheus](https://prometheus.io/docs/introduction/overview/) to view application and workload metrics it collects from nodes and Kubernetes by using [queries](container-insights-log-query.md) to create custom alerts and dashboards and perform detailed analysis.
- Monitor container workloads [deployed to AKS Engine](https://github.com/Azure/aks-engine) on-premises and [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview). - Monitor container workloads [deployed to Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). --
-Check out the following video providing an intermediate level deep dive to help you learn about monitoring your AKS cluster with Container insights. Note that the video refers to *Azure Monitor for Containers* which is the previous name for *Container insights*.
+The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
+## Access Container insights
-## How to access Container insights
-Access Container insights in the Azure portal from Azure Monitor or directly from the selected AKS cluster. The Azure Monitor menu gives you the global perspective of all the containers deployed amd which are monitored, allowing you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
-
-![Overview of methods to access Container insights](./media/container-insights-overview/azmon-containers-experience.png)
+You can access Container insights in the Azure portal from Azure Monitor or directly from the selected AKS cluster. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
+![Screenshot that shows an overview of methods to access Container insights.](./media/container-insights-overview/azmon-containers-experience.png)
## Differences between Windows and Linux clusters
-The main differences in monitoring a Windows Server cluster compared to a Linux cluster include the following:
-- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows node and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
+The main differences in monitoring a Windows Server cluster compared to a Linux cluster include:
+
+- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
- Disk storage capacity information isn't available for Windows nodes.
- Only pod environments are monitored, not Docker environments.
- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.

## Next steps
-To begin monitoring your Kubernetes cluster, review [How to enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
+To begin monitoring your Kubernetes cluster, review [Enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
<!-- LINKS - external --> [aks-release-notes]: https://github.com/Azure/AKS/releases
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
Title: Configure Container insights Prometheus Integration | Microsoft Docs
+ Title: Configure Container insights Prometheus integration | Microsoft Docs
description: This article describes how you can configure the Container insights agent to scrape metrics from Prometheus with your Kubernetes cluster. Last updated 04/22/2020
# Configure scraping of Prometheus metrics with Container insights
-[Prometheus](https://prometheus.io/) is a popular open source metric monitoring solution and is a part of the [Cloud Native Compute Foundation](https://www.cncf.io/). Container insights provides a seamless onboarding experience to collect Prometheus metrics. Typically, to use Prometheus, you need to set up and manage a Prometheus server with a store. By integrating with Azure Monitor, a Prometheus server is not required. You just need to expose the Prometheus metrics endpoint through your exporters or pods (application), and the containerized agent for Container insights can scrape the metrics for you.
+[Prometheus](https://prometheus.io/) is a popular open-source metric monitoring solution and is a part of the [Cloud Native Compute Foundation](https://www.cncf.io/). Container insights provides a seamless onboarding experience to collect Prometheus metrics.
-![Container monitoring architecture for Prometheus](./media/container-insights-prometheus-integration/monitoring-kubernetes-architecture.png)
+Typically, to use Prometheus, you need to set up and manage a Prometheus server with a store. If you integrate with Azure Monitor, a Prometheus server isn't required. You only need to expose the Prometheus metrics endpoint through your exporters or pods (application). Then the containerized agent for Container insights can scrape the metrics for you.
+
+![Diagram that shows container monitoring architecture for Prometheus.](./media/container-insights-prometheus-integration/monitoring-kubernetes-architecture.png)
>[!NOTE]
->The minimum agent version supported for scraping Prometheus metrics is ciprod07092019 or later, and the agent version supported for writing configuration and agent errors in the `KubeMonAgentEvents` table is ciprod10112019. For Azure Red Hat OpenShift and Red Hat OpenShift v4, agent version ciprod04162020 or higher.
+>The minimum agent version supported for scraping Prometheus metrics is ciprod07092019. The agent version supported for writing configuration and agent errors in the `KubeMonAgentEvents` table is ciprod10112019. For Azure Red Hat OpenShift and Red Hat OpenShift v4, the agent version is ciprod04162020 or later.
>
->For more information about the agent versions and what's included in each release, see [agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
->To verify your agent version, click on **Insights** Tab of the resource, from the **Nodes** tab select a node, and in the properties pane note value of the **Agent Image Tag** property.
+>For more information about the agent versions and what's included in each release, see [Agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
+>To verify your agent version, select the **Insights** tab of the resource. From the **Nodes** tab, select a node. In the properties pane, note the value of the **Agent Image Tag** property.
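If you prefer to check from the command line, the following sketch shows one way to read the agent image from the cluster itself. The `kube-system` namespace and the `omsagent` pod naming are the agent defaults; the exact pod name is a placeholder that you replace with one returned by the first command.

```bash
# List the Container insights agent pods.
kubectl get pods -n kube-system | grep omsagent

# Inspect one of the returned pods and look for its image tag (pod name below is a placeholder).
kubectl describe pod omsagent-xxxxx -n kube-system | grep -i image
```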
Scraping of Prometheus metrics is supported with Kubernetes clusters hosted on:
-- Azure Kubernetes Service (AKS)
-- Azure Stack or on-premises
-- Azure Arc enabled Kubernetes
- - Azure Red Hat OpenShift and Red Hat OpenShift version 4.x through cluster connect to Azure Arc
+- Azure Kubernetes Service (AKS).
+- Azure Stack or on-premises.
+- Azure Arc enabled Kubernetes.
+- Azure Red Hat OpenShift and Red Hat OpenShift version 4.x through cluster connect to Azure Arc.
### Prometheus scraping settings

Active scraping of metrics from Prometheus is performed from one of two perspectives:
-* Cluster-wide - HTTP URL and discover targets from listed endpoints of a service. For example, k8s services such as kube-dns and kube-state-metrics, and pod annotations specific to an application. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus data_collection_settings.cluster]*.
-* Node-wide - HTTP URL and discover targets from listed endpoints of a service. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus_data_collection_settings.node]*.
+* **Cluster-wide**: HTTP URL and discover targets from listed endpoints of a service, for example, Kubernetes services such as kube-dns and kube-state-metrics, and pod annotations specific to an application. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus data_collection_settings.cluster]*.
+* **Node-wide**: HTTP URL and discover targets from listed endpoints of a service. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus_data_collection_settings.node]*.
| Endpoint | Scope | Example |
|-|-|-|
-| Pod annotation | Cluster-wide | annotations: <br>`prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
+| Pod annotation | Cluster-wide | Annotations: <br>`prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`https://metrics-server.kube-system.svc.cluster.local/metrics` |
-| url/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
+| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
-When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address and then the resolved service is scraped.
+When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
|Scope | Key | Data type | Value | Description |
||--|--|-|-|
| Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
-| | `urls` | String | Comma-separated array | HTTP endpoint (Either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of node IP address. Must be all uppercase.) |
-| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`.|
-| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
-| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod. `monitor_kubernetes_pods` must be set to `true`. |
-| | `prometheus.io/scheme` | String | http or https | Defaults to scrapping over HTTP. If necessary, set to `https`. |
-| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path on which to fetch metrics from. If the metrics path is not `/metrics`, define it with this annotation. |
-| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If port is not set, it will default to 9102. |
+| | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`|
+| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
+| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
+| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
+| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
+| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. |
| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
-| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (Either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of node IP address. Must be all uppercase.) |
-| Node-wide or Cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, h. |
-| Node-wide or Cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
+| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, and h. |
+| Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
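As a rough illustration of how these keys fit together, the following sketch shows a cluster-wide section that shortens the collection interval and filters metrics with `fieldpass` and `fielddrop`. The metric names are placeholders, and the exact layout should be checked against the downloadable template ConfigMap described later in this article.

```
[prometheus_data_collection_settings.cluster]
    # Collect every 30 seconds instead of the 1-minute default.
    interval = "30s"
    # Only keep these metrics (placeholder names)...
    fieldpass = ["metric_to_keep_1", "metric_to_keep_2"]
    # ...and explicitly drop this one.
    fielddrop = ["metric_to_drop"]
    monitor_kubernetes_pods = true
```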
-ConfigMaps is a global list and there can be only one ConfigMap applied to the agent. You cannot have another ConfigMaps overruling the collections.
+The ConfigMap is a global list, and only one ConfigMap can be applied to the agent. You can't have another ConfigMap overruling the collections.
## Configure and deploy ConfigMaps
Perform the following steps to configure your ConfigMap configuration file for t
* Azure Stack or on-premises
* Azure Red Hat OpenShift version 4.x and Red Hat OpenShift version 4.x
-1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap yaml file and save it as container-azm-ms-agentconfig.yaml.
+1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as container-azm-ms-agentconfig.yaml.
>[!NOTE]
- >This step is not required when working with Azure Red Hat OpenShift since the ConfigMap template already exists on the cluster.
+ >This step isn't required when you're working with Azure Red Hat OpenShift because the ConfigMap template already exists on the cluster.
-2. Edit the ConfigMap yaml file with your customizations to scrape Prometheus metrics.
+1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
- >[!NOTE]
- >If you are editing the ConfigMap yaml file for Azure Red Hat OpenShift, first run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
+ If you're editing the ConfigMap YAML file for Azure Red Hat OpenShift, first run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
>[!NOTE]
- >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
+ >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
>```
>metadata:
>  annotations:
>    openshift.io/reconcile-protect: "true"
>```
- - To collect of Kubernetes services cluster-wide, configure the ConfigMap file using the following example.
+ - To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
``` prometheus-data-collection-settings: |- ΓÇï
Perform the following steps to configure your ConfigMap configuration file for t
kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"] ```
- - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file using the following example.
+ - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
``` prometheus-data-collection-settings: |- ΓÇï
Perform the following steps to configure your ConfigMap configuration file for t
urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from ```
- - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following in the ConfigMap:
+ - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
``` prometheus-data-collection-settings: |- ΓÇï
Perform the following steps to configure your ConfigMap configuration file for t
``` >[!NOTE]
- >$NODE_IP is a specific Container insights parameter and can be used instead of node IP address. It must be all uppercase.
+ >$NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
- - To configure scraping of Prometheus metrics by specifying a pod annotation, perform the following steps:
+ - To configure scraping of Prometheus metrics by specifying a pod annotation:
- 1. In the ConfigMap, specify the following:
+ 1. In the ConfigMap, specify the following configuration:
``` prometheus-data-collection-settings: |- ΓÇï
Perform the following steps to configure your ConfigMap configuration file for t
monitor_kubernetes_pods = true ```
- 2. Specify the following configuration for pod annotations:
+ 1. Specify the following configuration for pod annotations:
``` - prometheus.io/scrape:"true" #Enable scraping for this pod ΓÇï
Perform the following steps to configure your ConfigMap configuration file for t
- prometheus.io/port:"8000" #If port is not 9102 use this annotationΓÇï ```
- If you want to restrict monitoring to specific namespaces for pods that have annotations, for example only include pods dedicated for production workloads, set the `monitor_kubernetes_pod` to `true` in ConfigMap, and add the namespace filter `monitor_kubernetes_pods_namespaces` specifying the namespaces to scrape from. For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`
+ If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated for production workloads, set the `monitor_kubernetes_pod` to `true` in ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
-3. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+1. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect. You must restart all omsagent pods manually. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before taking effect. You must restart all omsagent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
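One way to perform that restart, assuming the default `omsagent` pod naming in the `kube-system` namespace, is to delete the agent pods and let Kubernetes recreate them; the pod name below is a placeholder.

```bash
# Find the agent pods, then delete them so they restart with the new ConfigMap.
kubectl get pods -n kube-system | grep omsagent
kubectl delete pod omsagent-xxxxx -n kube-system
```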
-## Configure and deploy ConfigMaps - Azure Red Hat OpenShift v3
+## Configure and deploy ConfigMaps for Azure Red Hat OpenShift v3
This section includes the requirements and steps to successfully configure your ConfigMap configuration file for an Azure Red Hat OpenShift v3.x cluster.

>[!NOTE]
->For Azure Red Hat OpenShift v3.x, a template ConfigMap file is created in the *openshift-azure-logging* namespace. It is not configured to actively scrape metrics or data collection from the agent.
+>For Azure Red Hat OpenShift v3.x, a template ConfigMap file is created in the *openshift-azure-logging* namespace. It isn't configured to actively scrape metrics or data collection from the agent.
### Prerequisites
-Before you start, confirm you are a member of the Customer Cluster Admin role of your Azure Red Hat OpenShift cluster to configure the containerized agent and Prometheus scraping settings. To verify you are a member of the *osa-customer-admins* group, run the following command:
+Before you start, confirm you're a member of the Customer Cluster Admin role of your Azure Red Hat OpenShift cluster to configure the containerized agent and Prometheus scraping settings. To verify you're a member of the *osa-customer-admins* group, run the following command:
``` bash
oc get groups
```
-The output will resemble the following:
+The output will resemble the following example:
``` bash
NAME                  USERS
osa-customer-admins   <your-user-account>@<your-tenant-name>.onmicrosoft.com
```
-If you are member of *osa-customer-admins* group, you should be able to list the `container-azm-ms-agentconfig` ConfigMap using the following command:
+If you're a member of *osa-customer-admins* group, you should be able to list the `container-azm-ms-agentconfig` ConfigMap by using the following command:
``` bash
oc get configmaps container-azm-ms-agentconfig -n openshift-azure-logging
```
-The output will resemble the following:
+The output will resemble the following example:
``` bash NAME DATA AGE
container-azm-ms-agentconfig 4 56m
### Enable monitoring
-Perform the following steps to configure your ConfigMap configuration file for your Azure Red Hat OpenShift v3.x cluster.
+To configure your ConfigMap configuration file for your Azure Red Hat OpenShift v3.x cluster:
-1. Edit the ConfigMap yaml file with your customizations to scrape Prometheus metrics. The ConfigMap template already exists on the Red Hat OpenShift v3 cluster. Run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
+1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics. The ConfigMap template already exists on the Red Hat OpenShift v3 cluster. Run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
>[!NOTE]
- >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
+ >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
>```
>metadata:
>  annotations:
>    openshift.io/reconcile-protect: "true"
>```
- - To collect of Kubernetes services cluster-wide, configure the ConfigMap file using the following example.
+ - To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
``` prometheus-data-collection-settings: |- ΓÇï
Perform the following steps to configure your ConfigMap configuration file for y
kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"] ```
- - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file using the following example.
+ - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
``` prometheus-data-collection-settings: |- ΓÇï
Perform the following steps to configure your ConfigMap configuration file for y
urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from ```
- - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following in the ConfigMap:
+ - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
``` prometheus-data-collection-settings: |- ΓÇï
Perform the following steps to configure your ConfigMap configuration file for y
``` >[!NOTE]
- >$NODE_IP is a specific Container insights parameter and can be used instead of node IP address. It must be all uppercase.
+ >$NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
- - To configure scraping of Prometheus metrics by specifying a pod annotation, perform the following steps:
+ - To configure scraping of Prometheus metrics by specifying a pod annotation:
- 1. In the ConfigMap, specify the following:
+ 1. In the ConfigMap, specify the following configuration:
``` prometheus-data-collection-settings: |- ΓÇï
Perform the following steps to configure your ConfigMap configuration file for y
monitor_kubernetes_pods = true ```
- 2. Specify the following configuration for pod annotations:
+ 1. Specify the following configuration for pod annotations:
``` - prometheus.io/scrape:"true" #Enable scraping for this pod ΓÇï
Perform the following steps to configure your ConfigMap configuration file for y
- prometheus.io/port:"8000" #If port is not 9102 use this annotationΓÇï ```
- If you want to restrict monitoring to specific namespaces for pods that have annotations, for example only include pods dedicated for production workloads, set the `monitor_kubernetes_pod` to `true` in ConfigMap, and add the namespace filter `monitor_kubernetes_pods_namespaces` specifying the namespaces to scrape from. For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`
+ If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated for production workloads, set `monitor_kubernetes_pod` to `true` in ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
-2. Save your changes in the editor.
+1. Save your changes in the editor.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before taking effect. Then all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods. Not all pods restart at the same time. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
-You can view the updated ConfigMap by running the command, `oc describe configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
+You can view the updated ConfigMap by running the command `oc describe configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
-## Applying updated ConfigMap
+## Apply updated ConfigMap
-If you have already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used, and then apply using the same commands as before.
+If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used. Then apply it by using the same commands as before.
For the following Kubernetes environments:
For the following Kubernetes environments:
- Azure Stack or on-premises
- Azure Red Hat OpenShift and Red Hat OpenShift version 4.x
-run the command `kubectl apply -f <config3. map_yaml_file.yaml>`.
+run the command `kubectl apply -f <configmap_yaml_file.yaml>`.
-For an example, run the command, `Example: kubectl apply -f container-azm-ms-agentconfig.yaml` to open the file in your default editor to modify and then save it.
+For example, edit the previously used ConfigMap file (such as container-azm-ms-agentconfig.yaml) in your default editor, save it, and then run the command `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a popup message is displayed that's similar to the following and includes the result: 'configmap "container-azm-ms-agentconfig' created to indicate the configmap resource created.
+The configuration change can take a few minutes to finish before taking effect. Then all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods. Not all pods restart at the same time. When the restarts are finished, a message appears that's similar to the following and includes the result "configmap 'container-azm-ms-agentconfig' created" to indicate the configmap resource was created.
## Verify configuration
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n=kube-system`.
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n=kube-system`.
>[!NOTE]
->This command is not applicable to Azure Red Hat OpenShift v3.x cluster.
->
+>This command isn't applicable to Azure Red Hat OpenShift v3.x cluster.
+>
-If there are configuration errors from the omsagent pods, the output will show errors similar to the following:
+If there are configuration errors from the omsagent pods, the output will show errors similar to the following example:
``` ***************Start Config Processing********************
config::unsupported/missing config schema version - 'v21' , using defaults
Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes and scraping of Prometheus metrics:

-- From an agent pod logs using the same `kubectl logs` command
+- From an agent pod's logs by using the same `kubectl logs` command.
>[!NOTE]
- >This command is not applicable to Azure Red Hat OpenShift cluster.
+ >This command isn't applicable to Azure Red Hat OpenShift cluster.
> -- From Live Data (preview). Live Data (preview) logs show errors similar to the following:
+- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:
```
2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host
```

- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.
- For Azure Red Hat OpenShift v3.x and v4.x, check the omsagent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
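A minimal sketch of a Log Analytics query against that table is shown below. The column names are assumptions based on the description above, so adjust them to match the schema in your workspace.

```
KubeMonAgentEvents
| where TimeGenerated > ago(1h)
| project TimeGenerated, Level, Message, Tags
```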
-Errors prevent omsagent from parsing the file, causing it to restart and use the default configuration. After you correct the error(s) in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the yaml file and apply the updated ConfigMaps by running the command: `kubectl apply -f <configmap_yaml_file.yaml`.
+Errors prevent omsagent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
-For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command: `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
+For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
## Query Prometheus metrics data
-To view prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#query-prometheus-metrics-data) and [Query config or scraping errors](container-insights-log-query.md#query-config-or-scraping-errors).
+To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#query-prometheus-metrics-data) and [Query configuration or scraping errors](container-insights-log-query.md#query-configuration-or-scraping-errors).
## View Prometheus metrics in Grafana
-Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We have provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker) to get you started and reference to help you learn how to query additional data from your monitored clusters to visualize in custom Grafana dashboards.
+Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
## Review Prometheus data usage
-To identify the ingestion volume of each metrics size in GB per day to understand if it is high, the following query is provided.
+To identify the ingestion volume of each metric's size in GB per day and understand whether it's high, the following query is provided.
``` InsightsMetrics
InsightsMetrics
| render barchart ```
-The output will show results similar to the following:
+The output will show results similar to the following example.
-![Screenshot shows the log query results of data ingestion volume](./media/container-insights-prometheus-integration/log-query-example-usage-03.png)
+![Screenshot that shows the log query results of data ingestion volume.](./media/container-insights-prometheus-integration/log-query-example-usage-03.png)
To estimate the size of each metric in GB for a month and understand whether the volume of data ingested in the workspace is high, the following query is provided.
InsightsMetrics
| render barchart ```
-The output will show results similar to the following:
+The output will show results similar to the following example.
-![Log query results of data ingestion volume](./media/container-insights-prometheus-integration/log-query-example-usage-02.png)
+![Screenshot that shows log query results of data ingestion volume.](./media/container-insights-prometheus-integration/log-query-example-usage-02.png)
-Further information on how to analyze usage is available in [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
+For more information on how to analyze usage, see [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
## Next steps
-Learn more about configuring the agent collection settings for stdout, stderr, and environmental variables from container workloads [here](container-insights-agent-config.md).
+To learn more about configuring the agent collection settings for stdout, stderr, and environmental variables from container workloads, see [Configure agent data collection for Container insights](container-insights-agent-config.md).
azure-monitor Resource Manager Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md
param storageAccountId string
@description('The resource Id for the event hub authorization rule.') param eventHubAuthorizationRuleId string
-@description('The name of teh event hub.')
+@description('The name of the event hub.')
param eventHubName string

resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
"eventHubName": { "type": "string", "metadata": {
- "description": "The name of teh event hub."
+ "description": "The name of the event hub."
} } },
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
Each workspace contains multiple tables that are organized into separate columns
[![Diagram that shows the Azure Monitor Logs structure.](media/data-platform-logs/logs-structure.png)](media/data-platform-logs/logs-structure.png#lightbox)
+> [!WARNING]
+> Table names are used for billing purposes so they should not contain sensitive information.
+
## Cost

There's no direct cost for creating or maintaining a workspace. You're charged for the data sent to it, which is also known as data ingestion. You're charged for how long that data is stored, which is otherwise known as data retention. These costs might vary based on the data plan of each table, as described in [Log data plans (preview)](#log-data-plans-preview).
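To get a feel for which tables drive that ingestion charge, a query along the following lines against the built-in `Usage` table can help. This is a sketch; the `Quantity` column is reported in MB, and you should verify the column names against your workspace before relying on the results.

```
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedMB = sum(Quantity) by DataType
| sort by IngestedMB desc
```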
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
PARAMETERS
This cmdlet supports the common parameters: Verbose, Debug, ErrorAction, ErrorVariable, WarningAction, WarningVariable, OutBuffer, PipelineVariable, and OutVariable. For more information, see
- about_CommonParameters (https:/go.microsoft.com/fwlink/?LinkID=113216).
+ about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).
-- EXAMPLE 1 -- .\Install-VMInsights.ps1 -WorkspaceRegion eastus -WorkspaceId <WorkspaceId> -WorkspaceKey <WorkspaceKey> -SubscriptionId <SubscriptionId>
azure-netapp-files Azacsnap Cmd Ref Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-configure.md
na Previously updated : 04/21/2021 Last updated : 08/19/2022
Database section
# [SAP HANA](#tab/sap-hana)
-When adding a *SAP HANA database* to the configuration, the following values are required:
+When you add an *SAP HANA database* to the configuration, the following values are required:
- **HANA Server's Address** = The SAP HANA server hostname or IP address.
- **HANA SID** = The SAP HANA System ID.
When adding a *SAP HANA database* to the configuration, the following values are
[Azure Backup](../backup/index.yml) service provides an alternate backup tool for SAP HANA, where database and log backups are streamed into the Azure Backup Service. Some customers would like to combine the streaming backint-based backups with regular snapshot-based backups. However, backint-based
-backups block other methods of backup, such as using a files-based backup or a storage snapshot-based backup (for example, AzAcSnap). Guidance is provided on
-the Azure Backup site on how to [Run SAP HANA native client backup to local disk on a database with Azure Backup enabled](../backup/sap-hana-db-manage.md).
+backups block other methods of backup, such as using a files-based backup or a storage snapshot-based backup (for example, AzAcSnap). Guidance is provided on the Azure Backup site on how to [Run SAP HANA Studio backup on a database with Azure Backup enabled](../backup/backup-azure-sap-hana-database.md#run-sap-hana-studio-backup-on-a-database-with-azure-backup-enabled).
The process described in the Azure Backup documentation has been implemented with AzAcSnap to automatically do the following steps:
the configuration file directly.
# [Oracle](#tab/oracle)
-When adding an *Oracle database* to the configuration, the following values are required:
+When you add an *Oracle database* to the configuration, the following values are required:
- **Oracle DB Server's Address** = The database server hostname or IP address.
- **SID** = The database System ID.
When adding an *Oracle database* to the configuration, the following values are
# [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)
-When adding *HLI Storage* to a database section, the following values are required:
+When you add *HLI Storage* to a database section, the following values are required:
- **Storage User Name** = This value is the user name used to establish the SSH connection to the Storage.
- **Storage IP Address** = The address of the Storage system.
When adding *HLI Storage* to a database section, the following values are requir
# [Azure NetApp Files (with VM)](#tab/azure-netapp-files)
-When adding *ANF Storage* to a database section, the following values are required:
+When you add *ANF Storage* to a database section, the following values are required:
-- **Service Principal Authentication filename** = this is the `authfile.json` file generated in the Cloud Shell when configuring
+- **Service Principal Authentication filename** = the `authfile.json` file generated in the Cloud Shell when configuring
communication with Azure NetApp Files storage.
-- **Full ANF Storage Volume Resource ID** = the full Resource ID of the Volume being snapshot. This can be retrieved from:
+- **Full ANF Storage Volume Resource ID** = the full Resource ID of the Volume being snapshot. This string can be retrieved from:
Azure portal -> ANF -> Volume -> Settings/Properties -> Resource ID
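If you'd rather retrieve the same value from the command line, the Azure CLI can return it directly; the resource group, account, pool, and volume names below are placeholders.

```bash
# Look up the full Resource ID of the Azure NetApp Files volume (names are placeholders).
az netappfiles volume show \
    --resource-group myResourceGroup \
    --account-name myNetAppAccount \
    --pool-name myCapacityPool \
    --name myVolume \
    --query id --output tsv
```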
For **Azure Large Instance** system, this information is provided by Microsoft S
is made available in an Excel file that is provided during handover. Open a service request if you need to be provided this information again.
-The following is an example only and is the content of the file as generated by the configuration session above, update all the values accordingly.
+The following output is an example configuration file only. It shows the content of the file as generated by the configuration session above; update all the values accordingly.
```bash cat azacsnap.json
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
This page lists major changes made to AzAcSnap to provide new functionality or r
> [!IMPORTANT] > AzAcSnap 6 brings a new release model for AzAcSnap and includes fully supported GA features and Preview features in a single release.
-Since AzAcSnap v5.0 was released as GA in April-2021, there have been 8 releases of AzAcSnap across two branches. Our goal with the new release model is to align with how Azure components are released. This allows moving features from Preview to GA (without having to move an entire branch), and introduce new Preview features (without having to create a new branch). From AzAcSnap 6 we will have a single branch with fully supported GA features and Preview features (which are subject to Microsoft's Preview Ts&Cs). ItΓÇÖs important to note customers cannot accidentally use Preview features, and must enable them with the `--preview` command line option. This means the next release will be AzAcSnap 7, which could include; patches (if necessary) for GA features, current Preview features moving to GA, or new Preview features.
+Since AzAcSnap v5.0 was released as GA in April 2021, there have been 8 releases of AzAcSnap across two branches. Our goal with the new release model is to align with how Azure components are released. This allows moving features from Preview to GA (without having to move an entire branch) and introducing new Preview features (without having to create a new branch). From AzAcSnap 6 we will have a single branch with fully supported GA features and Preview features (which are subject to Microsoft's Preview Ts&Cs). It's important to note customers cannot accidentally use Preview features, and must enable them with the `--preview` command line option. This means the next release will be AzAcSnap 7, which could include: patches (if necessary) for GA features, current Preview features moving to GA, or new Preview features.
AzAcSnap 6 is being released with the following fixes and improvements:
azure-netapp-files Azure Netapp Files Mount Unmount Volumes For Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md
Title: Mount Azure NetApp Files volumes for virtual machines | Microsoft Docs
-description: Learn how to mount an Azure NetApp Files volume for Windows or Linux virtual machines.
+ Title: Mount NFS volumes for virtual machines | Microsoft Docs
+description: Learn how to mount an NFS volume for Windows or Linux virtual machines.
Previously updated : 06/13/2022 Last updated : 08/18/2022
-# Mount a volume for Windows or Linux VMs
+# Mount NFS volumes for Linux or Windows VMs
-You can mount an Azure NetApp Files file for Windows or Linux virtual machines (VMs). The mount instructions for Linux virtual machines are available on Azure NetApp Files.
+You can mount an NFS volume for Windows or Linux virtual machines (VMs).
## Requirements
You can mount an Azure NetApp Files file for Windows or Linux virtual machines (
* 4045 TCP/UDP = `nlockmgr` (NFSv3 only)
* 4046 TCP/UDP = `status` (NFSv3 only)
-## Steps
+## Mount NFS volumes on Linux clients
-1. Click the **Volumes** blade, and then select the volume for which you want to mount.
-2. Click **Mount instructions** from the selected volume, and then follow the instructions to mount the volume.
+1. Review the [Linux NFS mount options best practices](performance-linux-mount-options.md).
+2. Select the **Volumes** pane and then the NFS volume that you want to mount.
+3. To mount the NFS volume using a Linux client, select **Mount instructions** from the selected volume. Follow the displayed instructions to mount the volume.
+ :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png" alt-text="Screenshot of Mount instructions." lightbox="../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png":::
+ * Ensure that you use the `vers` option in the `mount` command to specify the NFS protocol version that corresponds to the volume you want to mount.
+ For example, if the NFS version is NFSv4.1:
+ `sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp,sec=sys $MOUNTTARGETIPADDRESS:/$VOLUMENAME $MOUNTPOINT`
+ * If you use NFSv4.1 and your configuration requires using VMs with the same host names (for example, in a DR test), refer to [Configure two VMs with the same hostname to access NFSv4.1 volumes](configure-nfs-clients.md#configure-two-vms-with-the-same-hostname-to-access-nfsv41-volumes).
+4. If you want the volume mounted automatically when an Azure VM is started or rebooted, add an entry to the `/etc/fstab` file on the host.
+ For example: `$ANFIP:/$FILEPATH /$MOUNTPOINT nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=3,tcp,_netdev 0 0`
+ * `$ANFIP` is the IP address of the Azure NetApp Files volume found in the volume properties menu
+ * `$FILEPATH` is the export path of the Azure NetApp Files volume
+ * `$MOUNTPOINT` is the directory created on the Linux host used to mount the NFS export
+5. If you want to mount an NFS Kerberos volume, refer to [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) for additional details.
+6. You can also access SMB volumes from Unix and Linux clients via NFS by setting the protocol access for the volume to "dual-protocol". This allows for accessing the volume via NFS (NFSv3 or NFSv4.1) and SMB. See [Create a dual-protocol volume](create-volumes-dual-protocol.md) for details. Take note of the security style mappings table. Mounting a dual-protocol volume from Unix and Linux clients relies on the same procedure as regular NFS volumes.
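As a quick sanity check after mounting, commands along these lines confirm the volume is attached and show its reported capacity; the mount point is a placeholder for the directory you created on the Linux host.

```bash
# Verify the NFS mount and its capacity (replace /mnt/anfvolume with your mount point).
mount | grep nfs
df -h /mnt/anfvolume
```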
- ![Mount instructions NFS](../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png)
+## Mount NFS volumes on Windows clients
- ![Mount instructions SMB](../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png)
- * If you are mounting an NFS volume, ensure that you use the `vers` option in the `mount` command to specify the NFS protocol version that corresponds to the volume you want to mount.
- * If you are using NFSv4.1, use the following command to mount your file system: `sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp,sec=sys $MOUNTTARGETIPADDRESS:/$VOLUMENAME $MOUNTPOINT`
- > [!NOTE]
- > If you use NFSv4.1 and your use case involves leveraging VMs with the same hostnames (for example, in a DR test), see [Configure two VMs with the same hostname to access NFSv4.1 volumes](configure-nfs-clients.md#configure-two-vms-with-the-same-hostname-to-access-nfsv41-volumes).
+Mounting NFSv4.1 volumes on Windows clients is supported. For more information, see [Network File System overview](/windows-server/storage/nfs/nfs-overview).
-3. If you want to have an NFS volume automatically mounted when an Azure VM is started or rebooted, add an entry to the `/etc/fstab` file on the host.
+If you want to mount NFSv3 volumes on a Windows client using NFS:
- For example: `$ANFIP:/$FILEPATH /$MOUNTPOINT nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=3,tcp,_netdev 0 0`
-
- * `$ANFIP` is the IP address of the Azure NetApp Files volume found in the volume properties blade.
- * `$FILEPATH` is the export path of the Azure NetApp Files volume.
- * `$MOUNTPOINT` is the directory created on the Linux host used to mount the NFS export.
-
-4. If you want to mount the volume to Windows using NFS:
-
- > [!NOTE]
- > One alternative to mounting an NFS volume on Windows is to [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md), allowing the native access of SMB for Windows and NFS for Linux. However, if that is not possible, you can mount the NFS volume on Windows using the steps below.
-
- * Set the permissions to allow the volume to be mounted on Windows
- * Follow the steps to [Configure Unix permissions and change ownership mode for NFS and dual-protocol volumes](configure-unix-permissions-change-ownership-mode.md#unix-permissions) and set the permissions to '777' or '775'.
- * Install NFS client on Windows
- * Open PowerShell
- * type: `Install-WindowsFeature -Name NFS-Client`
- * Mount the volume via the NFS client on Windows
- * Obtain the 'mount path' of the volume
- * Open a Command prompt
- * type: `mount -o anon -o mtype=hard \\$ANFIP\$FILEPATH $DRIVELETTER:\`
- * `$ANFIP` is the IP address of the Azure NetApp Files volume found in the volume properties blade.
- * `$FILEPATH` is the export path of the Azure NetApp Files volume.
- * `$DRIVELETTER` is the drive letter where you would like the volume mounted within Windows.
-
-5. If you want to mount an NFS Kerberos volume, see [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) for additional details.
+1. [Mount the volume onto a Unix or Linux VM first](#mount-nfs-volumes-on-linux-clients).
+1. Run a `chmod 777` or `chmod 775` command against the volume.
+1. Mount the volume via the NFS client on Windows using the mount option `mtype=hard` to reduce connection issues.
+ See [Windows command line utility for mounting NFS volumes](/windows-server/administration/windows-commands/mount) for more detail.
+ For example: `Mount -o rsize=256 -o wsize=256 -o mtype=hard \\10.x.x.x\testvol X:* `
+1. You can also access NFS volumes from Windows clients via SMB by setting the protocol access for the volume to "dual-protocol". This setting allows access to the volume via SMB and NFS (NFSv3 or NFSv4.1) and will result in better performance than using the NFS client on Windows with an NFS volume. See [Create a dual-protocol volume](create-volumes-dual-protocol.md) for details, and take note of the security style mappings table. Mounting a dual-protocol volume from Windows clients uses the same procedure as regular SMB volumes.
## Next steps
+* [Mount SMB volumes for Windows or Linux virtual machines](mount-volumes-vms-smb.md)
+* [Linux NFS mount options best practices](performance-linux-mount-options.md)
* [Configure NFSv4.1 default domain for Azure NetApp Files](azure-netapp-files-configure-nfsv41-domain.md) * [NFS FAQs](faq-nfs.md) * [Network File System overview](/windows-server/storage/nfs/nfs-overview)
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 08/11/2022 Last updated : 08/19/2022 # Solution architectures using Azure NetApp Files
This section provides references for solutions for Linux OSS applications and da
* [General mainframe refactor to Azure - Azure Example Scenarios](/azure/architecture/example-scenario/mainframe/general-mainframe-refactor)
* [Refactor mainframe applications with Advanced - Azure Example Scenarios](/azure/architecture/example-scenario/mainframe/refactor-mainframe-applications-advanced)
+* [Refactor mainframe applications with Astadia - Azure Example Scenarios](/azure/architecture/example-scenario/mainframe/refactor-mainframe-applications-astadia)
+* [Refactor mainframe computer systems that run Adabas & Natural - Azure Example Scenarios](/azure/architecture/example-scenario/mainframe/refactor-adabas-aks)
+* [Refactor IBM z/OS mainframe coupling facility (CF) to Azure - Azure Example Scenarios](/azure/architecture/reference-architectures/zos/refactor-zos-coupling-facility)
+* [Refactor mainframe applications to Azure with Raincode compilers - Azure Example Scenarios](/azure/architecture/reference-architectures/app-modernization/raincode-reference-architecture)
+ ### Oracle
This section provides references for Virtual Desktop infrastructure solutions.
* [Azure Virtual Desktop at enterprise scale](/azure/architecture/example-scenario/wvd/windows-virtual-desktop)
* [Microsoft FSLogix for the enterprise - Azure NetApp Files best practices](/azure/architecture/example-scenario/wvd/windows-virtual-desktop-fslogix#azure-netapp-files-best-practices)
* [Setting up Azure NetApp Files for MSIX App Attach](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/setting-up-azure-netapp-files-for-msix-app-attach-step-by-step/m-p/1990021)
+* [Multiple forests with AD DS and Azure AD - Azure Example Scenarios](/azure/architecture/example-scenario/wvd/multi-forest)
+* [Multiregion Business Continuity and Disaster Recovery (BCDR) for Azure Virtual Desktop - Azure Example Scenarios](/azure/architecture/example-scenario/wvd/azure-virtual-desktop-multi-region-bcdr)
+* [Deploy Esri ArcGIS Pro in Azure Virtual Desktop - Azure Example Scenarios](/azure/architecture/example-scenario/data/esri-arcgis-azure-virtual-desktop)
+ ### Citrix
This section provides solutions for Azure platform services.
### Azure Red Hat Openshift * [Using Trident to Automate Azure NetApp Files from OpenShift](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/using-trident-to-automate-azure-netapp-files-from-openshift/ba-p/2367351)
+* [Deploy IBM Maximo Application Suite on Azure - Azure Example Scenarios](/azure/architecture/example-scenario/apps/deploy-ibm-maximo-application-suite)
### Azure Batch
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 07/26/2021 Last updated : 08/19/2022
Azure NetApp Files backup is supported for the following regions:
* Japan East * North Europe * South Central US
-* UK South
* West Europe * West US * West US 2
azure-netapp-files Configure Kerberos Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-kerberos-encryption.md
The following requirements apply to NFSv4.1 client encryption:
* Ensure that User Principal Names for user accounts do *not* end with a `$` symbol (for example, user$@REALM.COM). <!-- Not using 'contoso.com' in this example; per Mark, A customers REALM namespace may be different from their AD domain name space. --> For [Group managed service accounts](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts) (gMSA), you need to remove the trailing `$` from the User Principal Name before the account can be used with the Azure NetApp Files Kerberos feature. - ## Create an NFS Kerberos Volume 1. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create the NFSv4.1 volume.
azure-netapp-files Faq Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-performance.md
Previously updated : 02/07/2022 Last updated : 08/18/2022 # Performance FAQs for Azure NetApp Files
Azure NetApp Files provides volume performance metrics. You can also use Azure M
See [Performance impact of Kerberos on NFSv4.1 volumes](performance-impact-kerberos.md) for information about security options for NFSv4.1, the performance vectors tested, and the expected performance impact.
+## What's the performance impact of using `nconnect` with Kerberos?
++ ## Does Azure NetApp Files support SMB Direct? No, Azure NetApp Files does not support SMB Direct.
azure-netapp-files Mount Volumes Vms Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/mount-volumes-vms-smb.md
+
+ Title: Mount SMB volumes for Windows VMs | Microsoft Docs
+description: Learn how to mount SMB volumes for Windows virtual machines.
+++++ Last updated : 08/18/2022+
+# Mount SMB volumes for Windows VMs
+
+You can mount an SMB volume for Windows virtual machines (VMs).
+
+## Mount SMB volumes on a Windows client
+
+1. Select the **Volumes** menu and then the SMB volume that you want to mount.
+1. To mount the SMB volume using a Windows client, select **Mount instructions** from the selected volume. Follow the displayed instructions to mount the volume.
+ :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png" alt-text="Screenshot of Mount instructions." lightbox="../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png":::
+
+## Next steps
+
+* [Mount NFS volumes for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md)
+* [SMB FAQs](faq-smb.md)
+* [Network File System overview](/windows-server/storage/nfs/nfs-overview)
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-mount-options.md
na Previously updated : 05/05/2022 Last updated : 08/19/2022 # Linux NFS mount options best practices for Azure NetApp Files
When you use `nconnect`, keep the following rules in mind:
For details, see [Linux concurrency best practices for Azure NetApp Files](performance-linux-concurrency-session-slots.md).
+### `Nconnect` considerations
++ ## `Rsize` and `Wsize` Examples in this section provide information about how to approach performance tuning. You might need to make adjustments to suit your specific application needs.
sudo vi /etc/fstab
10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 ```
-Also for example, SAS Viya recommends a 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased read-ahead for the NFS mounts. See [NFS read-ahead best practices for Azure NetApp Files](performance-linux-nfs-read-ahead.md) for details.
+For example, SAS Viya recommends 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased read-ahead for the NFS mounts. See [NFS read-ahead best practices for Azure NetApp Files](performance-linux-nfs-read-ahead.md) for details.
The following considerations apply to the use of `rsize` and `wsize`:
The attributes `acregmin`, `acregmax`, `acdirmin`, and `acdirmax` control the co
For example, consider the default `acregmin` and `acregmax` values, 3 and 30 seconds, respectively. Suppose the attributes of the files in a directory are repeatedly evaluated. After 3 seconds, the NFS service is queried for freshness. If the attributes are deemed valid, the client doubles the trusted time to 6 seconds, then 12 seconds, then 24 seconds, and then, because the maximum is set to 30, to 30 seconds. From that point on, until the cached attributes are deemed out of date (at which point the cycle starts over), the trusted time remains at 30 seconds, the value specified by `acregmax`.
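To make that doubling behavior concrete, here's a small illustrative Python sketch. It's not part of any Azure NetApp Files or NFS client tooling; the function name and defaults are only for illustration of the interval growth described above.

```python
def attribute_cache_intervals(acregmin=3, acregmax=30, revalidations=6):
    """Model the attribute cache behavior described above: the trusted
    interval starts at acregmin seconds and doubles after each successful
    revalidation until it's capped at acregmax seconds."""
    interval, intervals = acregmin, []
    for _ in range(revalidations):
        intervals.append(interval)
        interval = min(interval * 2, acregmax)
    return intervals

print(attribute_cache_intervals())  # [3, 6, 12, 24, 30, 30]
```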
-There are other cases that can benefit from a similar set of mount options, even when there is no complete ownership by the clients, for example, if the clients use the data as read only and data update is managed through another path. For applications that use grids of clients like EDA, web hosting and movie rendering and have relatively static data sets (EDA tools or libraries, web content, texture data), the typical behavior is that the data set is largely cached on the clients. There are very few reads and no writes. There will be many `getattr`/access calls coming back to storage. These data sets are typically updated through another client mounting the file systems and periodically pushing content updates.
+There are other cases that can benefit from a similar set of mount options, even when there's no complete ownership by the clients, for example, if the clients use the data as read only and data update is managed through another path. For applications that use grids of clients like EDA, web hosting and movie rendering and have relatively static data sets (EDA tools or libraries, web content, texture data), the typical behavior is that the data set is largely cached on the clients. There are few reads and no writes. There will be many `getattr`/access calls coming back to storage. These data sets are typically updated through another client mounting the file systems and periodically pushing content updates.
-In these cases, there is a known lag in picking up new content and the application still works with potentially out-of-date data. In these cases, `nocto` and `actimeo` can be used to control the period where out-of-data date can be managed. For example, in EDA tools and libraries, `actimeo=600` works well because this data is typically updated infrequently. For small web hosting where clients need to see their data updates timely as they are editing their sites, `actimeo=10` might be acceptable. For large-scale web sites where there is content pushed to multiple file systems, `actimeo=60` might be acceptable.
+In these cases, there's a known lag in picking up new content and the application still works with potentially out-of-date data. In these cases, `nocto` and `actimeo` can be used to control how long out-of-date data can be tolerated. For example, in EDA tools and libraries, `actimeo=600` works well because this data is typically updated infrequently. For small web hosting where clients need to see their data updates timely as they're editing their sites, `actimeo=10` might be acceptable. For large-scale web sites where there's content pushed to multiple file systems, `actimeo=60` might be acceptable.
Using these mount options significantly reduces the workload to storage in these cases. (For example, a recent EDA experience reduced IOPs to the tool volume from >150 K to ~6 K.) Applications can run significantly faster because they can trust the data in memory. (Memory access time is nanoseconds vs. hundreds of microseconds for `getattr`/access on a fast network.)
Close-to-open consistency (the `cto` mount option) ensures that no matter the st
* When a directory is crawled (`ls`, `ls -l` for example), a certain set of RPC calls are issued. The NFS server shares its view of the filesystem. As long as `cto` is used by all NFS clients accessing a given NFS export, all clients will see the same list of files and directories therein. The freshness of the attributes of the files in the directory is controlled by the [attribute cache timers](#how-attribute-cache-timers-work). In other words, as long as `cto` is used, files appear to remote clients as soon as the file is created and the file lands on the storage. * When a file is opened, the content of the file is guaranteed fresh from the perspective of the NFS server.
- If there is a race condition where the content has not finished flushing from Machine 1 when a file is opened on Machine 2, Machine 2 will only receive the data present on the server at the time of the open. In this case, Machine 2 will not retrieve more data from the file until the `acreg` timer is reached, and Machine 2 checks its cache coherency from the server again. This scenario can be observed using a tail `-f` from Machine 2 when the file is still being written to from Machine 1.
+  If there's a race condition where the content has not finished flushing from Machine 1 when a file is opened on Machine 2, Machine 2 will only receive the data present on the server at the time of the open. In this case, Machine 2 will not retrieve more data from the file until the `acreg` timer is reached, and Machine 2 checks its cache coherency from the server again. This scenario can be observed using `tail -f` from Machine 2 when the file is still being written to from Machine 1.
### No close-to-open consistency
azure-resource-manager Publish Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-notifications.md
Title: Managed apps with notifications
-description: Configure managed applications with webhook endpoints to receive notifications about creates, updates, deletes, and errors on the managed application instances.
+ Title: Azure managed applications with notifications
+description: Configure an Azure managed application with webhook endpoints to receive notifications about creates, updates, deletes, and errors on the managed application instances.
Previously updated : 11/01/2019 Last updated : 08/18/2022 + # Azure managed applications with notifications
-Azure managed application notifications allow publishers to automate actions based on lifecycle events of the managed application instances. Publishers can specify custom notification webhook endpoints to receive event notifications about new and existing managed application instances. Publishers can set up custom workflows at the time of application provisioning, updates, and deletion.
+Azure managed application notifications allow publishers to automate actions based on lifecycle events of the managed application instances. Publishers can specify a custom notification webhook endpoint to receive event notifications about new and existing managed application instances. Publishers can set up custom workflows at the time of application provisioning, updates, and deletion.
## Getting started
-To start receiving managed applications, spin up a public HTTPS endpoint and specify it when you publish the service catalog application definition or Azure Marketplace offer.
+
+To start receiving managed application notifications, create a public HTTPS endpoint. Specify the endpoint when you publish the service catalog application definition or Microsoft Azure Marketplace offer.
Here are the recommended steps to get started quickly:
-1. Spin up a public HTTPS endpoint that logs the incoming POST requests and returns `200 OK`.
-2. Add the endpoint to the service catalog application definition or Azure Marketplace offer as explained later in this article.
-3. Create a managed application instance that references the application definition or Azure Marketplace offer.
-4. Validate that the notifications are being received.
-5. Enable authorization as explained in the **Endpoint authentication** section of this article.
-6. Follow the instructions in the **Notification schema** section of this article to parse the notification requests and implement your business logic based on the notification.
+
+1. Create a public HTTPS endpoint that logs the incoming POST requests and returns `200 OK`.
+1. Add the endpoint to the service catalog application definition or Azure Marketplace offer as explained later in this article.
+1. Create a managed application instance that references the application definition or Azure Marketplace offer.
+1. Validate that the notifications are being received.
+1. Enable authorization as explained in the [Endpoint authentication](#endpoint-authentication) section of this article.
+1. Follow the instructions in the [Notification schema](#notification-schema) section of this article to parse the notification requests and implement your business logic based on the notification.
## Add service catalog application definition notifications
-#### Azure portal
+
+The following examples show how to add a notification endpoint URI using the portal or REST API.
+
+### Azure portal
+ To get started, see [Publish a service catalog application through Azure portal](./publish-portal.md).
-![Service catalog application definition notifications in the Azure portal](./media/publish-notifications/service-catalog-notifications.png)
-#### REST API
+### REST API
> [!NOTE]
-> Currently, you can supply only one endpoint in the `notificationEndpoints` in the application definition properties.
+> You can only supply one endpoint in the `notificationEndpoints` property of the managed application definition.
``` JSON
- {
- "properties": {
- "isEnabled": true,
- "lockLevel": "ReadOnly",
- "displayName": "Sample Application Definition",
- "description": "Notification-enabled application definition.",
- "notificationPolicy": {
- "notificationEndpoints": [
- {
- "uri": "https://isv.azurewebsites.net:1214?sig=unique_token"
- }
- ]
- },
- "authorizations": [
- {
- "principalId": "d6b7fbd3-4d99-43fe-8a7a-f13aef11dc18",
- "roleDefinitionId": "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
- },
- ...
-
+{
+ "properties": {
+ "isEnabled": true,
+ "lockLevel": "ReadOnly",
+ "displayName": "Sample Application Definition",
+ "description": "Notification-enabled application definition.",
+ "notificationPolicy": {
+ "notificationEndpoints": [
+ {
+ "uri": "https://isv.azurewebsites.net:1214?sig=unique_token"
+ }
+ ]
+ },
+ "authorizations": [
+ {
+ "principalId": "d6b7fbd3-4d99-43fe-8a7a-f13aef11dc18",
+ "roleDefinitionId": "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
+ },
+ ...
```+ ## Add Azure Marketplace managed application notifications+ For more information, see [Create an Azure application offer](../../marketplace/azure-app-offer-setup.md).
-![Azure Marketplace managed application notifications in the Azure portal](./media/publish-notifications/marketplace-notifications.png)
+ ## Event triggers
-The following table describes all the possible combinations of EventType and ProvisioningState and their triggers:
+
+The following table describes all the possible combinations of `eventType` and `provisioningState` and their triggers:
EventType | ProvisioningState | Trigger for notification
---|---|---
PATCH | Succeeded | After a successful PATCH on the managed application instance
DELETE | Deleting | As soon as the user initiates a DELETE of a managed application instance.
DELETE | Deleted | After the full and successful deletion of the managed application.
DELETE | Failed | After any error during the deprovisioning process that blocks the deletion.
+
## Notification schema
-When you spin up your webhook endpoint to handle notifications, you'll need to parse the payload to get important properties to then act upon the notification. Service catalog and Azure Marketplace managed application notifications provide many of the same properties. Two small differences are outlined in the table that follows the samples.
-#### Service catalog application notification schema
-Here's a sample service catalog notification after the successful provisioning of a managed application instance:
+When you create your webhook endpoint to handle notifications, you'll need to parse the payload to get important properties to then act upon the notification. Service catalog and Azure Marketplace managed application notifications provide many of the same properties, but there are some differences. The `applicationDefinitionId` property only applies to service catalog. The `billingDetails` and `plan` properties only apply to Azure Marketplace.
+
+Azure appends `/resource` to the notification endpoint URI you provided in the managed application definition. The webhook endpoint must be able to handle notifications on the `/resource` URI. For example, if you provided a notification endpoint URI like `https://fabrikam.com` then the webhook endpoint URI is `https://fabrikam.com/resource`.
+
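The following minimal Python sketch shows one way to stand up such an endpoint for testing: it accepts notification POSTs on the `/resource` path, logs the payload, and returns `200 OK`. The port number is arbitrary and a production endpoint would be served over HTTPS behind a real web server; treat this as a sketch under those assumptions, not a required implementation.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Azure appends /resource to the endpoint URI you registered.
        if not self.path.startswith("/resource"):
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print(f"{payload.get('eventType')} notification, "
              f"provisioningState={payload.get('provisioningState')}, "
              f"applicationId={payload.get('applicationId')}")
        # Return 200 OK so the notification service treats delivery as successful.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), NotificationHandler).serve_forever()
```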
+### Service catalog application notification schema
+
+The following sample shows a service catalog notification after the successful provisioning of a managed application instance.
+ ``` HTTP POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_parameter_value} HTTP/1.1 {
- "eventType": "PUT",
- "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
- "eventTime": "2019-08-14T19:20:08.1707163Z",
- "provisioningState": "Succeeded",
- "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>"
+ "eventType": "PUT",
+ "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "eventTime": "2019-08-14T19:20:08.1707163Z",
+ "provisioningState": "Succeeded",
+ "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>"
}- ``` If the provisioning fails, a notification with the error details will be sent to the specified endpoint.
If the provisioning fails, a notification with the error details will be sent to
POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_parameter_value} HTTP/1.1 {
- "eventType": "PUT",
- "applicationId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
- "eventTime": "2019-08-14T19:20:08.1707163Z",
- "provisioningState": "Failed",
- "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>",
- "error": {
- "code": "ErrorCode",
- "message": "error message",
- "details": [
- {
- "code": "DetailedErrorCode",
- "message": "error message"
- }
- ]
- }
+ "eventType": "PUT",
+ "applicationId": "subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "eventTime": "2019-08-14T19:20:08.1707163Z",
+ "provisioningState": "Failed",
+ "applicationDefinitionId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applicationDefinitions/<appDefName>",
+ "error": {
+ "code": "ErrorCode",
+ "message": "error message",
+ "details": [
+ {
+ "code": "DetailedErrorCode",
+ "message": "error message"
+ }
+ ]
+ }
}- ```
-#### Azure Marketplace application notification schema
+### Azure Marketplace application notification schema
+
+The following sample shows an Azure Marketplace notification after the successful provisioning of a managed application instance.
-Here's a sample service catalog notification after the successful provisioning of a managed application instance:
``` HTTP POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_parameter_value} HTTP/1.1 {
- "eventType": "PUT",
- "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
- "eventTime": "2019-08-14T19:20:08.1707163Z",
- "provisioningState": "Succeeded",
- "billingDetails": {
- "resourceUsageId":"<resourceUsageId>"
- },
- "plan": {
- "publisher": "publisherId",
- "product": "offer",
- "name": "skuName",
- "version": "1.0.1"
- }
+ "eventType": "PUT",
+ "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "eventTime": "2019-08-14T19:20:08.1707163Z",
+ "provisioningState": "Succeeded",
+ "billingDetails": {
+ "resourceUsageId": "<resourceUsageId>"
+ },
+ "plan": {
+ "publisher": "publisherId",
+ "product": "offer",
+ "name": "skuName",
+ "version": "1.0.1"
+ }
}- ``` If the provisioning fails, a notification with the error details will be sent to the specified endpoint.
If the provisioning fails, a notification with the error details will be sent to
POST https://{your_endpoint_URI}/resource?{optional_parameter}={optional_parameter_value} HTTP/1.1 {
- "eventType": "PUT",
- "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
- "eventTime": "2019-08-14T19:20:08.1707163Z",
- "provisioningState": "Failed",
- "billingDetails": {
- "resourceUsageId":"<resourceUsageId>"
- },
- "plan": {
- "publisher": "publisherId",
- "product": "offer",
- "name": "skuName",
- "version": "1.0.1"
- },
- "error": {
- "code": "ErrorCode",
- "message": "error message",
- "details": [
- {
- "code": "DetailedErrorCode",
- "message": "error message"
- }
- ]
- }
+ "eventType": "PUT",
+ "applicationId": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Solutions/applications/<applicationName>",
+ "eventTime": "2019-08-14T19:20:08.1707163Z",
+ "provisioningState": "Failed",
+ "billingDetails": {
+ "resourceUsageId": "<resourceUsageId>"
+ },
+ "plan": {
+ "publisher": "publisherId",
+ "product": "offer",
+ "name": "skuName",
+ "version": "1.0.1"
+ },
+ "error": {
+ "code": "ErrorCode",
+ "message": "error message",
+ "details": [
+ {
+ "code": "DetailedErrorCode",
+ "message": "error message"
+ }
+ ]
+ }
}- ```
-Parameter | Description
+Property | Description
|
-eventType | The type of event that triggered the notification. (For example, PUT, PATCH, DELETE.)
-applicationId | The fully qualified resource identifier of the managed application for which the notification was triggered.
-eventTime | The timestamp of the event that triggered the notification. (Date and time in UTC ISO 8601 format.)
-provisioningState | The provisioning state of the managed application instance. (For example, Succeeded, Failed, Deleting, Deleted.)
-error | *Specified only when the provisioningState is Failed*. Contains the error code, message, and details of the issue that caused the failure.
-applicationDefinitionId | *Specified only for service catalog managed applications*. Represents the fully qualified resource identifier of the application definition for which the managed application instance was provisioned.
-plan | *Specified only for Azure Marketplace managed applications*. Represents the publisher, offer, SKU, and version of the managed application instance.
-billingDetails | *Specified only for Azure Marketplace managed applications.* The billing details of the managed application instance. Contains the resourceUsageId that you can use to query Azure Marketplace for usage details.
+`eventType` | The type of event that triggered the notification. (For example, PUT, PATCH, DELETE.)
+`applicationId` | The fully qualified resource identifier of the managed application for which the notification was triggered.
+`eventTime` | The timestamp of the event that triggered the notification. (Date and time in UTC ISO 8601 format.)
+`provisioningState` | The provisioning state of the managed application instance. For example, Succeeded, Failed, Deleting, Deleted.
+`applicationDefinitionId` | _Specified only for service catalog managed applications_. Represents the fully qualified resource identifier of the application definition for which the managed application instance was provisioned.
+`billingDetails` | _Specified only for Azure Marketplace managed applications_. The billing details of the managed application instance. Contains the `resourceUsageId` that you can use to query Azure Marketplace for usage details.
+`plan` | _Specified only for Azure Marketplace managed applications_. Represents the publisher, offer, SKU, and version of the managed application instance.
+`error` | _Specified only when the provisioningState is Failed_. Contains the error code, message, and details of the issue that caused the failure.
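As a sketch only (your business logic will differ), a handler might branch on these properties as shown below, using the presence of `applicationDefinitionId` versus `plan`/`billingDetails` to tell service catalog notifications apart from Azure Marketplace ones:

```python
def handle_notification(payload: dict) -> None:
    """Branch on the notification properties described in the table above."""
    event = payload["eventType"]              # PUT, PATCH, or DELETE
    state = payload["provisioningState"]      # Succeeded, Failed, Deleting, Deleted
    app_id = payload["applicationId"]

    # Service catalog notifications carry applicationDefinitionId;
    # Azure Marketplace notifications carry plan and billingDetails instead.
    source = "service catalog" if "applicationDefinitionId" in payload else "marketplace"

    if state == "Failed":
        error = payload.get("error", {})
        print(f"{event} failed for {app_id} ({source}): "
              f"{error.get('code')}: {error.get('message')}")
    elif event in ("PUT", "PATCH") and state == "Succeeded":
        print(f"Provisioned or updated {source} application {app_id}")
    elif event == "DELETE" and state == "Deleted":
        print(f"Deleted {source} application {app_id}")
```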
## Endpoint authentication+ To secure the webhook endpoint and ensure the authenticity of the notification:
-1. Provide a query parameter on top of the webhook URI, like this: https\://your-endpoint.com?sig=Guid. With each notification, check that the query parameter `sig` has the expected value `Guid`.
-2. Issue a GET on the managed application instance by using applicationId. Validate that the provisioningState matches the provisioningState of the notification to ensure consistency.
+
+1. Provide a query parameter on top of the webhook URI, like this: `https://your-endpoint.com?sig=Guid`. With each notification, check that the query parameter `sig` has the expected value `Guid`.
+1. Issue a GET on the managed application instance by using `applicationId`. Validate that the `provisioningState` matches the `provisioningState` of the notification to ensure consistency.
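A hedged Python sketch of both checks follows. The `sig` value is whatever token you appended to your webhook URI, and the cross-check issues a plain Azure Resource Manager GET on the `applicationId`. The `api-version` shown is an assumption; substitute one that's valid for `Microsoft.Solutions/applications` in your environment, and supply a valid ARM bearer token.

```python
import requests  # third-party HTTP client (pip install requests)

def is_trusted(received_sig: str, expected_sig: str) -> bool:
    """Step 1: verify the sig query parameter you appended to the webhook URI."""
    return received_sig == expected_sig

def matches_live_state(application_id: str, notified_state: str, arm_token: str) -> bool:
    """Step 2: GET the managed application instance and compare provisioningState.
    The api-version below is an assumption; adjust it for your environment."""
    response = requests.get(
        f"https://management.azure.com{application_id}",
        params={"api-version": "2019-07-01"},
        headers={"Authorization": f"Bearer {arm_token}"},
        timeout=30,
    )
    response.raise_for_status()
    live_state = response.json()["properties"]["provisioningState"]
    return live_state == notified_state
```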
## Notification retries
-The Managed Application Notification service expects a `200 OK` response from the webhook endpoint to the notification. The notification service will retry if the webhook endpoint returns an HTTP error code greater than or equal to 500, if it returns an error code of 429, or if the endpoint is temporarily unreachable. If the webhook endpoint doesn't become available within 10 hours, the notification message will be dropped and the retries will stop.
+The managed application notification service expects a `200 OK` response from the webhook endpoint to the notification. The notification service will retry if the webhook endpoint returns an HTTP error code greater than or equal to 500, returns an error code of 429, or is temporarily unreachable. If the webhook endpoint doesn't become available within 10 hours, the notification message will be dropped, and the retries will stop.
azure-resource-manager Template Tutorial Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-quickstart-template.md
Title: Tutorial - Use quickstart templates
-description: Learn how to use Azure Quickstart templates to complete your template development.
+description: Learn how to use Azure Quickstart Templates to complete your template development.
Previously updated : 03/27/2020 Last updated : 08/17/2022
-# Tutorial: Use Azure Quickstart templates
+# Tutorial: Use Azure Quickstart Templates
-[Azure Quickstart templates](https://azure.microsoft.com/resources/templates/) is a repository of community contributed templates. You can use the sample templates in your template development. In this tutorial, you find a website resource definition, and add it to your own template. It takes about **12 minutes** to complete.
+[Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/) is a repository of community-contributed templates. You can use the sample templates in your template development. In this tutorial, you find a website resource definition and add it to your own template. This tutorial takes about **12 minutes** to complete.
## Prerequisites We recommend that you complete the [tutorial about exported templates](template-tutorial-export-template.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure Command-Line Interface (CLI). For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
-At the end of the previous tutorial, your template had the following JSON:
+At the end of the previous tutorial, your template had the following JSON file:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/export-template/azuredeploy.json":::
This template works for deploying storage accounts and app service plans, but yo
## Find template
-1. Open [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/)
-1. In **Search**, enter _deploy linux web app_.
+1. Open [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/).
1. Select the tile with the title **Deploy a basic Linux web app**. If you have trouble finding it, here's the [direct link](https://azure.microsoft.com/resources/templates/webapp-basic-linux/). 1. Select **Browse on GitHub**. 1. Select _azuredeploy.json_.
-1. Review the template. In particular, look for the `Microsoft.Web/sites` resource.
+1. Review the template. Look for the `Microsoft.Web/sites` resource.
![Resource Manager template quickstart web site](./media/template-tutorial-quickstart-template/resource-manager-template-quickstart-template-web-site.png)
Merge the quickstart template with the existing template:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/quickstart-template/azuredeploy.json" range="1-108" highlight="32-45,49,85-100":::
-The web app name needs to be unique across Azure. To prevent having duplicate names, the `webAppPortalName` variable has been updated from `"webAppPortalName": "[concat(parameters('webAppName'), '-webapp')]"` to `"webAppPortalName": "[concat(parameters('webAppName'), uniqueString(resourceGroup().id))]"`.
+The web app name needs to be unique across Azure. To prevent having duplicate names, the `webAppPortalName` variable is updated from `"webAppPortalName": "[concat(parameters('webAppName'), '-webapp')]"` to `"webAppPortalName": "[concat(parameters('webAppName'), uniqueString(resourceGroup().id))]"`.
Add a comma at the end of the `Microsoft.Web/serverfarms` definition to separate the resource definition from the `Microsoft.Web/sites` definition. There are a couple of important features to note in this new resource.
-You'll notice it has an element named `dependsOn` that's set to the app service plan. This setting is required because the app service plan must exist before the web app is created. The `dependsOn` element tells Resource Manager how to order the resources for deployment.
+It has an element named `dependsOn` that's set to the app service plan. This setting is required because the app service plan needs to exist before the web app is created. The `dependsOn` element tells Resource Manager how to order the resources for deployment.
The `serverFarmId` property uses the [resourceId](template-functions-resource.md#resourceid) function. This function gets the unique identifier for a resource. In this case, it gets the unique identifier for the app service plan. The web app is associated with one specific app service plan.
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli az deployment group create \
az deployment group create \
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources you're creating. Use the `debug` switch to get more information for debugging.
## Clean up resources If you're moving on to the next tutorial, you don't need to delete the resource group.
-If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group.
+If you're stopping now, you might want to delete the resource group.
-1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name.
+1. From the Azure portal, select **Resource groups** from the left menu.
+2. Type the resource group name in the **Filter for any field...** text field.
+3. Check the box next to **myResourceGroup** and select **myResourceGroup** or your resource group name.
4. Select **Delete resource group** from the top menu. ## Next steps
azure-video-analyzer Connect Cameras To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/connect-cameras-to-cloud.md
You can deploy the Video Analyzer edge module to an IoT Edge device on the same
* When cameras/devices need to be shielded from exposure to the internet * When cameras/devices do not have the functionality to connect to IoT Hub independently
-* When power, space, or other considerations permit only a lightweight edge device to be deployed on-premise
+* When power, space, or other considerations permit only a lightweight edge device to be deployed on-premises
The Video Analyzer edge module does not act as a transparent gateway for messaging and telemetry from the camera to IoT Hub, but only as a transparent gateway for video.
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
# Azure Video Indexer account types
-This article gives an overview of Azure Video Indexer accounts and provides links to other articles for more details.
+This article gives an overview of Azure Video Indexer account types and provides links to other articles for more details.
-## Differences between classic, ARM, trial accounts
+## Overview
-Classic and ARM (Azure Resource Manager) are both paid accounts with similar data plane capabilities and pricing. The main difference is that classic accounts control plane is managed by Azure Video Indexer and ARM accounts control plane is managed by Azure Resource Manager.
-Going forward, ARM account support more Azure native features and integrations such as: Azure Monitor, Private endpoints, Service tag and CMK (Customer managed key).
+The first time you visit the [www.videoindexer.ai/](https://www.videoindexer.ai/) website, a trial account is automatically created. A trial Azure Video Indexer account has limitations on the number of indexing minutes, support, and SLA.
-A trial account is automatically created the first time you visit the [www.videoindexer.ai/](https://www.videoindexer.ai/) website. A trial Azure Video Indexer account has limitation on number of videos, support, and SLA. A trial Azure Video Indexer account has limitation on number of videos, support, and SLA.
+With a trial account, Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal).
-### To generate an access token
+> [!NOTE]
+> The trial account is not available on the Azure Government cloud.
+
+You can later create a paid account where you're not limited by the quota. Two types of paid accounts are available to you: Azure Resource Manager (ARM) (currently in preview) and classic (generally available). The main difference between the two is the account management platform. While classic accounts are built on API Management, ARM-based account management is built on Azure, which enables you to apply access control to all services with role-based access control (Azure RBAC) natively.
+
+Make sure to review [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
+
+## Connecting to Azure subscription
+
+With a trial account, you don't have to set up an Azure subscription. When creating a paid account, you need to connect Azure Video Indexer [to your Azure subscription and an Azure Media Services account](connect-to-azure.md).
+
+## To get access to your account
| | ARM-based | Classic | Trial |
|---|---|---|---|
| Get access token | [ARM REST API](https://aka.ms/avam-arm-api) | [Get access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token) | Same as classic |
| Share account | [Azure RBAC (role-based access control)](../role-based-access-control/overview.md) | [Invite users](invite-users.md) | Same as classic |
-### Indexing
-
-* Free trial account: up to 10 hours of free indexing, and up to 40 hours of free indexing for API registered users.
-* Paid unlimited account: for larger scale indexing, create a new Video Indexer account connected to a paid Microsoft Azure subscription.
-
-For more details, see [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-### Create accounts
+## Create accounts
* ARM accounts: [Get started with Azure Video Indexer in Azure portal](create-account-portal.md). **The recommended paid account type is the ARM-based account**. * Upgrade a trial account to an ARM based account and [**import** your content for free](connect-to-azure.md#import-your-content-from-the-trial-account). * Classic accounts: [Create classic accounts using API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account).
-* Connect a classic account to ARM: [Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md).
+* Connect a classic account to ARM: [Connect an existing classic paid Azure Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md).
## Limited access features
azure-video-indexer Animated Characters Recognition How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition-how-to.md
Title: Animated character detection with Azure Video Indexer how to
-description: This how to demonstrates how to use animated character detection with Azure Video Indexer.
+description: This topic demonstrates how to use animated character detection with Azure Video Indexer.
Last updated 12/07/2020
-# Use the animated character detection (preview) with portal and API
+# Use the animated character detection with portal and API
Azure Video Indexer supports detection, grouping, and recognition of characters in animated content, this functionality is available through the Azure portal and through API. Review [this overview](animated-characters-recognition.md) article.
Follow these steps to connect your Custom Vision account to Azure Video Indexer,
1. Select the question mark on the top-right corner of the page and choose **API Reference**. 1. Make sure you're subscribed to API Management by clicking **Products** tab. If you have an API connected you can continue to the next step, otherwise, subscribe. 1. On the developer portal, select the **Complete API Reference** and browse to **Operations**.
-1. Select **Connect Custom Vision Account (PREVIEW)** and select **Try it**.
+1. Select **Connect Custom Vision Account** and select **Try it**.
1. Fill in the required fields and the access token and select **Send**. For more information about how to get the Video Indexer access token go to the [developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token), and see the [relevant documentation](video-indexer-use-apis.md#obtain-access-token-using-the-authorization-api).
Before tagging and training the model, all animated characters will be named ΓÇ£
1. Review each character group: * If the group contains unrelated images, it's recommended to delete these in the Custom Vision website.
- * If there are images that belong to a different character, change the tag on these specific images by select the image, adding the right tag and deleting the wrong tag.
+ * If there are images that belong to a different character, change the tag on these specific images by selecting the image, adding the right tag and deleting the wrong tag.
* If the group isn't correct, meaning it contains mainly non-character images or images from multiple characters, you can delete in Custom Vision website or in Azure Video Indexer insights. * The grouping algorithm will sometimes split your characters to different groups. It's therefore recommended to give all the groups that belong to the same character the same name (in Azure Video Indexer Insights), which will immediately cause all these groups to appear as on in Custom Vision website. 1. Once the group is refined, make sure the initial name you tagged it with reflects the character in the group.
Once trained, any video that will be indexed or reindexed with that model will r
1. Create an animated characters model. Use the [create animation model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Animation-Model) API.
-1. Index or re-index a video.
+1. Index or reindex a video.
Use the [re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API. 1. Customize the animated characters models.
azure-video-indexer Animated Characters Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition.md
Last updated 11/19/2019 -
-# Animated character detection (preview)
+# Animated character detection
Azure Video Indexer supports detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). This functionality is available both through the portal and through the API.
Before you start training your model, the characters are detected namelessly. As
The following diagram demonstrates the flow of the animated character detection process.
-![Flow diagram](./media/animated-characters-recognition/flow.png)
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/animated-characters-recognition/flow.png" alt-text="Image of a flow diagram." lightbox="./media/animated-characters-recognition/flow.png":::
## Accounts
Depending on a type of your Azure Video Indexer account, different feature sets
|||| |Custom Vision account|Managed behind the scenes by Azure Video Indexer. |Your Custom Vision account is connected to Azure Video Indexer.| |Number of animation models|One|Up to 100 models per account (Custom Vision limitation).|
-|Training the model|Azure Video Indexer trains the model for new characters additional examples of existing characters.|The account owner trains the model when they are ready to make changes.|
+|Training the model|Azure Video Indexer trains the model for new characters additional examples of existing characters.|The account owner trains the model when they're ready to make changes.|
|Advanced options in Custom Vision|No access to the Custom Vision portal.|You can adjust the models yourself in the Custom Vision portal.| ## Use the animated character detection with portal and API
For details, see [Use the animated character detection with portal and API](anim
## Limitations
-* Currently, the "animation identification" capability is not supported in East-Asia region.
+* Currently, the "animation identification" capability isn't supported in East-Asia region.
* Characters that appear to be small or far in the video may not be identified properly if the video's quality is poor. * The recommendation is to use a model per set of animated characters (for example per an animated series).
azure-video-indexer Compliance Privacy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compliance-privacy-security.md
+
+ Title: Azure Video Indexer compliance, privacy and security
+description: This article discusses Azure Video Indexer compliance, privacy and security.
+ Last updated : 08/18/2022+++
+# Compliance, Privacy and Security
+
+As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
+
+Before uploading any video/image to Azure Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
+
+To learn about compliance, privacy and security in Azure Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
+
+## Next steps
+
+[Azure Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
Last updated 06/10/2022
-# Tutorial: create an account with Azure portal
+# Tutorial: create an ARM-based account with Azure portal
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-This tutorial walks you through the steps of creating an Azure Video Indexer account and its accompanying resources by using the Azure portal. The created account is an Azure Resource Manager (ARM) based account. For information about different Azure Video Indexer account types, see the [Overview of account types](accounts-overview.md) topic.
+This tutorial walks you through the steps of creating an Azure Video Indexer account and its accompanying resources by using the Azure portal. The created account is an Azure Resource Manager (ARM) based account (currently in preview). For information about different Azure Video Indexer account types, see the [Overview of account types](accounts-overview.md) topic.
## Prerequisites
azure-video-indexer Observed People Featured Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-featured-clothing.md
+
+ Title: People's featured clothing
+description: This article gives an overview of featured clothing images appearing in a video.
+ Last updated : 11/15/2021+++
+# People's featured clothing (preview)
+
+Azure Video Indexer enables you to get data on the featured clothing of an observed person. The people's featured clothing feature helps enable the following scenarios:
+
+- Ads placement - using the featured clothing insight information, you can enable more targeted ads placement.
+- Video summarization - you can create a summary of the most interesting outfits appearing in the video.
+
+## Viewing featured clothing
+
+The featured clothing insight is available when indexing your file by choosing the Advanced option -> Advanced video or Advanced video + audio preset (under Video + audio indexing). Standard indexing will not include this insight.
++
+The featured clothing images are ranked based on factors such as the key moments of the video and the general emotions detected from text or audio. The `id` property indicates the ranking index. For example, `"id": 1` signifies the most important featured clothing.
+
+> [!NOTE]
+> The featured clothing currently can be viewed only from the artifact file.
+
+1. In the upper-right corner, download the artifact zip file by selecting **Download** -> **Artifact (ZIP)**.
+1. Open `featuredclothing.zip`.
+
+The .zip file contains two objects:
+
+- `featuredclothing.map.json` - the file contains instances of each featured clothing, with the following properties:
+
+  - `id` - ranking index (`"id": 1` is the most important clothing).
+  - `confidence` - the score of the featured clothing.
+  - `frameIndex` - the best frame of the clothing.
+  - `timestamp` - corresponds to the `frameIndex`.
+  - `opBoundingBox` - bounding box of the person.
+  - `faceBoundingBox` - bounding box of the person's face, if detected.
+  - `fileName` - where the best frame of the clothing is saved.
+
+ An example of the featured clothing with `"id": 1`.
+
+ ```
+ "instances": [
+ {
+ "confidence": 0.98,
+ "faceBoundingBox": {
+ "x": 0.50158,
+ "y": 0.10508,
+ "width": 0.13589,
+ "height": 0.45372
+ },
+ "fileName": "frame_12147.jpg",
+ "frameIndex": 12147,
+ "id": 1,
+ "opBoundingBox": {
+ "x": 0.34141,
+ "y": 0.16667,
+ "width": 0.28125,
+ "height": 0.82083
+ },
+ "timestamp": "00:08:26.6311250"
+ },
+ ```
+- `featuredclothing.frames.map` - this folder contains images of the best frames that the featured clothing appeared in, corresponding to the `fileName` property in each instance in `featuredclothing.map.json`.
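As an illustrative sketch only, the following Python snippet reads the downloaded artifact and prints the featured clothing instances in rank order. It assumes `featuredclothing.map.json` sits at the root of the .zip file and exposes the `instances` array shown in the excerpt above.

```python
import json
import zipfile

def list_featured_clothing(artifact_path: str = "featuredclothing.zip") -> None:
    """Print featured clothing instances in rank order (id 1 = most important).
    Assumes the map file is at the zip root and contains an 'instances' array."""
    with zipfile.ZipFile(artifact_path) as archive:
        with archive.open("featuredclothing.map.json") as fh:
            data = json.load(fh)
    for instance in sorted(data.get("instances", []), key=lambda i: i["id"]):
        print(f"rank {instance['id']}: {instance['fileName']} "
              f"at {instance['timestamp']} (confidence {instance['confidence']})")

list_featured_clothing()
```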
+
+## Limitations and assumptions
+
+It's important to note the limitations of featured clothing to avoid or mitigate the effects of false detections of images with low quality or low relevancy.
+
+- A precondition for featured clothing is that the person wearing the clothes can be found in the observed people insight.
+- If the face of a person wearing the featured clothing wasn't detected, the results won't include the face's bounding box.
+- If a person in a video wears more than one outfit, the algorithm selects its best outfit as a single featured clothing image.
+- Regarding pose, the tracks are optimized to handle observed people who most often appear front-facing.
+- Wrong detections may occur when people are overlapping.
+- Frames containing blurred people are more prone to low quality results.
+
+For more information, see the [limitations of observed people](observed-people-tracing.md#limitations-and-assumptions).
+
+## Next steps
+
+- [Trace observed people in a video](observed-people-tracing.md)
+- [People's detected clothing](detected-clothing.md)
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
In order to upload a video from a URL, change your code to send nu
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null); ```
-## June 2022 release updates
+## July 2022 release updates
+
+### Featured clothing insight (preview)
+
+You can now view the featured clothing of an observed person when indexing a video using Azure Video Indexer advanced video settings. With the new featured clothing insight, you can enable more targeted ads placement.
+
+For details on how featured clothing images are ranked and how to view this insight, see [observed people featured clothing](observed-people-featured-clothing.md).
+
+## June 2022
### Create Video Indexer blade improvements in Azure portal
Azure Video Indexer introduces source languages support for STT (speech-to-text)
### Matched person detection capability
-When indexing a video through our advanced video settings, you can view the new matched person detection capability. If there are people observed in your media file, you can now view the specific person who matched each of them through the media player.
+When indexing a video with Azure Video Indexer advanced video settings, you can view the new matched person detection capability. If there are people observed in your media file, you can now view the specific person who matched each of them through the media player.
## November 2021
For more information go to [create an Azure Video Indexer account](https://techc
### People's clothing detection
-When indexing a video through the advanced video settings, you can view the new **People's clothing detection** capability. If there are people detected in your media file, you can now view the clothing type they are wearing through the media player.
+When indexing a video with Azure Video Indexer advanced video settings, you can view the new people's clothing detection capability. If there are people detected in your media file, you can now view the clothing type they are wearing through the media player.
### Face bounding box (preview)
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
When indexing with an API and the response status is OK, you get a detailed JSON
[!INCLUDE [insights](./includes/insights.md)]
-This article examines the Azure Video Indexer output (JSON content). For information about what features and insights are available to you, see [Azure Video Indexer insights](video-indexer-overview.md#video-insights).
+This article examines the Azure Video Indexer output (JSON content). For information about what features and insights are available to you, see [Azure Video Indexer insights](video-indexer-overview.md#video-models).
> [!NOTE] > All the access tokens in Azure Video Indexer expire in one hour.
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Title: What is Azure Video Indexer? description: This article gives an overview of the Azure Video Indexer service. Previously updated : 06/09/2022 Last updated : 08/18/2022
To start extracting insights with Azure Video Indexer, see the [how can I get st
## Compliance, Privacy and Security
-As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
-
-Before uploading any video/image to Azure Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
-
-To learn about compliance, privacy and security in Azure Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
+> [!Important]
+> Before you continue with Azure Video Indexer, read [Compliance, privacy and security](compliance-privacy-security.md).
## What can I do with Azure Video Indexer?
Azure Video Indexer's insights can be applied to many scenarios, among them:
* Content moderation: Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content.
* Recommendations: Video insights can be used to improve user engagement by highlighting the relevant video moments to users. By tagging each video with additional metadata, you can recommend to users the most relevant videos and highlight the parts of the video that will match their needs.
-## Features
+## Video/audio AI features
+
+The following list shows the insights you can retrieve from your videos using Azure Video Indexer video and audio AI features (models).
-The following list shows the insights you can retrieve from your videos using Azure Video Indexer video and audio models:
+Unless specified otherwise, a model is generally available.
-### Video insights
+### Video models
* **Face detection**: Detects and groups faces appearing in the video.
-* **Celebrity identification**: Azure Video Indexer automatically identifies over 1 million celebrities, like world leaders, actors, actresses, athletes, researchers, business, and tech leaders across the globe. The data about these celebrities can also be found on various websites (IMDB, Wikipedia, and so on).
-* **Account-based face identification**: Azure Video Indexer trains a model for a specific account. It then recognizes faces in the video based on the trained model. For more information, see [Customize a Person model from the Azure Video Indexer website](customize-person-model-with-website.md) and [Customize a Person model with the Azure Video Indexer API](customize-person-model-with-api.md).
-* **Thumbnail extraction for faces** ("best face"): Automatically identifies the best captured face in each group of faces (based on quality, size, and frontal position) and extracts it as an image asset.
-* **Visual text recognition** (OCR): Extracts text that's visually displayed in the video.
+* **Celebrity identification**: Identifies over 1 million celebrities, like world leaders, actors, artists, athletes, researchers, business, and tech leaders across the globe. The data about these celebrities can also be found on various websites (IMDB, Wikipedia, and so on).
+* **Account-based face identification**: Trains a model for a specific account. It then recognizes faces in the video based on the trained model. For more information, see [Customize a Person model from the Azure Video Indexer website](customize-person-model-with-website.md) and [Customize a Person model with the Azure Video Indexer API](customize-person-model-with-api.md).
+* **Thumbnail extraction for faces**: Identifies the best captured face in each group of faces (based on quality, size, and frontal position) and extracts it as an image asset.
+* **Optical character recognition (OCR)**: Extracts text from images like pictures, street signs and products in media files to create insights.
* **Visual content moderation**: Detects adult and/or racy visuals.
* **Labels identification**: Identifies visual objects and actions displayed.
* **Scene segmentation**: Determines when a scene changes in video based on visual cues. A scene depicts a single event and is composed of a series of consecutive shots, which are semantically related.
The following list shows the insights you can retrieve from your videos using Az
* **Black frame detection**: Identifies black frames presented in the video.
* **Keyframe extraction**: Detects stable keyframes in a video.
* **Rolling credits**: Identifies the beginning and end of the rolling credits at the end of TV shows and movies.
-* **Animated characters detection** (preview): Detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). For more information, see [Animated character detection](animated-characters-recognition.md).
-* **Editorial shot type detection**: Tagging shots based on their type (like wide shot, medium shot, close up, extreme close up, two shot, multiple people, outdoor and indoor, and so on). For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).
-* **Observed people tracking** (preview): detects observed people in videos and provides information such as the location of the person in the video frame (using bounding boxes) and the exact timestamp (start, end) and confidence when a person appears. For more information, see [Trace observed people in a video](observed-people-tracing.md).
- * **People's detected clothing**: detects the clothing types of people appearing in the video and provides information such as long or short sleeves, long or short pants and skirt or dress. The detected clothing is associated with the people wearing it and the exact timestamp (start,end) along with a confidence level for the detection are provided.
-* **Matched person**: matches between people that were observed in the video with the corresponding faces detected. The matching between the observed people and the faces contain a confidence level.
-
-### Audio insights
-
-* **Audio transcription**: Converts speech to text over 50 languages and allows extensions. Supported languages include English US, English United Kingdom, English Australia, Spanish, Spanish(Mexico), French, French(Canada), German, Italian, Mandarin Chinese, Chinese (Cantonese, Traditional), Chinese (Simplified), Japanese, Russian, Portuguese, Hindi, Czech, Dutch, Polish, Danish, Norwegian, Finish, Swedish, Thai, Turkish, Korean, Arabic(Egypt), Arabic(Syrian Arab Republic), Arabic(Israel), Arabic(Iraq), Arabic(Jordan), Arabic(Kuwait), Arabic(Lebanon), Arabic(Oman), Arabic(Qatar), Arabic(Saudi Arabia), Arabic(United Arab Emirates), Arabic(Palestinian Authority) and Arabic Modern Standard (Bahrain) .
-* **Automatic language detection**: Automatically identifies the dominant spoken language. Supported languages include English, Spanish, French, German, Italian, Mandarin Chinese, Japanese, Russian, and Portuguese. If the language can't be identified with confidence, Azure Video Indexer assumes the spoken language is English. For more information, see [Language identification model](language-identification-model.md).
-* **Multi-language speech identification and transcription**: Automatically identifies the spoken language in different segments from audio. It sends each segment of the media file to be transcribed and then combines the transcription back to one unified transcription. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
+* **Animated characters detection**: Detects, groups, and recognizes characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). For more information, see [Animated character detection](animated-characters-recognition.md).
+* **Editorial shot type detection**: Tags shots based on their type (like wide shot, medium shot, close up, extreme close up, two shot, multiple people, outdoor and indoor, and so on). For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).
+* **Observed people tracking** (preview): Detects observed people in videos and provides information such as the location of the person in the video frame (using bounding boxes) and the exact timestamp (start, end) and confidence when a person appears. For more information, see [Trace observed people in a video](observed-people-tracing.md).
+ * **People's detected clothing** (preview): Detects the clothing types of people appearing in the video and provides information such as long or short sleeves, long or short pants and skirt or dress. The detected clothing is associated with the people wearing it and the exact timestamp (start, end) along with a confidence level for the detection are provided. For more information, see [detected clothing](detected-clothing.md).
+ * **Featured clothing** (preview): captures featured clothing images appearing in a video. You can improve your targeted ads by using the featured clothing insight. For information on how the featured clothing images are ranked and how to get the insights, see [featured clothing](observed-people-featured-clothing.md).
+* **Matched person** (preview): Matches people that were observed in the video with the corresponding faces detected. The matching between the observed people and the faces contain a confidence level.
+
+### Audio models
+
+* **Audio transcription**: Converts speech to text over 50 languages and allows extensions. For a comprehensive list of language support by service, see [language support](language-support.md).
+* **Automatic language detection**: Identifies the dominant spoken language. For a comprehensive list of language support by service, see [language support](language-support.md). If the language can't be identified with confidence, Azure Video Indexer assumes the spoken language is English. For more information, see [Language identification model](language-identification-model.md).
+* **Multi-language speech identification and transcription**: Identifies the spoken language in different segments from audio. It sends each segment of the media file to be transcribed and then combines the transcription back to one unified transcription. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
* **Closed captioning**: Creates closed captioning in three formats: VTT, TTML, SRT.
* **Two channel processing**: Auto detects separate transcript and merges to single timeline.
* **Noise reduction**: Clears up telephony audio or noisy recordings (based on Skype filters).
The following list shows the insights you can retrieve from your videos using Az
> [!NOTE] > The full set of events is available only when you choose **Advanced Audio Analysis** when uploading a file, in upload preset. By default, only silence is detected.
-### Audio and video insights (multi-channels)
+### Audio and video models (multi-channels)
When indexing by one channel, partial results for those models will be available.
* **Keywords extraction**: Extracts keywords from speech and visual text.
* **Named entities extraction**: Extracts brands, locations, and people from speech and visual text via natural language processing (NLP).
-* **Topic inference**: Extracts topics based on various keywords (i.e. keywords 'Stock Exchange', 'Wall Street' will produce the topic 'Economics'). The model uses three different ontologies ([IPTC](https://iptc.org/standards/media-topics/), [Wikipedia](https://www.wikipedia.org/) and the Video Indexer hierarchical topic ontology). The model uses transcription (spoken words), OCR content (visual text), and celebrities recognized in the video using the Video Indexer facial recognition model.
+* **Topic inference**: Extracts topics based on various keywords (that is, keywords 'Stock Exchange', 'Wall Street' will produce the topic 'Economics'). The model uses three different ontologies ([IPTC](https://iptc.org/standards/media-topics/), [Wikipedia](https://www.wikipedia.org/) and the Video Indexer hierarchical topic ontology). The model uses transcription (spoken words), OCR content (visual text), and celebrities recognized in the video using the Video Indexer facial recognition model.
* **Artifacts**: Extracts a rich set of "next level of details" artifacts for each of the models.
* **Sentiment analysis**: Identifies positive, negative, and neutral sentiments from speech and visual text.
When indexing by one channel, partial result for those models will be available.
Before creating a new account, review [Account types](accounts-overview.md).
+### Supported browsers
+
+The following list shows the supported browsers that you can use for the Azure Video Indexer website and for your apps that embed the widgets. The list also shows the minimum supported browser version:
+
+- Edge, version: 16
+- Firefox, version: 54
+- Chrome, version: 58
+- Safari, version: 11
+- Opera, version: 44
+- Opera Mobile, version: 59
+- Android Browser, version: 81
+- Samsung Browser, version: 7
+- Chrome for Android, version: 87
+- Firefox for Android, version: 83
+ ### Start using Azure Video Indexer You can access Azure Video Indexer capabilities in three ways:
You can access Azure Video Indexer capabilities in three ways:
For more information, see [Embed visual widgets in your application](video-indexer-embed-widgets.md). If you're using the website, the insights are added as metadata and are visible in the portal. If you're using APIs, the insights are available as a JSON file.
-## Supported browsers
-
-The following list shows the supported browsers that you can use for the Azure Video Indexer website and for your apps that embed the widgets. The list also shows the minimum supported browser version:
-
-- Edge, version: 16
-- Firefox, version: 54
-- Chrome, version: 58
-- Safari, version: 11
-- Opera, version: 44
-- Opera Mobile, version: 59
-- Android Browser, version: 81
-- Samsung Browser, version: 7
-- Chrome for Android, version: 87
-- Firefox for Android, version: 83
-
## Next steps
You're ready to get started with Azure Video Indexer. For more information, see the following articles:
azure-vmware Concepts Design Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-design-public-internet-access.md
Your requirements for security controls, visibility, capacity, and operations dr
## Internet Service hosted in Azure
-There are multiple ways to generate a default route in Azure and send it towards your Azure VMware Solution private cloud or on-premise. The options are as follows:
+There are multiple ways to generate a default route in Azure and send it towards your Azure VMware Solution private cloud or on-premises. The options are as follows:
- An Azure firewall in a Virtual WAN Hub. - A third-party Network Virtual Appliance in a Virtual WAN Hub Spoke Virtual Network.
azure-vmware Concepts Network Design Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md
Title: Concepts - Network design considerations
description: Learn about network design considerations for Azure VMware Solution Previously updated : 03/04/2022 Last updated : 08/19/2022 # Azure VMware Solution network design considerations
To reach vCenter Server and NSX Manager, more specific routes from on-prem need
Now that you've covered Azure VMware Solution network design considerations, you might consider learning more. - [Network interconnectivity concepts - Azure VMware Solution](concepts-networking.md)
+- [Plan the Azure VMware Solution deployment](plan-private-cloud-deployment.md)
+- [Networking planning checklist for Azure VMware Solution](tutorial-network-checklist.md)
## Recommended content
azure-vmware Connect Multiple Private Clouds Same Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/connect-multiple-private-clouds-same-region.md
The Azure VMware Solution Interconnect feature is available in all regions.
>[!NOTE] >The **AVS interconnect** feature doesn't check for overlapping IP space the way native Azure vNet peering does before creating the peering. Therefore, it's your responsibility to ensure that there isn't overlap between the private clouds. >
->In Azure VMware Solution environments, it's possible to configure non-routed, overlapping IP deployments on NSX segments that aren't routed to Azure. These don't cause issues with the AVS Interconnect feature, as it only routes between the NSX T0 on each private cloud.
+>In Azure VMware Solution environments, it's possible to configure non-routed, overlapping IP deployments on NSX segments that aren't routed to Azure. These don't cause issues with the AVS Interconnect feature, as it only routes between the NSX-T Data Center T0 gateway on each private cloud.
## Add connection between private clouds
azure-vmware Deploy Disaster Recovery Using Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-vmware-hcx.md
This guide covers the following replication scenarios:
1. After selecting **Test**, the recovery operation begins.
-1. When finished, you can check the new VM in the Azure VMware Solution private cloud vCenter.
+1. When finished, you can check the new VM in the Azure VMware Solution private cloud vCenter Server.
:::image type="content" source="./media/disaster-recovery-virtual-machines/verify-test-recovery.png" alt-text="Screenshot showing the check recovery operation summary." border="true" lightbox="./media/disaster-recovery-virtual-machines/verify-test-recovery.png":::
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
Zerto is a disaster recovery solution designed to minimize downtime of VMs shoul
| Component | Description |
| | |
| **Zerto Virtual Manager (ZVM)** | Management application for Zerto implemented as a Windows service installed on a Windows VM. The private cloud administrator installs and manages the Windows VM. The ZVM enables Day 0 and Day 2 disaster recovery configuration. For example, configuring primary and disaster recovery sites, protecting VMs, recovering VMs, and so on. However, it doesn't handle the replication data of the protected customer VMs. |
-| **Virtual Replication appliance (vRA)** | Linux VM to handle data replication from the source to the replication target. One instance of vRA is installed per ESXi host, delivering a true scale architecture that grows and shrinks along with the private cloud's hosts. The VRA manages data replication to and from protected VMs to its local or remote target, storing the data in the journal. |
+| **Virtual Replication appliance (vRA)** | Linux VM to handle data replication from the source to the replication target. One instance of vRA is installed per ESXi host, delivering a true scale architecture that grows and shrinks along with the private cloud's hosts. The vRA manages data replication to and from protected VMs to its local or remote target, storing the data in the journal. |
| **Zerto ESXi host driver** | Installed on each VMware ESXi host configured for Zerto disaster recovery. The host driver intercepts a vSphere VM's IO and sends the replication data to the chosen vRA for that host. The vRA is then responsible for replicating the VM's data to one or more disaster recovery targets. |
| **Zerto Cloud Appliance (ZCA)** | Windows VM only used when Zerto is used to recover vSphere VMs as Azure Native IaaS VMs. The ZCA is composed of:<ul><li>**ZVM:** A Windows service that hosts the UI and integrates with the native APIs of Azure for management and orchestration.</li><li>**VRA:** A Windows service that replicates the data from or to Azure.</li></ul>The ZCA integrates natively with the platform it's deployed on, allowing you to use Azure Blob storage within a storage account on Microsoft Azure. As a result, it ensures the most cost-efficient deployment on each of these platforms. |
| **Virtual Protection Group (VPG)** | Logical group of VMs created on the ZVM. Zerto allows configuring disaster recovery, Backup, and Mobility policies on a VPG. This mechanism enables a consistent set of policies to be applied to a group of VMs. |
To learn more about Zerto platform architecture, see the [Zerto Platform Archite
You can use Zerto with Azure VMware Solution for the following three scenarios.
-### Scenario 1: On-premises VMware to Azure VMware Solution disaster recovery
+### Scenario 1: On-premises VMware vSphere to Azure VMware Solution disaster recovery
In this scenario, the primary site is an on-premises vSphere-based environment. The disaster recovery site is an Azure VMware Solution private cloud.
azure-vmware Disable Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disable-internet-access.md
Last updated 05/12/2022
# Disable internet access or enable a default route
-In this article, you'll learn how to disable Internet access or enable a default route for your Azure VMware Solution private cloud. There are multiple ways to set up a default route. You can use a Virtual WAN hub, Network Virtual Appliance in a Virtual Network, or use a default route from on-premise. If you don't set up a default route, there will be no Internet access to your Azure VMware Solution private cloud.
+In this article, you'll learn how to disable Internet access or enable a default route for your Azure VMware Solution private cloud. There are multiple ways to set up a default route. You can use a Virtual WAN hub, Network Virtual Appliance in a Virtual Network, or use a default route from on-premises. If you don't set up a default route, there will be no Internet access to your Azure VMware Solution private cloud.
With a default route setup, you can achieve the following tasks: - Disable Internet access to your Azure VMware Solution private cloud.
azure-vmware Enable Vmware Cds With Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-vmware-cds-with-azure.md
+
+ Title: Enable VMware Cloud Director service with Azure VMware Solution (Public Preview)
+description: This article explains how to enable VMware Cloud Director service with Azure VMware Solution so enterprise customers can use Azure VMware Solution private clouds and their underlying resources for virtual datacenters.
+ Last updated : 08/09/2022++
+# Enable VMware Cloud Director service with Azure VMware Solution (Preview)
+
+[VMware Cloud Director Service (CDs)](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/getting-started-with-vmware-cloud-director-service/GUID-149EF3CD-700A-4B9F-B58B-8EA5776A7A92.html) with Azure VMware Solution enables enterprise customers to use APIs or the Cloud Director services portal to self-service provision and manage virtual datacenters through multi-tenancy with reduced time and complexity.
+
+In this article, you'll learn how to enable VMware Cloud Director service (CDs) with Azure VMware Solution, so enterprise customers can use Azure VMware Solution private clouds and their underlying resources for virtual datacenters.
+
+>[!IMPORTANT]
+> Cloud Director service (CDs) is now available to use with Azure VMware Solution under the Enterprise Agreement (EA) model only. It's not suitable for MSPs or hosters to resell Azure VMware Solution capacity to customers at this point. For more information, see [Azure Service terms](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/EAEAS#GeneralServiceTerms).
+
+## Reference architecture
+The following diagram shows the typical architecture for Cloud Director service with Azure VMware Solution and how they're connected. Communications to Azure VMware Solution endpoints from Cloud Director service are supported by an SSL reverse proxy.
++
+VMware Cloud Director supports multi-tenancy by using organizations. A single organization can have multiple organization virtual data centers (VDCs). Each organization's VDC can have its own dedicated Tier-1 router (Edge Gateway), which is further connected with the provider's managed shared Tier-0 router.
+
+## Connect tenants and their organization virtual datacenters to Azure vNet based resources
+
+To provide access to vNet-based Azure resources, each tenant can have its own dedicated Azure vNet with an Azure VPN gateway. A site-to-site VPN is established between the customer organization VDC and the Azure vNet. To achieve this connectivity, the provider provides a public IP to the organization VDC. The organization VDC's administrator can configure IPsec VPN connectivity from the Cloud Director service portal.
++
+As shown in the diagram above, organization 01 has two organization virtual datacenters (VDCs): VDC1 and VDC2. Each organization VDC has its own Azure vNet connected with its respective organization VDC Edge gateway through IPsec VPN.
+Providers provide public IP addresses to the organization VDC Edge gateway for IPsec VPN configuration. An organization VDC Edge gateway firewall blocks all traffic by default; specific allow rules need to be added on the organization Edge gateway firewall.
+
+Organization VDCs can be part of a single organization and still provide isolation between them. For example, VM1 hosted in organization VDC1 cannot ping Azure VM JSVM2 for tenant2.
+
+### Prerequisites
+- The organization VDC is configured with an Edge gateway and has public IPs assigned to it by the provider to establish the IPsec VPN.
+- Tenants have created a routed organization VDC network in the tenant's virtual datacenter.
+- Test VM1 and VM2 are created in organization VDC1 and VDC2, respectively. Both VMs are connected to the routed orgVDC network in their respective VDCs.
+- Have a dedicated [Azure vNET](tutorial-configure-networking.md#create-a-vnet-manually) configured for each tenant. For this example, we created Tenant1-vNet and Tenant2-vNet for tenant1 and tenant2 respectively.
+- Create an [Azure Virtual network gateway](tutorial-configure-networking.md#create-a-virtual-network-gateway) for vNETs created earlier.
+- Deploy Azure VMs JSVM1 and JSVM2 for tenant1 and tenant2 for test purposes.
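If you prefer to script the test VM deployment, here's a minimal Azure CLI sketch. The resource group, vNet, subnet, image alias, and size are placeholder assumptions; substitute values that match your tenant environment.

```bash
# Sketch: deploy a test VM (for example, JSVM1) into tenant1's vNet.
# Resource group, subnet name, image alias, and size are placeholders.
az vm create \
  --resource-group tenant1-rg \
  --name JSVM1 \
  --vnet-name Tenant1-vNet \
  --subnet default \
  --image Ubuntu2204 \
  --size Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys
```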
+
+> [!Note]
+> CDS supports a policy-based VPN. Azure VPN gateway configures a route-based VPN by default; to configure a policy-based VPN, the policy-based traffic selector needs to be enabled.
+
+### Configure Azure vNet
+Create the following components in the tenant's dedicated Azure vNet to establish an IPsec tunnel connection with the tenant's orgVDC Edge gateway.
+- Azure Virtual network gateway
+- Local network gateway.
+- Add IPSEC connection on VPN gateway.
+- Edit connection configuration to enable policy-based VPN.
+
+### Create Azure virtual network gateway
+To create an Azure virtual network gateway, see the [create-a-virtual-network-gateway tutorial](tutorial-configure-networking.md#create-a-virtual-network-gateway).
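If you'd rather script this step, the following Azure CLI sketch creates a route-based VPN gateway for a tenant vNet. All names are placeholders, and the vNet is assumed to already contain a `GatewaySubnet`; adjust the SKU to your requirements.

```bash
# Sketch: route-based VPN gateway for tenant1's vNet (assumes the vNet already has a GatewaySubnet).
az network public-ip create \
  --resource-group tenant1-rg \
  --name tenant1-vpngw-pip \
  --sku Standard

az network vnet-gateway create \
  --resource-group tenant1-rg \
  --name tenant1-vpngw \
  --vnet Tenant1-vNet \
  --public-ip-address tenant1-vpngw-pip \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --no-wait
```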
+
+### Create local network gateway
+1. Log in to the Azure portal, select **Local network gateway** from the marketplace, and then select **Create**.
+1. The local network gateway represents the remote site details. Therefore, provide the tenant1 orgVDC public IP address and orgVDC network details to create the local endpoint for tenant1.
+1. Under **Instance details**, select **Endpoint** as IP address.
+1. Add the IP address (the public IP address from the tenant's orgVDC Edge gateway).
+1. Under **Address space** add **Tenants Org VDC Network**.
+1. Repeat steps 1-5 to create a local network gateway for tenant 2.
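The same local network gateways can also be created with the Azure CLI. This is a sketch with placeholder names; the gateway IP address and address prefixes come from the tenant's orgVDC Edge gateway and routed orgVDC network.

```bash
# Sketch: local network gateway representing tenant1's orgVDC side of the tunnel (values are placeholders).
az network local-gateway create \
  --resource-group tenant1-rg \
  --name tenant1-orgvdc-lng \
  --gateway-ip-address <orgVDC-edge-public-IP> \
  --local-address-prefixes <orgVDC-network-CIDR>
```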
+
+### Create IPSEC connection on VPN gateway
+1. Select the tenant1 VPN gateway (created earlier) and then select **Connection** (in the left pane) to add a new IPsec connection with the tenant1 orgVDC Edge gateway.
+1. Enter the following details.
+
+ | **Name** | **Connection** |
+ |:- | :--|
+ | Connection Type | Site to Site |
+ | VPN Gateway | TenantΓÇÖs VPN Gateway |
+ | Local Network Gateway | TenantΓÇÖs Local Gateway |
+ | PSK | Shared Key (provide a password) |
+ | IKE Protocol | IKEV2 (ORG-VDC is using IKEv2) |
+
+1. Select **OK** to create the connection.
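As an alternative to the portal, a minimal Azure CLI sketch for the same site-to-site connection is shown below. Names are placeholders; route-based connections use IKEv2 by default, matching the table above.

```bash
# Sketch: site-to-site IPsec connection between tenant1's VPN gateway and the orgVDC local network gateway.
az network vpn-connection create \
  --resource-group tenant1-rg \
  --name tenant1-orgvdc-connection \
  --vnet-gateway1 tenant1-vpngw \
  --local-gateway2 tenant1-orgvdc-lng \
  --shared-key "<pre-shared-key>"
```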
+
+### Configure IPsec Connection
+Cloud Director service supports a policy-based VPN. Azure VPN gateway configures a route-based VPN by default; to configure a policy-based VPN, the policy-based traffic selector needs to be enabled.
+
+1. Select the connection you created earlier and then select **configuration** to view the default settings.
+1. **IPSEC/IKE Policy**
+1. **Enable policy-based traffic selector**
+1. Modify all other parameters to match what you have in OrgVDC.
+ >[!Note]
+ > Both the source and destination of the tunnel should have identical settings for IKE, SA, DPD, and so on.
+1. Select **Save**.
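These steps can also be scripted. The following Azure CLI sketch pins a custom IPsec/IKE policy (example values only; match them to the orgVDC security profile) and then enables policy-based traffic selectors. The policy is added first because Azure requires a custom IPsec/IKE policy on a connection that uses policy-based traffic selectors.

```bash
# Sketch: custom IPsec/IKE policy plus policy-based traffic selectors (example values; align with the orgVDC profile).
az network vpn-connection ipsec-policy add \
  --resource-group tenant1-rg \
  --connection-name tenant1-orgvdc-connection \
  --ike-encryption AES256 --ike-integrity SHA256 --dh-group DHGroup14 \
  --ipsec-encryption AES256 --ipsec-integrity SHA256 --pfs-group PFS14 \
  --sa-lifetime 27000 --sa-max-size 102400000

az network vpn-connection update \
  --resource-group tenant1-rg \
  --name tenant1-orgvdc-connection \
  --use-policy-based-traffic-selectors true
```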
+
+### Configure VPN on organization VDC Edge router
+1. Log in to the organization's CDS tenant portal and select the tenant's edge gateway.
+1. Select **IPSEC VPN** option under **Services** and then select **New**.
+1. Under general settings, provide a **Name** and select the desired security profile. Ensure that the security profile settings (IKE, Tunnel, and DPD configuration) are the same on both sides of the IPsec tunnel.
+1. Modify the Azure VPN gateway to match the security profile, if necessary. You can also customize the security profile from the CDS tenant portal.
+
+ >[!Note]
+ > The VPN tunnel won't be established if these settings are mismatched.
+1. Under **Peer Authentication Mode**, provide the same pre-shared key that is used at the Azure VPN gateway.
+1. Under **Endpoint configuration**, add the organization's public IP and network details in the local endpoint configuration and the Azure vNet details in the remote endpoint configuration.
+1. Under **Ready to complete**, review applied configuration.
+1. Select **Finish** to apply configuration.
+
+### Apply firewall configuration
+Organization VDC Edge router firewall denies traffic by default. You'll need to apply specific rules to enable connectivity. Use the following steps to apply firewall rules.
+
+1. Add IP set in CDS portal
+ 1. Log in to the Edge router, then select **IP SETS** under the **Security** tab in the left pane.
+ 1. Select **New** to create IP sets.
+ 1. Enter the **Name** and **IP address** of the test VM deployed in the orgVDC.
+ 1. Create another IP set for the Azure vNet for this tenant.
+2. Apply firewall rules on ORG VDC Edge router.
+ 1. Under **Edge gateway**, select **Edge gateway** and then select **firewall** under **services**.
+ 1. Select **Edit rules**.
+ 1. Select **NEW ON TOP** and enter rule name.
+ 1. Add **source** and **destination** details. Use the IP sets created earlier as the source and destination.
+ 1. Under **Action**, select **Allow**.
+ 1. Select **Save** to apply configuration.
+3. Verify tunnel status
+ 1. Under **Edge gateway**, select **Service**, then select **IPSEC VPN**.
+ 1. Select **View statistics**.
+ The status of the tunnel should show **UP**.
+4. Verify IPsec connection
+ 1. Log in to the Azure VM deployed in the tenant's vNet and ping the tenant's test VM IP address in the tenant's orgVDC.
+ For example, ping VM1 from JSVM1. Similarly, you should be able to ping VM2 from JSVM2.
+You can verify isolation between the tenants' Azure vNets. Tenant1's VM1 won't be able to ping tenant2's Azure VM JSVM2 in tenant2's Azure vNet.
+
+## Connect Tenant workload to public Internet
+
+- Tenants can use a public IP to do SNAT configuration to enable Internet access for VMs hosted in the organization VDC. To achieve this connectivity, the provider can provide a public IP to the organization VDC.
+- Each organization VDC can be created with a dedicated T1 router (created by the provider) with reserved public and private IPs for NAT configuration. Tenants can use public IP SNAT configuration to enable Internet access for VMs hosted in the organization VDC.
+- The orgVDC administrator can create a routed orgVDC network connected to their orgVDC Edge gateway to provide Internet access.
+- The orgVDC administrator can configure SNAT for a specific VM or use a network CIDR to provide public connectivity.
+- The orgVDC Edge has a default DENY ALL firewall rule. Organization administrators will need to open appropriate ports to allow access through the firewall by adding a new firewall rule. Virtual machines configured on such an orgVDC network used in the SNAT configuration should be able to access the Internet.
+
+### Prerequisites
+1. A public IP is assigned to the organization VDC Edge router.
+ To verify, log in to the organization's VDC. Under **Networking** > **Edges**, select **Edge Gateway**, then select **IP allocations** under **IP management**. You should see a range of assigned IP addresses there.
+2. Create a routed organization VDC network. (Connect the orgVDC network to the edge gateway that has the public IP address assigned.)
+
+### Apply SNAT configuration
+1. Log in to Organization VDC. Navigate to your Edge gateway and then select **NAT** under **Services**.
+2. Select **New** to add new SNAT rule.
+3. Provide **Name** and select **Interface type** as SNAT.
+4. Under **External IP**, enter a public IP address from the public IP pool assigned to your orgVDC Edge router.
+5. Under **Internal IP**, enter the IP address for your test VM.
+ This IP address is one of the orgVDC network IPs assigned to the VM.
+6. **State** should be enabled.
+7. Under **Priority**, select a higher number.
+ For example, 4096.
+8. Select **Save** to save the configuration.
+
+### Apply firewall rule
+1. Log in to the organization VDC and navigate to **Edge Gateway**, then select **IP set** under **Security**.
+2. Create an IP set. Provide the IP address of your VM (you can also use CIDR). Select **Save**.
+3. Under **services**, select **Firewall**, then select **Edit rules**.
+4. Select **New ON TOP** and create a firewall rule to allow desired port and destination.
+1. Select the **IPset** you created earlier as the source. Under **Action**, select **Allow**.
+1. Select **Keep** to save the configuration.
+1. Log in to your test VM and ping your destination address to verify outbound connectivity.
+
+## Migrate workloads to Cloud Director Service on Azure VMware Solution
+
+VMware Cloud Director Availability can be used to migrate VMware Cloud Director workloads into Cloud Director service on Azure VMware Solution. Enterprise customers can drive self-serve, one-way warm migration from the on-premises Cloud Director Availability vSphere plugin, or they can run the Cloud Director Availability plugin from the provider-managed Cloud Director instance and move workloads into Azure VMware Solution.
+
+For more information about VMware Cloud Director Availability, see [VMware Cloud Director Availability | Disaster Recovery & Migration](https://www.vmware.com/products/cloud-director-availability.html)
+
+## FAQs
+**Question**: What are the supported Azure regions for the VMware Cloud Director service?
+
+**Answer**: This offering is supported in all Azure regions where Azure VMware Solution is available, except for Brazil South and South Africa. Ensure that the region you wish to connect to Cloud Director service is within a 150-millisecond round-trip latency of the Cloud Director service.
+
+## Next steps
+[VMware Cloud Director Service Documentation](https://docs.vmware.com/en/VMware-Cloud-Director-service/https://docsupdatetracker.net/index.html)
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
CSPs must use [Microsoft Partner Center](https://partner.microsoft.com) to enabl
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from Partner Center. >[!IMPORTANT]
->Azure VMware Solution service does not provide a multi-tenancy required. Hosting partners requiring it are not supported.
+>Azure VMware Solution service does not provide multi-tenancy support. Hosting partners requiring it are not supported.
1. Configure the CSP Azure plan:
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Most customers have an existing on-premises deployment of vRealize Operations to
:::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-1.png" alt-text="Diagram showing the on-premises vRealize Operations managing Azure VMware Solution deployment." border="false":::
-To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter Server and NSX-T Manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premise.
+To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter Server and NSX-T Manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premises.
> [!TIP] > Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) for step-by-step guide for installing vRealize Operations Manager.
baremetal-infrastructure About The Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/about-the-public-preview.md
In particular, this article highlights Public Preview features.
## Unlock the benefits of Azure * Establish a consistent hybrid deployment strategy
-* Operate seamlessly with on-premise Nutanix Clusters in Azure
+* Operate seamlessly with on-premises Nutanix Clusters in Azure
* Build and scale without constraints * Invent for today and be prepared for tomorrow with NC2 on Azure
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
client-request-id: 00000000-0000-0000-0000-000000000000
} ```
+## Create a pool without public IP addresses using ARM template
+
+You can use this [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/batch-pool-no-public-ip/) to create a pool without public IP addresses using an Azure Resource Manager (ARM) template.
+
+The following resources will be deployed by the template:
+
+- Azure Batch account with IP firewall configured to block public network access to Batch node management endpoint
+- Virtual network with network security group to block internet outbound access
+- Private endpoint to access Batch node management endpoint of the account
+- DNS integration for the private endpoint using private DNS zone linked to the virtual network
+- Batch pool deployed in the virtual network and without public IP addresses
+
+If you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.batch%2Fbatch-pool-no-public-ip%2Fazuredeploy.json)
+
+> [!NOTE]
+> If the private endpoint deployment failed due to invalid groupId "nodeManagement", check that the region is in the supported list and that you've already opted in with [Simplified compute node communication](simplified-compute-node-communication.md). Choose the right region and opt in your Batch account, then retry the deployment.
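If you script your deployments, the same quickstart template can be deployed with the Azure CLI. This is a sketch; the resource group name and location are placeholders, and you may need to pass template parameters with `--parameters`.

```bash
# Sketch: deploy the quickstart template referenced above with the Azure CLI (placeholder resource group and location).
az group create --name batch-no-public-ip-rg --location eastus

az deployment group create \
  --resource-group batch-no-public-ip-rg \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.batch/batch-pool-no-public-ip/azuredeploy.json
```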
+ ## Outbound access to the internet In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). Note that NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
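For example, one way to give the pool's delegated subnet outbound-only internet access is a NAT gateway. The following Azure CLI sketch uses placeholder names for the resource group, virtual network, and subnet created for the pool.

```bash
# Sketch: attach a NAT gateway to the pool's subnet for outbound-only internet access (placeholder names).
az network public-ip create \
  --resource-group batch-no-public-ip-rg \
  --name batch-nat-pip \
  --sku Standard

az network nat gateway create \
  --resource-group batch-no-public-ip-rg \
  --name batch-nat-gateway \
  --public-ip-addresses batch-nat-pip \
  --idle-timeout 4

az network vnet subnet update \
  --resource-group batch-no-public-ip-rg \
  --vnet-name batch-vnet \
  --name batch-subnet \
  --nat-gateway batch-nat-gateway
```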
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
After you've created infrastructure for your new SAP system using *Azure Center for SAP solutions (ACSS)*, you need to install the SAP software.
-In this how-to guide, you'll learn how to upload and install all the required components in your Azure account. You can either [run a pre-installation script to automate the upload process](#upload-components-with-script) or [manually upload the components](#upload-components-manually). Then, you can [run the software installation wizard](#install-software).
+In this how-to guide, you'll learn how to upload and install all the required components in your Azure account. You can either [run a pre-installation script to automate the upload process](#option-1-upload-software-components-with-script) or [manually upload the components](#option-2-upload-software-components-manually). Then, you can [run the software installation wizard](#install-software).
## Prerequisites
In this how-to guide, you'll learn how to upload and install all the required co
## Supported software
-ACSS supports the following SAP software version: **S/4HANA 1909 SPS 03, SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00**.
+ACSS supports the following SAP software versions: **S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00**.
The following operating system (OS) software versions are compatible with the supported SAP software versions:

| Publisher | Version | Generation SKU | Patch version name | Supported SAP Software Version |
| --- | --- | --- | --- | --- |
-| Red Hat | RHEL-SAP-HA (8.2 HA Pack) | 82sapha-gen2 | 8.2.2021091202 | S/4HANA 1909 SPS 03,SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00 |
-| Red Hat | RHEL-SAP-HA (8.4 HA Pack) | 84sapha-gen2 | 8.4.2021091202 | S/4HANA 1909 SPS 03,SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00 |
-| SUSE | sles-sap-15-sp3 | gen2 | 2022.01.26 | S/4HANA 1909 SPS 03,SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00 |
+| Red Hat | RHEL-SAP-HA (8.2 HA Pack) | 82sapha-gen2 | 8.2.2021091202 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 |
+| Red Hat | RHEL-SAP-HA (8.4 HA Pack) | 84sapha-gen2 | 8.4.2021091202 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 |
+| SUSE | sles-sap-15-sp3 | gen2 | 2022.01.26 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 |
| SUSE | sles-sap-12-sp4 | gen2 | 2022.02.01 | S/4HANA 1909 SPS 03 |

## Required components
The following components are necessary for the SAP installation:
- SAP software installation media (part of the `sapbits` container described later in this article) - All essential SAP packages (*SWPM*, *SAPCAR*, etc.)
- - SAP software (for example, *S/4HANA 1909 SPS 03, S/4 HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00*)
-- Supporting software packages for the installation process
+ - SAP software (for example, *S/4HANA 2021 ISS 00*)
+- Supporting software packages for the installation process (part of the `deployervmpackages` container described later in this article)
  - `pip3` version `pip-21.3.1.tar.gz`
  - `wheel` version 0.37.1
  - `jq` version 1.6
The following components are necessary for the SAP installation:
- The SAP URL to download the software (`url`) - Template or INI files, which are stack XML files required to run the SAP packages.
-## Upload components with script
+## Option 1: Upload software components with script
You can use the following method to upload the SAP components to your Azure account using scripts. Then, you can [run the software installation wizard](#install-software) to install the SAP software.
-You also can [upload the components manually](#upload-components-manually) instead.
+You also can [upload the components manually](#option-2-upload-software-components-manually) instead.
### Set up storage account
-Before you can download the software, set up an Azure Storage account for the downloads.
+Before you can download the software, set up an Azure Storage account for storing the software.
1. [Create an Azure Storage account through the Azure portal](../storage/common/storage-account-create.md). Make sure to create the storage account in the same subscription as your SAP system infrastructure.
Before you can download the software, set up an Azure Storage account for the do
1. On the **New container** pane, for **Name**, enter `sapbits`. 1. Select **Create**.
-
+
+ 1. Grant the ACSS application *Azure SAP Workloads Management* **Storage Blob Data Reader** and **Reader and Data Access** role access on this storage account.
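If you prefer the Azure CLI over the portal for this setup, here's a minimal sketch. The resource group and storage account names are placeholders, and `<acss-app-object-id>` stands for the object ID of the *Azure SAP Workloads Management* application in your tenant.

```bash
# Sketch: storage account, sapbits container, and role assignments for the ACSS application (placeholder names).
az storage account create \
  --resource-group sap-rg \
  --name <storage-account-name> \
  --sku Standard_LRS

az storage container create \
  --account-name <storage-account-name> \
  --name sapbits \
  --auth-mode login

storage_id=$(az storage account show --resource-group sap-rg --name <storage-account-name> --query id --output tsv)

az role assignment create --assignee <acss-app-object-id> --role "Storage Blob Data Reader" --scope "$storage_id"
az role assignment create --assignee <acss-app-object-id> --role "Reader and Data Access" --scope "$storage_id"
```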
+
+### Download supporting software
+After setting up your Azure Storage account, you need an Ubuntu VM to run scripts that download the software components.
1. Create an Ubuntu 20.04 VM in Azure.
1. Sign in to the VM.
Before you can download the software, set up an Azure Storage account for the do
1. When asked if you have a storage account, enter `Y`.
-1. When asked for the base path to the SAP storage account, enter the container path. To find the container path:
+1. When asked for the base path to the software storage account, enter the container path. To find the container path:
1. Find the storage account that you created in the Azure portal.
Before you can download the software, set up an Azure Storage account for the do
1. Copy the **Key** value.
-1. In the Azure portal, find the container named `sapbits` in the storage account that you created.
+1. Once the script completes successfully, in the Azure portal, find the container named `sapbits` in the storage account that you created.
1. Make sure the deployer VM packages are now visible in `sapbits`.
Before you can download the software, set up an Azure Storage account for the do
### Download SAP media
-After setting up your Azure Storage account, you can download the SAP installation media required to install the SAP software.
+You can download the SAP installation media required to install the SAP software, using a script as described in this section.
-1. Sign in to the Ubuntu VM that you created in the [previous section](#set-up-storage-account).
+1. Sign in to the Ubuntu VM that you created in the [previous section](#download-supporting-software).
-1. Install ansible 2.9.27 on the ubuntu VM
+1. Install Ansible 2.9.27 on the Ubuntu VM:
```bash
sudo pip3 install ansible==2.9.27
```
After setting up your Azure Storage account, you can download the SAP installati
    - For `<username>`, use your SAP username.
    - For `<password>`, use your SAP password.
    - For `<bom_base_name>`, use the SAP version you want to install: **_S41909SPS03_v0011ms_**, **_S42020SPS03_v0003ms_**, or **_S4HANA_2021_ISS_v0001ms_**.
- - For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the [previous section](#set-up-storage-account).
- - For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the [previous section](#set-up-storage-account).
+ - For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the [previous section](#download-supporting-software).
+ - For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the [previous section](#download-supporting-software).
The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`
After setting up your Azure Storage account, you can download the SAP installati
Now, you can [install the SAP software](#install-software) using the installation wizard.
-## Upload components manually
+## Option 2: Upload software components manually
-You can use the following method to download and upload the SAP components to your Azure account manually. Then, you can [run the software installation wizard](#install-software) to install the SAP software.
+You can use the following method to download and upload the SAP components to your Azure storage account manually. Then, you can [run the software installation wizard](#install-software) to install the SAP software.
-You also can [run scripts to automate this process](#upload-components-with-script) instead.
+You also can [run scripts to automate this process](#option-1-upload-software-components-with-script) instead.
-1. Create a new Azure storage account for the SAP components.
+1. Create a new Azure storage account for storing the software components.
1. Grant the ACSS application *Azure SAP Workloads Management* **Storage Blob Data Reader** and **Reader and Data Access** role access to this storage account. 1. Create a container within the storage account. You can choose any container name; for example, **sapbits**.
-1. Create two folders within the contained, named **deployervmpackages** and **sapfiles**.
+1. Create two folders within the container, named **deployervmpackages** and **sapfiles**.
> [!WARNING] > Don't change the folder name structure for any steps in this process. Otherwise, the installation process can fail. 1. Download the supporting software packages listed in the [required components list](#required-components) to your local computer.
You also can [run scripts to automate this process](#upload-components-with-scri
1. **SUM20SP14_latest**
- - For S/4 HANA 2020 SPS 03, make following folders
+ - For S/4HANA 2020 SPS 03, make following folders
1. **HANA_2_00_063_v0001ms** 1. **S42020SPS03_v0003ms** 1. **SWPM20SP12_latest** 1. **SUM20SP14_latest**
- - For SAP S/4HANA 2021 ISS 00, make following folders
+ - For S/4HANA 2021 ISS 00, make following folders
1. **HANA_2_00_063_v0001ms** 1. **S4HANA_2021_ISS_v0001ms** 1. **SWPM20SP12_latest**
You also can [run scripts to automate this process](#upload-components-with-scri
1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
- - For S/4 HANA 2020 SPS 03,
+ - For S/4HANA 2020 SPS 03,
1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml) 1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml) 1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
- - For SAP S/4HANA 2021 ISS 00,
+ - For S/4HANA 2021 ISS 00,
1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml) 1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml) 1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
You also can [run scripts to automate this process](#upload-components-with-scri
1. [S41909SPS03_v0011ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scsha-inifile-param.j2) 1. [S41909SPS03_v0011ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-web-inifile-param.j2)
- - For S/4 HANA 2020 SPS 03,
+ - For S/4HANA 2020 SPS 03,
1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_055_v1_install.rsp.j2) 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_install.rsp.j2) 1. [S42020SPS03_v0003ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-app-inifile-param.j2)
You also can [run scripts to automate this process](#upload-components-with-scri
1. [S42020SPS03_v0003ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scs-inifile-param.j2) 1. [S42020SPS03_v0003ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scsha-inifile-param.j2)
- - For SAP S/4HANA 2021 ISS 00,
+ - For S/4HANA 2021 ISS 00,
1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_055_v1_install.rsp.j2) 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_install.rsp.j2) 1. [NW_ABAP_ASCS_S4HANA2021.CORE.HDB.AB](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_ASCS_S4HANA2021.CORE.HDB.ABAP_Distributed.params)
You also can [run scripts to automate this process](#upload-components-with-scri
1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
- - For S/4 HANA 2020 SPS 03,
+ - For S/4HANA 2020 SPS 03,
1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml) 1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml) 1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
- - For SAP S/4HANA 2021 ISS 00,
+ - For S/4HANA 2021 ISS 00,
1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml) 1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml) 1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
To install the SAP software on Azure, use the ACSS installation wizard.
1. For **Software version**, use **SAP S/4HANA 1909 SPS03**, **SAP S/4HANA 2020 SPS 03**, or **SAP S/4HANA 2021 ISS 00**. Only the versions that are supported by the OS version previously used to deploy the infrastructure are available for selection.
- 1. For **BOM directory location**, select **Browse** and find the path to your BOM file. For example, `/sapfiles/boms/S41909SPS03_v0010ms.yaml`.
+ 1. For **BOM directory location**, select **Browse** and find the path to your BOM file. For example, `https://<your-storage-account>.blob.core.windows.net/sapbits/sapfiles/boms/S41909SPS03_v0010ms.yaml`.
- 1. For **SAP FQDN:**, provide a fully qualified domain name (FQDN) for your SAP system. For example, `sap.contoso.com`.
+ 1. For **SAP FQDN**, provide a fully qualified domain name (FQDN) for your SAP system. For example, `sap.contoso.com`.
- 1. For High Availability (HA) systems only, enter the client identifier for the SONITH Fencing Agent service principal for **Fencing client ID**.
+ 1. For High Availability (HA) systems only, enter the client identifier for the STONITH Fencing Agent service principal for **Fencing client ID**.
- 1. For High Availability (HA) systems only, enter the password for the SONITH Fencing Agent service principal for **Fencing client password**.
+ 1. For High Availability (HA) systems only, enter the password for the STONITH Fencing Agent service principal for **Fencing client password**.
1. For **SSH private key**, provide the SSH private key that you created or selected as part of your infrastructure deployment.
To install the SAP software on Azure, use the ACSS installation wizard.
1. Wait for the installation to complete. The process takes approximately three hours. You can see the progress, along with estimated times for each step, in the wizard.
-1. After the installation completes, sign in with your SAP system credentials.
+1. After the installation completes, sign in with your SAP system credentials. Refer to [this section](manage-virtual-instance.md) to find the SAP system and HANA DB credentials for the newly installed system.
## Limitations
The following are known limitations and issues.
You can install a maximum of 10 Application Servers, excluding the Primary Application Server.
-### SAP package versions
+### SAP package version changes
When SAP changes the version of packages for a component in the BOM, you might encounter problems with the automated installation shell script. It's recommended to download your SAP installation media as soon as possible to avoid issues.
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.cn/translator/text/batch
"inputs": [ { "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+ "sourceUrl": "https://<storage_acount>.blob.core.chinacloudapi.cn/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
}, "targets": [ {
- "targetUrl": "https://my.blob.core.windows.net/target-zh-Hans?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "targetUrl": "https://<storage_acount>.blob.core.chinacloudapi.cn/target-zh-Hans?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
"language": "zh-Hans" } ]
cognitive-services Previous Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/previous-updates.md
This article contains a list of previously recorded updates for Azure Cognitive
### Text Analytics for health updates
-* A new model version `2021-05-15` for the `/health` endpoint and on-premise container which provides
+* A new model version `2021-05-15` for the `/health` endpoint and on-premises container which provides
* 5 new entity types: `ALLERGEN`, `CONDITION_SCALE`, `COURSE`, `EXPRESSION` and `MUTATION_TYPE`, * 14 new relation types, * Assertion detection expanded for new entity types and
This article contains a list of previously recorded updates for Azure Cognitive
* This parameter lets you specify select PII entities, as well as those not supported by default for the input language. * Updated client libraries, which include asynchronous and text analytics for health operations.
-* A new model version `2021-03-01` for text analytics for health API and on-premise container which provides:
+* A new model version `2021-03-01` for text analytics for health API and on-premises container which provides:
* A rename of the `Gene` entity type to `GeneOrProtein`. * A new `Date` entity type. * Assertion detection which replaces negation detection.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
The service provides access to many different models. Models describe a family o
## Naming convention
-Azure OpenAI's models follow a standard naming convention: `{task}-{model name}-{version #}`. For example, our most powerful natural language model is called `text-davinci-001` and a codex series model would look like `code-cushman-001`.
+Azure OpenAI's models follow a standard naming convention: `{task}-{model name}-{version #}`. For example, our most powerful natural language model is called `text-davinci-001` and a Codex series model would look like `code-cushman-001`.
> Older versions of the GPT-3 models are available as `ada`, `babbage`, `curie`, `davinci` and do not follow these conventions. These models are primarily intended to be used for fine-tuning and search.
The Codex models are descendants of our base GPT-3 models that can understand an
They're most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
-Currently we only offer one codex model: `code-cushman-001`.
+Currently we only offer one Codex model: `code-cushman-001`.
## Embeddings Models
cognitive-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/fine-tuning.md
The Azure OpenAI Service lets you tailor our models to your personal datasets us
- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) - Access granted to service in the desired Azure subscription. This service is currently invite only. You can fill out a new use case request here: <https://aka.ms/oai/access>. Please open an issue on this repo to contact us if you have an issue-- The following python libraries: os, requests, json
+- The following Python libraries: os, requests, json
- An Azure OpenAI Service resource with a model deployed. If you don't have a resource/model the process is documented in our [resource deployment guide](../how-to/create-resource.md) ## Fine-tuning workflow
The fine-tuning workflow requires the following steps:
Your training data set consists of input and output examples for how you would like the model to perform.
-The training dataset you use **must** be a JSON lines (JSONL) document where each line is a prompt-completion pair and a single example. The OpenAI python CLI provides a useful data preparation tool to easily convert your data into this file format.
+The training dataset you use **must** be a JSON lines (JSONL) document where each line is a prompt-completion pair and a single example. The OpenAI Python CLI provides a useful data preparation tool to easily convert your data into this file format.
Here's an example of the format:
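As a rough illustration of that shape, each line of the JSONL file is a standalone JSON object with a `prompt` and a `completion` field. The prompt and completion text below are placeholders, not values taken from this article:

```json
{"prompt": "Classify the sentiment of this review: The food was great ->", "completion": " positive"}
{"prompt": "Classify the sentiment of this review: The service was slow ->", "completion": " negative"}
```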
Once you've prepared your dataset, you can upload your files to the service. We
For large data files, we recommend you import from Azure Blob. Large files can become unstable when uploaded through multipart forms because the requests are atomic and can't be retried or resumed.
-The following python code will create a sample dataset and show how to upload a file and print the returned ID. Make sure to save the IDs returned as you'll need them for the fine-tuning training job creation.
+The following Python code will create a sample dataset and show how to upload a file and print the returned ID. Make sure to save the IDs returned as you'll need them for the fine-tuning training job creation.
> [!IMPORTANT] > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
while train_status not in ["succeeded", "failed"] or valid_status not in ["succe
After you've uploaded the training and (optional) validation file you wish to use for your training job, you're ready to start the process. You can use the [Models API](../reference.md#models) to identify which models are fine-tunable.
-Once you have the model, you want to fine-tune you need to create a job. The following python code shows an example of how to create a new job:
+Once you have the model you want to fine-tune, you need to create a job. The following Python code shows an example of how to create a new job:
```python create_args = {
az cognitiveservices account deployment create
## Use a fine-tuned model
-Once your model has been deployed, you can use it like any other model. Reference the deployment name you specified in the previous step. You can use either the REST API or python SDK and can continue to use all the other Completions parameters like temperature, frequency_penalty, presence_penalty, etc., on these requests to fine-tuned models.
+Once your model has been deployed, you can use it like any other model. Reference the deployment name you specified in the previous step. You can use either the REST API or Python SDK and can continue to use all the other Completions parameters like temperature, frequency_penalty, presence_penalty, etc., on these requests to fine-tuned models.
```python print('Sending a test completion job')
That said, tweaking the hyperparameters used for fine-tuning can often lead to a
## Next Steps - Explore the full REST API Reference documentation to learn more about all the fine-tuning capabilities. You can find the [full REST documentation here](../reference.md).-- Explore more of the [python SDK operations here](https://github.com/openai/openai-python/blob/main/examples/azure/finetuning.ipynb).
+- Explore more of the [Python SDK operations here](https://github.com/openai/openai-python/blob/main/examples/azure/finetuning.ipynb).
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
In the following sections, you'll use the Azure CLI to assign roles, and obtain
- An Azure subscription - Access granted to service in the desired Azure subscription. - Azure CLI. [Installation Guide](/cli/azure/install-azure-cli)-- The following python libraries: os, requests, json
+- The following Python libraries: os, requests, json
## Sign into the Azure CLI
cognitive-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/work-with-code.md
When you show Codex the database schema, it's able to make an informed guess abo
### Specify the programming language
-Codex understands dozens of different programming languages. Many share similar conventions for comments, functions and other programming syntax. By specifying the language and what version in a comment, Codex is better able to provide a completion for what you want. That said, Codex is fairly flexible with style and syntax. Here's an example for R and python.
+Codex understands dozens of different programming languages. Many share similar conventions for comments, functions and other programming syntax. By specifying the language and what version in a comment, Codex is better able to provide a completion for what you want. That said, Codex is fairly flexible with style and syntax. Here's an example for R and Python.
```r # R language
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
The Azure OpenAI service provides two methods for authentication. You can use e
The service APIs are versioned using the ```api-version``` query parameter. All versions follow the YYYY-MM-DD date structure, with a -preview suffix for a preview service. For example: ```
-POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2021-11-01-preview
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-06-01-preview
```
-We currently have the following versions available: ```2022-03-01-preview``` and ```2021-11-01-preview```
+We currently have the following versions available: ```2022-06-01-preview```
## Completions With the Completions operation, the model will generate one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position.
GET https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_i
**Supported versions** -- `2022-03-01-preview`
+- `2022-06-01-preview`
#### Example request
communication-services Define Media Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/media-composition/define-media-composition.md
In this section you learned how to:
You may also want to: - Learn about [media composition concept](../../concepts/voice-video-calling/media-comp.md)
+ - Get started on [media composition](./get-started-media-composition.md)
+ <!-- -->
communication-services Get Started Media Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/media-composition/get-started-media-composition.md
+
+ Title: Azure Communication Services Quickstart - Create and manage a media composition
+
+description: In this quickstart, you'll learn how to create a media composition within your Azure Communication Services resource.
+++++ Last updated : 08/18/2022++++
+# Quickstart: Create and manage a media composition resource
++
+Get started with Azure Communication Services by using the Communication Services C# Media Composition SDK to compose and stream videos.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- The latest version of [.NET Core SDK](https://dotnet.microsoft.com/download/dotnet-core) for your operating system.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).
+
+### Prerequisite check
+
+- In a terminal or command window, run the `dotnet` command to check that the .NET SDK is installed.
+
+## Set up the application environment
+
+To set up an environment for using media composition, take the steps in the following sections.
+
+### Create a new C# application
+
+1. In a console window, such as cmd, PowerShell, or Bash, use the `dotnet new` command to create a new console app with the name `MediaCompositionQuickstart`. This command creates a simple "Hello World" C# project with a single source file, **Program.cs**.
+
+ ```console
+ dotnet new console -o MediaCompositionQuickstart
+ ```
+
+1. Change your directory to the newly created app folder and use the `dotnet build` command to compile your application.
+
+ ```console
+ cd MediaCompositionQuickstart
+ dotnet build
+ ```
+
+### Install the package
+
+1. While still in the application directory, install the Azure Communication Services MediaComposition SDK for .NET package by using the following command.
+
+ ```console
+ dotnet add package Azure.Communication.MediaComposition --version 1.0.0-beta.1
+ ```
+
+1. Add a `using` directive to the top of **Program.cs** to include the `Azure.Communication` namespace.
+
+ ```csharp
+ using System;
+ using System.Collections.Generic;
+
+ using Azure;
+ using Azure.Communication;
+ using Azure.Communication.MediaComposition;
+ ```
+
+## Authenticate the media composition client
+
+Open **Program.cs** in a text editor and replace the body of the `Main` method with code to initialize a `MediaCompositionClient` with your connection string. The `MediaCompositionClient` will be used to create and manage media composition objects.
+
+ You can find your Communication Services resource connection string in the Azure portal. For more information on connection strings, see [this page](../create-communication-resource.md#access-your-connection-strings-and-service-endpoints).
++
+```csharp
+// Find your Communication Services resource in the Azure portal
+var connectionString = "<connection_string>";
+var mediaCompositionClient = new MediaCompositionClient(connectionString);
+```
+
+## Create a media composition
+
+Create a new media composition by defining the `inputs`, `layout`, `outputs`, and a user-friendly `mediaCompositionId`. For more information on how to define the values, see [this page](./define-media-composition.md). These values are passed into the `CreateAsync` function exposed on the client. The code snippet below shows an example of defining a simple two-by-two grid layout:
+
+```csharp
+var layout = new GridLayout(
+ rows: 2,
+ columns: 2,
+ inputIds: new List<List<string>>
+ {
+ new List<string> { "Jill", "Jack" }, new List<string> { "Jane", "Jerry" }
+ })
+ {
+ Resolution = new(1920, 1080)
+ };
+
+var inputs = new Dictionary<string, MediaInput>()
+{
+ ["Jill"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("f3ba9014-6dca-4456-8ec0-fa03cfa2b7b7"),
+ call: "teamsMeeting")
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ },
+ ["Jack"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("fa4337b5-f13a-41c5-a34f-f2aa46699b61"),
+ call: "teamsMeeting")
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ },
+ ["Jane"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("2dd69470-dc25-49cf-b5c3-f562f08bf3b2"),
+ call: "teamsMeeting"
+ )
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ },
+ ["Jerry"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("30e29fde-ac1c-448f-bb34-0f3448d5a677"),
+ call: "teamsMeeting")
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ },
+ ["teamsMeeting"] = new TeamsMeetingInput(teamsJoinUrl: "https://teamsJoinUrl")
+};
+
+var outputs = new Dictionary<string, MediaOutput>()
+{
+ ["acsGroupCall"] = new GroupCallOutput("d12d2277-ffec-4e22-9979-8c0d8c13d193")
+};
+
+var mediaCompositionId = "twoByTwoGridLayout";
+var response = await mediaCompositionClient.CreateAsync(mediaCompositionId, layout, inputs, outputs);
+```
+
+You can use the `mediaCompositionId` to view or update the properties of a media composition object. Therefore, it is important to keep track of and persist the `mediaCompositionId` in your storage medium of choice.
+
+## Get properties of an existing media composition
+
+Retrieve the details of an existing media composition by referencing the `mediaCompositionId`.
+
+```C# Snippet:GetMediaComposition
+var gridMediaComposition = await mediaCompositionClient.GetAsync(mediaCompositionId);
+```
+
+## Updates
+
+Updating the `layout` of a media composition can happen on-the-fly as the media composition is running. However, `input` updates while the media composition is running are not supported. The media composition will need to be stopped and restarted before any changes to the inputs are applied.
+
+### Update layout
+
+To update the `layout`, pass in the new `layout` object and the `mediaCompositionId`. For example, the following snippet updates the grid layout to an auto-grid layout:
+
+```csharp
+var layout = new AutoGridLayout(new List<string>() { "teamsMeeting" })
+{
+ Resolution = new(720, 480),
+};
+
+var response = await mediaCompositionClient.UpdateLayoutAsync(mediaCompositionId, layout);
+```
+
+### Upsert or remove inputs
+
+To upsert inputs in the media composition object, use the `UpsertInputsAsync` function exposed in the client.
+
+```csharp
+var inputsToUpsert = new Dictionary<string, MediaInput>()
+{
+ ["James"] = new ParticipantInput
+ (
+ id: new MicrosoftTeamsUserIdentifier("f3ba9014-6dca-4456-8ec0-fa03cfa2b70p"),
+ call: "teamsMeeting"
+ )
+ {
+ PlaceholderImageUri = "https://imageendpoint"
+ }
+};
+
+var response = await mediaCompositionClient.UpsertInputsAsync(mediaCompositionId, inputsToUpsert);
+```
+
+You can also explicitly remove inputs from the list.
+```csharp
+var inputIdsToRemove = new List<string>()
+{
+ "Jane", "Jerry"
+};
+var response = await mediaCompositionClient.RemoveInputsAsync(mediaCompositionId, inputIdsToRemove);
+```
+
+### Upsert or remove outputs
+
+To upsert outputs, you can use the `UpsertOutputsAsync` function from the client.
+```csharp
+var outputsToUpsert = new Dictionary<string, MediaOutput>()
+{
+ ["youtube"] = new RtmpOutput("key", new(1920, 1080), "rtmp://a.rtmp.youtube.com/live2")
+};
+
+var response = await mediaCompositionClient.UpsertOutputsAsync(mediaCompositionId, outputsToUpsert);
+```
+
+You can remove outputs by following the snippet below:
+```csharp
+var outputIdsToRemove = new List<string>()
+{
+ "acsGroupCall"
+};
+var response = await mediaCompositionClient.RemoveOutputsAsync(mediaCompositionId, outputIdsToRemove);
+```
+
+## Start running a media composition
+
+After defining the media composition with the correct properties, you can start composing the media by calling the `StartAsync` function using the `mediaCompositionId`.
+
+```csharp
+var compositionStreamState = await mediaCompositionClient.StartAsync(mediaCompositionId);
+```
+
+## Stop running a media composition
+
+To stop a media composition, call the `StopAsync` function using the `mediaCompositionId`.
+
+```csharp
+var compositionStreamState = await mediaCompositionClient.StopAsync(mediaCompositionId);
+```
+
+## Delete a media composition
+
+If you wish to delete a media composition, you may issue a delete request:
+```csharp
+await mediaCompositionClient.DeleteAsync(mediaCompositionId);
+```
+
+## Object model
+
+The table below lists the main properties of media composition objects:
+
+| Name | Description |
+|--|-|
+| `mediaCompositionId` | Media composition identifier that can be a user-friendly string. Must be unique within a Communication Services resource. |
+| `layout` | Specifies how the media sources will be composed into a single frame. |
+| `inputs` | Defines which media sources will be used in the layout composition. |
+| `outputs` | Defines where to send the composed streams to.|
+
+## Next steps
+
+In this section you learned how to:
+> [!div class="checklist"]
+> - Create a new media composition
+> - Get the properties of a media composition
+> - Update layout
+> - Upsert and remove inputs
+> - Upsert and remove outputs
+> - Start and stop a media composition
+> - Delete a media composition
+
+You may also want to:
+ - Learn about [media composition concept](../../concepts/voice-video-calling/media-comp.md)
+ - Learn about [how to define a media composition](./define-media-composition.md)
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
Title: Connect to SQL databases
+ Title: Connect to SQL databases from workflows
description: Connect to SQL databases from workflows in Azure Logic Apps. ms.suite: integration Previously updated : 06/08/2022 Last updated : 08/19/2022 tags: connectors
-# Connect to a SQL database from workflows in Azure Logic Apps
+# Connect to an SQL database from workflows in Azure Logic Apps
This article shows how to access your SQL database from a workflow in Azure Logic Apps with the SQL Server connector. You can then create automated workflows that run when triggered by events in your SQL database or in other systems and run actions to manage your SQL data and resources.
The SQL Server connector has different versions, based on [logic app type and ho
| Logic app | Environment | Connector version | |--|-|-|
-| **Consumption** | Multi-tenant Azure Logic Apps | [Managed connector - Standard class](managed.md). For operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). |
-| **Consumption** | Integration service environment (ISE) | [Managed connector - Standard class](managed.md) and ISE version. For operations, managed connector limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). For ISE-versioned limits, review the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits), not the managed connector's message limits. |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). For managed connector operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql/). <br><br>The built-in connector differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. This action can directly access Azure virtual networks with a connection string and doesn't need the on-premises data gateway. <br><br>For built-in connector operations, limits, and other information, review the [SQL Server built-in connector reference](#built-in-connector-operations). |
-||||
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector (Standard class). For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql). <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector (Standard class) and ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Standard class) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version doesn't have triggers. You can use either an SQL managed connector trigger or a different trigger. <br><br>- The built-in version connects directly to an SQL server and database requiring only a connection string. You don't need the on-premises data gateway. <br><br>- The built-in version can directly access Azure virtual networks. You don't need the on-premises data gateway.<br><br>For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql/) <br>- [SQL Server built-in connector reference](#built-in-connector-operations) section later in this article <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
## Limitations
For more information, review the [SQL Server managed connector reference](/conne
The SQL Server connector requires that your tables contain data so that the connector operations can return results when called. For example, if you use Azure SQL Database, you can use the included sample databases to try the SQL Server connector operations.
-* The information required to create a SQL database connection, such as your SQL server and database names. If you're using Windows Authentication or SQL Server Authentication to authenticate access, you also need your user name and password. You can usually find this information in the connection string.
+* The information required to create an SQL database connection, such as your SQL server and database name. If you're using Windows Authentication or SQL Server Authentication to authenticate access, you also need your user name and password. You can usually find this information in the connection string.
- > [!NOTE]
+ > [!IMPORTANT]
>
- > If you use a SQL Server connection string that you copied directly from the Azure portal,
+ > If you use an SQL Server connection string that you copied directly from the Azure portal,
> you have to manually add your password to the connection string.
- * For a SQL database in Azure, the connection string has the following format:
+ * For an SQL database in Azure, the connection string has the following format:
`Server=tcp:{your-server-name}.database.windows.net,1433;Initial Catalog={your-database-name};Persist Security Info=False;User ID={your-user-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;`
For more information, review the [SQL Server managed connector reference](/conne
* Standard logic app workflow
- You can use the SQL Server built-in connector, which requires a connection string. To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
+ You can use the SQL Server built-in connector, which requires a connection string. The built-in connector currently supports only SQL Server Authentication. You can adjust connection pooling by specifying parameters in the connection string. For more information, review [Connection Pooling](/dotnet/framework/data/adonet/connection-pooling).
+
+ To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
-For other connector requirements, review [SQL Server managed connector reference](/connectors/sql/).
+ For other connector requirements, review [SQL Server managed connector reference](/connectors/sql/).
<a name="add-sql-trigger"></a>
The following steps use the Azure portal, but with the appropriate Azure Logic A
* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
-In this example, the logic app workflow starts with the [Recurrence trigger](../connectors/connectors-native-recurrence.md), and calls an action that gets a row from a SQL database.
+In this example, the logic app workflow starts with the [Recurrence trigger](../connectors/connectors-native-recurrence.md), and calls an action that gets a row from an SQL database.
### [Consumption](#tab/consumption)
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. Under the **Choose an operation** search box, select either of the following options:
- * **Built-in** when you want to use SQL Server built-in actions such as **Execute Query**
+ * **Built-in** when you want to use SQL Server [built-in actions](#built-in-connector-operations) such as **Execute query**
![Screenshot showing the Azure portal, workflow designer for Standard logic app, and designer search box with "Built-in" selected underneath.](./media/connectors-create-api-sqlazure/select-built-in-category-standard.png)
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. From the actions list, select the SQL Server action that you want.
- * Built-in actions
+ * [Built-in actions](#built-in-connector-operations)
- This example selects the only available built-in action named **Execute Query**.
+ This example selects the built-in action named **Execute query**.
- ![Screenshot showing the designer search box with "sql server" and "Built-in" selected underneath with the "Execute Query" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-execute-query-action-standard.png)
+ ![Screenshot showing the designer search box with "sql server" and "Built-in" selected underneath with the "Execute query" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-execute-query-action-standard.png)
- * Managed actions
+ * [Managed actions](/connectors/sql/#actions)
This example selects the action named **Get row**, which gets a single record.
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. Provide the [information for your connection](#create-connection). When you're done, select **Create**.
-1. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want.
+1. Provide the information required by your selected action.
- In this example, the table name is **SalesLT.Customer**.
+ The following example continues with the managed action named **Get row**. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In this example, the table name is **SalesLT.Customer**. In the **Row id** property, enter the ID for the record that you want.
- ![Screenshot showing Standard workflow designer and "Get row" action with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-standard.png)
+ ![Screenshot showing Standard workflow designer and managed action "Get row" with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-standard.png)
- This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions. For example, such actions might create a file, include the fields from the returned row, and store the file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+ This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions. For example, such actions might create a file, include the fields from the returned row, and store the file in a cloud storage account. To learn about other available actions for this connector, review the [managed connector's reference page](/connectors/sql/).
1. When you're done, save your workflow.
In the connection information box, complete the following steps:
| **Service principal (Azure AD application)** | - Supported with the SQL Server managed connector. <br><br>- Requires an Azure AD application and service principal. For more information, see [Create an Azure AD application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). | | **Logic Apps Managed Identity** | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). | | [**Azure AD Integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires a valid managed identity in Azure Active Directory (Azure AD) that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](/azure/azure-sql/database/authentication-aad-overview) |
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector, SQL Server built-in connector, and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
The following examples show how the connection information box might appear if you select **Azure AD Integrated** authentication.
In the connection information box, complete the following steps:
| **Server name** | Yes | The address for your SQL server, for example, **Fabrikam-Azure-SQL.database.windows.net** | | **Database name** | Yes | The name for your SQL database, for example, **Fabrikam-Azure-SQL-DB** | | **Table name** | Yes | The table that you want to use, for example, **SalesLT.Customer** |
- ||||
> [!TIP]
+ >
> To provide your database and table information, you have these options: > > * Find this information in your database's connection string. For example, in the Azure portal, find and open your database. On the database menu, select either **Connection strings** or **Properties**, where you can find the following string:
In the connection information box, complete the following steps:
| Authentication | Description | |-|-|
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector, SQL Server built-in connector, and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
| [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Supported with the SQL Server managed connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid Windows user name and password to confirm your identity through your Windows account. <br><br>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication). |
- |||
1. Select or provide the following values for your SQL database:
In the connection information box, complete the following steps:
| **Password** | Yes | Your password for the SQL server and database | | **Subscription** | Yes, for Windows authentication | The Azure subscription for the data gateway resource that you previously created in Azure | | **Connection Gateway** | Yes, for Windows authentication | The name for the data gateway resource that you previously created in Azure <br><br><br><br>**Tip**: If your gateway doesn't appear in the list, check that you correctly [set up your gateway](../logic-apps/logic-apps-gateway-connection.md). |
- |||
> [!TIP] > You can find this information in your database's connection string:
When you call a stored procedure by using the SQL Server connector, the returned
> [!NOTE] >
- > If you get an error that Azure Logic Apps can't generate a schema,
- > check that your sample output's syntax is correctly formatted.
- > If you still can't generate the schema, in the **Schema** box,
- > manually enter the schema.
+ > If you get an error that Azure Logic Apps can't generate a schema, check that your
+ > sample output's syntax is correctly formatted. If you still can't generate the schema,
+ > in the **Schema** box, manually enter the schema.
1. When you're done, save your workflow. 1. To reference the JSON content properties, click inside the edit boxes where you want to reference those properties so that the dynamic content list appears. In the list, under the [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) heading, select the data tokens for the JSON content properties that you want.
+<a name="built-in-connector-app-settings"></a>
+
+## Built-in connector app settings
+
+In a Standard logic app resource, the SQL Server built-in connector includes app settings that control various thresholds for performance, throughput, capacity, and so on. For example, you can change the query timeout value from 30 seconds. For more information, review [Reference for app settings - local.settings.json](../logic-apps/edit-app-settings-host-settings.md#reference-local-settings-json).
+ <a name="built-in-connector-operations"></a>
-## Built-in connector operations
+## SQL built-in connector operations
+
+The SQL Server built-in connector is available only for Standard logic app workflows and provides the following actions, but no triggers:
+
+| Action | Description |
+|--|-|
+| [**Delete rows**](#delete-rows) | Deletes and returns the table rows that match the specified **Where condition** value. |
+| [**Execute query**](#execute-query) | Runs a query on an SQL database. |
+| [**Execute stored procedure**](#execute-stored-procedure) | Runs a stored procedure on an SQL database. |
+| [**Get rows**](#get-rows) | Gets the table rows that match the specified **Where condition** value. |
+| [**Get tables**](#get-tables) | Gets all the tables from the database. |
+| [**Insert row**](#insert-row) | Inserts a single row in the specified table. |
+| [**Update rows**](#update-rows) | Updates the specified columns in all the table rows that match the specified **Where condition** value using the **Set columns** column names and values. |
+
+<a name="delete-rows"></a>
+
+### Delete rows
+
+Operation ID: `deleteRows`
+
+Deletes and returns the table rows that match the specified **Where condition** value.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Table name** | `tableName` | True | String | The name for the table |
+| **Where condition** | `columnValuesForWhereCondition` | True | Object | This object contains the column names and corresponding values used for selecting the rows to delete. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*, which also lets you specify single or specific rows to delete. |
+
+#### Returns
+| Name | Type |
+|||
+| **Result** | An array object that returns all the deleted rows. Each row contains the column name and the corresponding deleted value. |
+| **Result Item** | An array object that returns one deleted row at a time. A **For each** loop is automatically added to your workflow to iterate through the array. Each row contains the column name and the corresponding deleted value. |
-### Actions
+*Example*
-The SQL Server built-in connector has a single action.
+The following example shows sample parameter values for the **Delete rows** action:
-#### Execute Query
+**Sample values**
+
+| Parameter | JSON name | Sample value |
+|--|--|--|
+| **Table name** | `tableName` | tableName1 |
+| **Where condition** | `columnValuesForWhereCondition` | Key-value pairs: <br><br>- <*columnName1*>, <*columnValue1*> <br><br>- <*columnName2*>, <*columnValue2*> |
+
+**Parameters in the action's underlying JSON definition**
+
+```json
+"parameters": {
+ "tableName": "tableName1",
+ "columnValuesForWhereCondition": {
+ "columnName1": "columnValue1",
+ "columnName2": "columnValue2"
+ }
+},
+```
+
+<a name="execute-query"></a>
+
+### Execute query
Operation ID: `executeQuery`
-Runs a query against a SQL database.
+Runs a query on an SQL database.
-##### Parameters
+#### Parameters
| Name | Key | Required | Type | Description | ||--|-||-|
-| **Query** | `query` | True | Dynamic | The body for your query |
-| **Query Parameters** | `queryParameters` | False | Objects | The parameters for your query |
-||||||
+| **Query** | `query` | True | Dynamic | The body for your SQL query |
+| **Query parameters** | `queryParameters` | False | Objects | The parameters for your query. <br><br>**Note**: If the query requires input parameters, you must provide these parameters. |
-##### Returns
+#### Returns
-The outputs from this operation are dynamic.
+| Name | Type |
+|||
+| **Result** | An array object that returns all the query results. Each row contains the column name and the corresponding value. |
+| **Result Item** | An array object that returns one query result at a time. A **For each** loop is automatically added to your workflow to iterate through the array. Each row contains the column name and the corresponding value. |
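For comparison with the other actions in this section, here's a sketch of how the **Execute query** parameters might appear in the action's underlying JSON definition. The query text and the `CustomerId` parameter are hypothetical, and the exact shape of `queryParameters` can vary with your query:

```json
"parameters": {
    "query": "SELECT * FROM SalesLT.Customer WHERE CustomerID = @CustomerId",
    "queryParameters": {
        "CustomerId": "1"
    }
},
```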
-## Built-in connector app settings
+<a name="execute-stored-procedure"></a>
+
+### Execute stored procedure
+
+Operation ID: `executeStoredProcedure`
+
+Runs a stored procedure on an SQL database.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Procedure name** | `storedProcedureName` | True | String | The name for your stored procedure |
+| **Parameters** | `storedProcedureParameters` | False | Dynamic | The parameters for your stored procedure. <br><br>**Note**: If the stored procedure requires input parameters, you must provide these parameters. |
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | An object that contains the result sets array, return code, and output parameters |
+| **Result Result Sets** | An object array that contains all the result sets from the stored procedure, which might return zero, one, or multiple result sets. |
+| **Result Return Code** | An integer that represents the status code from the stored procedure |
+| **Result Stored Procedure Parameters** | An object that contains the final values of the stored procedure's output and input-output parameters |
+| **Status Code** | The status code from the **Execute stored procedure** operation |
+
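Modeled on the other examples in this section, the following sketch shows how the **Execute stored procedure** parameters might appear in the action's underlying JSON definition. The procedure name `GetCustomerOrders` and its parameter are hypothetical:

```json
"parameters": {
    "storedProcedureName": "GetCustomerOrders",
    "storedProcedureParameters": {
        "CustomerId": "1"
    }
},
```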
+<a name="get-rows"></a>
+
+### Get rows
+
+Operation ID: `getRows`
+
+Gets the table rows that match the specified **Where condition** value.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Table name** | `tableName` | True | String | The name for the table |
+| **Where condition** | `columnValuesForWhereCondition` | False | Dynamic | This object contains the column names and corresponding values used for selecting the rows to get. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*, which also lets you specify single or specific rows to get. |
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | An array object that returns all the row results. |
+| **Result Item** | An array object that returns one row result at a time. A **For each** loop is automatically added to your workflow to iterate through the array. |
+
+*Example*
+
+The following example shows sample parameter values for the **Get rows** action:
+
+**Sample values**
+
+| Parameter | JSON name | Sample value |
+|--|--|--|
+| **Table name** | `tableName` | tableName1 |
+| **Where condition** | `columnValuesForWhereCondition` | Key-value pairs: <br><br>- <*columnName1*>, <*columnValue1*> <br><br>- <*columnName2*>, <*columnValue2*> |
+
+**Parameters in the action's underlying JSON definition**
+
+```json
+"parameters": {
+ "tableName": "tableName1",
+ "columnValuesForWhereCondition": {
+ "columnName1": "columnValue1",
+ "columnName2": "columnValue2"
+ }
+},
+```
+
+<a name="get-tables"></a>
+
+### Get tables
+
+Operation ID: `getTables`
+
+Gets a list of all the tables in the database.
+
+#### Parameters
+
+None.
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | An array object that contains the full names and display names for all tables in the database. |
+| **Result Display Name** | An array object that contains the display name for each table in the database. A **For each** loop is automatically added to your workflow to iterate through the array. |
+| **Result Full Name** | An array object that contains the full name for each table in the database. A **For each** loop is automatically added to your workflow to iterate through the array. |
+| **Result Item** | An array object that returns the full name and display name one at time for each table. A **For each** loop is automatically added to your workflow to iterate through the array. |
+
+<a name="insert-row"></a>
+
+### Insert row
+
+Operation ID: `insertRow`
+
+Inserts a single row in the specified table.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Table name** | `tableName` | True | String | The name for the table |
+| **Set columns** | `setColumns` | False | Dynamic | This object contains the column names and corresponding values to insert. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*. If the table has columns with default or autogenerated values, you can leave this field empty. |
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | The inserted row, including the names and values of any autogenerated, default, and null value columns. |
+
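Following the pattern of the other examples in this section, the following sketch shows sample **Insert row** parameters in the action's underlying JSON definition. The table and column names are placeholders in the same style as the other examples:

```json
"parameters": {
    "tableName": "tableName1",
    "setColumns": {
        "columnName1": "columnValue1",
        "columnName2": "columnValue2"
    }
},
```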
+<a name="update-rows"></a>
+
+### Update rows
+
+Operation ID: `updateRows`
+
+Updates the specified columns in all the table rows that match the specified **Where condition** value using the **Set columns** column names and values.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Table name** | `tableName` | True | String | The name for the table |
+| **Where condition** | `columnValuesForWhereCondition` | True | Dynamic | This object contains the column names and corresponding values for selecting the rows to update. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*, which also lets you specify single or specific rows to update. |
+| **Set columns** | `setColumns` | True | Dynamic | This object contains the column names and the corresponding values to use for the update. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*. |
+
+#### Returns
+
+| Name | Type |
+|||
+| **Result** | An array object that returns all the columns for the updated rows. |
+| **Result Item** | An array object that returns one column at a time from the updated rows. A **For each** loop is automatically added to your workflow to iterate through the array. |
+
+*Example*
+
+The following example shows sample parameter values for the **Update rows** action:
+
+**Sample values**
+
+| Parameter | JSON name | Sample value |
+|--|--|--|
+| **Table name** | `tableName` | tableName1 |
+| **Where condition** | `columnValuesForWhereCondition` | Key-value pairs: <br><br>- <*columnName1*>, <*columnValue1*> <br><br>- <*columnName2*>, <*columnValue2*> |
+| **Set columns** | `setColumns` | Key-value pairs: <br><br>- <*columnName3*>, <*columnValue3*> |
+
+**Parameters in the action's underlying JSON definition**
-The SQL Server built-in connector includes app settings on your Standard logic app resource that control various thresholds for performance, throughput, capacity, and so on. For example, you can change the default timeout value for connector operations. For more information, review [Reference for app settings - local.settings.json](../logic-apps/edit-app-settings-host-settings.md#reference-local-settings-json).
+```json
+"parameters": {
+ "tableName": "tableName1",
+ "columnValuesForWhereCondition": {
+ "columnName1": "columnValue1",
+ "columnName2": "columnValue2"
+    },
+    "setColumns": {
+        "columnName3": "columnValue3"
+    }
+},
+```
## Troubleshoot problems
Connection problems can commonly happen, so to troubleshoot and resolve these ki
## Next steps
-* Learn about other [managed connectors for Azure Logic Apps](../connectors/apis-list.md)
+* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](built-in.md)
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/19/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Cosmos DB automatically takes backups of your data at regular intervals. For det
| Resource | Limit | | | |
+| Maximum number of databases per account | 100 |
| Maximum number of containers per account | 100 | | Maximum number of regions | 1 (Any Azure region) |
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/19/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/18/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/19/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/cli-samples.md
Previously updated : 02/21/2022 Last updated : 08/19/2022
These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core)
| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.| | [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.| | [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
||| ## Next steps
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
If your data store is located inside an on-premises network, an Azure virtual ne
If your data store is a managed cloud data service, you can use the Azure Integration Runtime. If the access is restricted to IPs that are approved in the firewall rules, you can add [Azure Integration Runtime IPs](azure-integration-runtime-ip-addresses.md) to the allowed list.
-The Snowflake account that is used for Source or Sink should have the necessary `USAGE` access on the Database and Read / Write access on Schema and the Tables/Views under it. In addition, it should also have `CREATE STAGE` on the schema to be able to create the External stage with SAS URI.
+The Snowflake account that is used for Source or Sink should have the necessary `USAGE` access on the database and read/write access on the schema and the tables/views under it. In addition, it should also have `CREATE STAGE` on the schema to be able to create the external stage with SAS URI.
The following Account properties values must be set
data-factory Connector Troubleshoot Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-blob-storage.md
Previously updated : 10/01/2021 Last updated : 08/12/2022
This article provides suggestions to troubleshoot common problems with the Azure
- **Cause**: Multiple concurrent writing requests occur, which causes conflicts on file content.
+## Error code: AzureBlobFailedToCreateContainer
+
+- **Message**: `Unable to create Azure Blob container. Endpoint: '%endpoint;', Container Name: '%containerName;'.`
+
+- **Cause**: This error happens when copying data with an Azure Blob Storage account over public network access.
+
+- **Recommendation**: For more information about connection errors in the public endpoint, see [Connection error in public endpoint](security-and-access-control-troubleshoot-guide.md#connection-error-in-public-endpoint).
+ ## Next steps For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-data-explorer.md
+
+ Title: Troubleshoot the Azure Data Explorer connector
+
+description: Learn how to troubleshoot issues with the Azure Data Explorer connector in Azure Data Factory and Azure Synapse Analytics.
++++ Last updated : 08/12/2022++++
+# Troubleshoot the Azure Data Explorer connector in Azure Data Factory and Azure Synapse
++
+This article provides suggestions to troubleshoot common problems with the Azure Data Explorer connector in Azure Data Factory and Azure Synapse.
+
+## Error code: KustoMappingReferenceHasWrongKind
+
+- **Message**: `Mapping reference should be of kind 'Csv'. Mapping reference: '%reference;'. Kind '%kind;'.`
+
+- **Cause**: The ingestion mapping reference isn't of the CSV type.
+
+- **Recommendation**: Create a CSV ingestion mapping reference.
+
+## Error code: KustoWriteFailed
+
+- **Message**: `Write to Kusto failed with following error: '%message;'.`
+
+- **Cause**: A wrong configuration, or transient errors when the sink reads data from the source.
+
+- **Recommendation**: For transient failures, set retries for the activity. For permanent failures, check your configuration and contact support.
+
+## Next steps
+
+For more troubleshooting help, try these resources:
+
+- [Connector troubleshooting guide](connector-troubleshoot-guide.md)
+- [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
+- [Data Factory feature requests](/answers/topics/azure-data-factory.html)
+- [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
+- [Microsoft Q&A page](/answers/topics/azure-data-factory.html)
+- [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
+- [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Connector Troubleshoot Azure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-data-lake.md
Previously updated : 10/13/2021 Last updated : 08/10/2022
This article provides suggestions to troubleshoot common problems with the Azure
| If Azure Data Lake Storage Gen2 throws error indicating some operation failed.| Check the detailed error message thrown by Azure Data Lake Storage Gen2. If the error is a transient failure, retry the operation. For further help, contact Azure Storage support, and provide the request ID in error message. | | If the error message contains the string "Forbidden", the service principal or managed identity you use might not have sufficient permission to access Azure Data Lake Storage Gen2. | To troubleshoot this error, see [Copy and transform data in Azure Data Lake Storage Gen2](./connector-azure-data-lake-storage.md#service-principal-authentication). | | If the error message contains the string "InternalServerError", the error is returned by Azure Data Lake Storage Gen2. | The error might be caused by a transient failure. If so, retry the operation. If the issue persists, contact Azure Storage support and provide the request ID from the error message. |
+ | If the error message is `Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host`, your integration runtime has a network issue in connecting to Azure Data Lake Storage Gen2. | In the firewall rule setting of Azure Data Lake Storage Gen2, make sure Azure Data Factory IP addresses are in the allowed list. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md). |
+ | If the error message is `This endpoint does not support BlobStorageEvents or SoftDelete`, you are using an Azure Data Lake Storage Gen2 linked service to connect to an Azure Blob Storage account that enables Blob storage events or soft delete. | Try the following options:<br>1. If you still want to use an Azure Data Lake Storage Gen2 linked service, upgrade your Azure Blob Storage to Azure Data Lake Storage Gen2. For more information, see [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](../storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md).<br>2. Switch your linked service to Azure Blob Storage.<br>3. Disable Blob storage events or soft delete in your Azure Blob Storage account. |
### Request to Azure Data Lake Storage Gen2 account caused a timeout error
data-factory Connector Troubleshoot Ftp Sftp Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-ftp-sftp-http.md
Previously updated : 07/29/2022 Last updated : 08/12/2022
This article provides suggestions to troubleshoot common problems with the FTP,
- **Recommendation**: Check the HTTP status code in the error message, and fix the remote server issue.
+### Error code: HttpSourceUnsupportedStatusCode
+
+- **Message**: `Http source doesn't support HTTP Status Code '%code;'.`
+
+- **Cause**: This error happens when Azure Data Factory requests the HTTP source but gets an unexpected status code.
+
+- **Recommendation**: For more information about HTTP status codes, see this [document](/troubleshoot/developer/webapps/iis/www-administration-management/http-status-code).
+ ## Next steps For more troubleshooting help, try these resources:
data-factory Data Access Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-access-strategies.md
This should work in many scenarios, and we do understand that a unique Static IP
For more information about supported network security mechanisms on data stores in Azure Integration Runtime and Self-hosted Integration Runtime, see below two tables. * **Azure Integration Runtime**
- | Data Stores | Supported Network Security Mechanism on Data Stores | Private Link | Trusted Service | Static IP range | Service Tags | Allow Azure Services |
+ | Data Stores | Supported Network Security Mechanism on Data Stores | Private Link | Trusted Service | Static IP range | Service Tags | Allow Azure Services |
||-||--|--|-|--| | Azure PaaS Data stores | Azure Cosmos DB | Yes | - | Yes | - | Yes | | | Azure Data Explorer | - | - | Yes* | Yes* | - |
For more information about supported network security mechanisms on data stores
| | Azure SQL DB, Azure Synapse Analytics), SQL Ml | Yes (only Azure SQL DB/DW) | - | Yes | - | Yes | | | Azure Key Vault (for fetching secrets/ connection string) | yes | Yes | Yes | - | - | | Other PaaS/ SaaS Data stores | AWS S3, SalesForce, Google Cloud Storage, etc. | - | - | Yes | - | - |
+ | | Snowflake | Yes | - | Yes | - | - |
| Azure IaaS | SQL Server, Oracle, etc. | - | - | Yes | Yes | - |
- | On-premises IaaS | SQL Server, Oracle, etc. | - | - | Yes | - | - |
-
+ | On-premises IaaS | SQL Server, Oracle, etc. | - | - | Yes | - | - |
+
+
**Applicable only when Azure Data Explorer is virtual network injected, and IP range can be applied on NSG/ Firewall.* * **Self-hosted Integration Runtime (in VNet/on-premises)**
data-factory Scenario Ssis Migration Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-rules.md
Connection that contains host name may fail, typically because the Azure virtual
You can use below options for SSIS Integration runtime to access these resources: -- [Join Azure-SSIS IR to a virtual network that connects to on-premise sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network)
+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network)
- Migrate your data to Azure and use Azure resource endpoint. - Use Managed Identity authentication if moving to Azure resources.-- [Use self-hosted IR to connect on-premise sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
+- [Use self-hosted IR to connect on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
### [1002]Connection with absolute or UNC path might not be accessible
You can use below options for SSIS Integration runtime to access these resources
- [Change to %TEMP%](/azure/data-factory/ssis-azure-files-file-shares) - [Migrate your files to Azure Files](/azure/data-factory/ssis-azure-files-file-shares)-- [Join Azure-SSIS IR to a virtual network that connects to on-premise sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).-- [Use self-hosted IR to connect on-premise sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).
+- [Use self-hosted IR to connect on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
### [1003]Connection with Windows authentication may fail
Recommendation
You can use below options for SSIS Integration runtime to launch your executable(s): - [Migrate your executable(s) to Azure Files](/azure/data-factory/ssis-azure-files-file-shares).-- [Join Azure-SSIS IR to a virtual network](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network) that connects to on-premise sources.
+- [Join Azure-SSIS IR to a virtual network](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network) that connects to on-premises sources.
- If necessary, [customize setup script to install your executable(s)](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) in advance when starting IR. ### [4001]Absolute or UNC configuration path is discovered in package configuration
Recommendation
You can use below options for SSIS Integration runtime to access these resources: - [Migrate your files to Azure Files](/azure/data-factory/ssis-azure-files-file-shares)-- [Join Azure-SSIS IR to a virtual network that connects to on-premise sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).-- [Use self-hosted IR to connect on-premise sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).
+- [Use self-hosted IR to connect on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
### [4002]Registry entry is discovered in package configuration
data-factory Security And Access Control Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/security-and-access-control-troubleshoot-guide.md
Previously updated : 02/07/2022 Last updated : 08/15/2022
For example: The Azure Blob Storage sink was using Azure IR (public, not Managed
` <LogProperties><Text>Invoke callback url with req:
-"ErrorCode=UserErrorFailedToCreateAzureBlobContainer,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Unable to create Azure Blob container. Endpoint: XXXXXXX/, Container Name: test.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=Unable to connect to the remote server,Source=Microsoft.WindowsAzure.Storage,''Type=System.Net.WebException,Message=Unable to connect to the remote server,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond public ip:443,Source=System,'","Details":null}}</Text></LogProperties>.
+"ErrorCode=AzureBlobFailedToCreateContainer,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Unable to create Azure Blob container. Endpoint: XXXXXXX/, Container Name: test.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=Unable to connect to the remote server,Source=Microsoft.WindowsAzure.Storage,''Type=System.Net.WebException,Message=Unable to connect to the remote server,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond public ip:443,Source=System,'","Details":null}}</Text></LogProperties>.
` #### Cause
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
This quickstart walks you through the steps to create an Azure DNS Private Resolver (Public Preview) using the Azure portal. If you prefer, you can complete this quickstart using [Azure PowerShell](private-dns-getstarted-powershell.md).
-Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM based DNS servers. You no longer need to provision IaaS based solutions on your virtual networks to resolve names registered on Azure private DNS zones. You can configure conditional forwarding of domains back to on-premises, multi-cloud and public DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
+Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM based DNS servers. You no longer need to provision IaaS based solutions on your virtual networks to resolve names registered on Azure private DNS zones. You can configure conditional forwarding of domains back to on-premises, multicloud and public DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
## Prerequisites
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 06/02/2022 Last updated : 08/18/2022
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 08/02/2022 Last updated : 08/17/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
The DNS query process when using an Azure DNS Private Resolver is summarized bel
The architecture for Azure DNS Private Resolver is summarized in the following figure. DNS resolution between Azure virtual networks and on-premises networks requires [Azure ExpressRoute](../expressroute/expressroute-introduction.md) or a [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-[ ![Azure DNS Private Resolver architecture](./media/dns-resolver-overview/resolver-architecture.png) ](./media/dns-resolver-overview/resolver-architecture.png#lightbox)
+[ ![Azure DNS Private Resolver architecture](./media/dns-resolver-overview/resolver-architecture.png) ](./media/dns-resolver-overview/resolver-architecture-highres.png#lightbox)
Figure 1: Azure DNS Private Resolver architecture
Azure DNS Private Resolver is available in the following regions:
## DNS resolver endpoints
+For more information about endpoints and rulesets, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+ ### Inbound endpoints An inbound endpoint enables name resolution from on-premises or other private locations via an IP address that is part of your private virtual network address space. To resolve your Azure private DNS zone from on-premises, enter the IP address of the inbound endpoint into your on-premises DNS conditional forwarder. The on-premises DNS conditional forwarder must have a network connection to the virtual network.
Virtual network links enable name resolution for virtual networks that are linke
## DNS forwarding rulesets
-A DNS forwarding ruleset is a group of DNS forwarding rules (up to 1,000) that can be applied to one or more outbound endpoints, or linked to one or more virtual networks. This is a 1:N relationship. Rulesets are associated with a specific outbound endpoint.
+A DNS forwarding ruleset is a group of DNS forwarding rules (up to 1,000) that can be applied to one or more outbound endpoints, or linked to one or more virtual networks. This is a 1:N relationship. Rulesets are associated with a specific outbound endpoint. For more information, see [DNS forwarding rulesets](private-resolver-endpoints-rulesets.md#dns-forwarding-rulesets).
## DNS forwarding rules
The following restrictions hold with respect to virtual networks:
Subnets used for DNS resolver have the following limitations: - A subnet must be a minimum of /28 address space or a maximum of /24 address space. - A subnet can't be shared between multiple DNS resolver endpoints. A single subnet can only be used by a single DNS resolver endpoint.-- All IP configurations for a DNS resolver inbound endpoint must reference the same subnet. Spanning multiple subnets in the IP configuration for a single DNS resolver inbound endpoint is not allowed.
+- All IP configurations for a DNS resolver inbound endpoint must reference the same subnet. Spanning multiple subnets in the IP configuration for a single DNS resolver inbound endpoint isn't allowed.
- The subnet used for a DNS resolver inbound endpoint must be within the virtual network referenced by the parent DNS resolver. ### Outbound endpoint restrictions
Outbound endpoints have the following limitations:
- IPv6 enabled subnets aren't supported in Public Preview. - ## Next steps * Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md).
+* Understand how to [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md) using the Azure DNS Private Resolver.
+* Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md)
+* Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers.
* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure. * [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
dns Private Resolver Endpoints Rulesets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md
+
+ Title: Azure DNS Private Resolver endpoints and rulesets
+description: In this article, understand the Azure DNS Private Resolver endpoints and rulesets
++++ Last updated : 08/16/2022+
+#Customer intent: As an administrator, I want to understand components of the Azure DNS Private Resolver.
++
+# Azure DNS Private Resolver endpoints and rulesets
+
+In this article, you'll learn about components of the [Azure DNS Private Resolver](dns-private-resolver-overview.md). Inbound endpoints, outbound endpoints, and DNS forwarding rulesets are discussed. Properties and settings of these components are described, and examples are provided for how to use them.
+
+> [!IMPORTANT]
+> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Inbound endpoints
+
+As the name suggests, inbound endpoints handle DNS traffic coming into Azure. Inbound endpoints provide an IP address to forward DNS queries from on-premises and other locations outside your virtual network. DNS queries sent to the inbound endpoint are resolved using Azure DNS. Private DNS zones that are linked to the virtual network where the inbound endpoint is provisioned are resolved by the inbound endpoint.
+
+The IP address associated with an inbound endpoint is always part of the private virtual network address space where the private resolver is deployed. No other resources can exist in the same subnet with the inbound endpoint. The following screenshot shows an inbound endpoint with an IP address of 10.10.0.4 inside the subnet `snet-E-inbound` provisioned within a virtual network with address space of 10.10.0.0/16.
+
+![View inbound endpoints](./media/private-resolver-endpoints-rulesets/east-inbound-endpoint.png)
+
+## Outbound endpoints
+
+Outbound endpoints egress from Azure and can be linked to [DNS Forwarding Rulesets](#dns-forwarding-rulesets).
+
+Outbound endpoints are also part of the private virtual network address space where the private resolver is deployed. An outbound endpoint is associated with a subnet, but isn't provisioned with an IP address like the inbound endpoint. No other resources can exist in the same subnet with the outbound endpoint. The following screenshot shows an outbound endpoint inside the subnet `snet-E-outbound`.
+
+![View outbound endpoints](./media/private-resolver-endpoints-rulesets/east-outbound-endpoint.png)
+
+## DNS forwarding rulesets
+
+DNS forwarding rulesets enable you to specify one or more custom DNS servers to answer queries for specific DNS namespaces. The individual [rules](#rules) in a ruleset determine how these DNS names are resolved. Rulesets can also be linked to one or more virtual networks, enabling resources in the vnets to use the forwarding rules that you configure.
+
+Rulesets have the following associations:
+- A single ruleset can be associated with multiple outbound endpoints.
+- A ruleset can have up to 1000 DNS forwarding rules.
+- A ruleset can be linked to any number of virtual networks in the same region
+
+A ruleset can't be linked to a virtual network in another region.
+
+When you link a ruleset to a virtual network, resources within that virtual network will use the DNS forwarding rules enabled in the ruleset. The linked virtual network must peer with the virtual network where the outbound endpoint exists. This configuration is typically used in a hub and spoke design, with spoke vnets peered to a hub vnet that has one or more private resolver endpoints. In this hub and spoke scenario, the spoke vnet does not need to be linked to the private DNS zone in order to resolve resource records in the zone. In this case, the forwarding ruleset rule for the private zone sends queries to the hub vnet's inbound endpoint. For example: **azure.contoso.com** to **10.10.0.4**.
+
+The following screenshot shows a DNS forwarding ruleset linked to two virtual networks: a hub vnet: **myeastvnet**, and a spoke vnet: **myeastspoke**.
+
+![View ruleset links](./media/private-resolver-endpoints-rulesets/ruleset-links.png)
+
+Virtual network links for DNS forwarding rulesets enable resources in vnets to use forwarding rules when resolving DNS names. Vnets that are linked from a ruleset, but don't have their own private resolver, must have a peering connection to the vnet that contains the private resolver. The vnet with the private resolver must also be linked from any private DNS zones for which there are ruleset rules.
+
+For example, resources in the vnet `myeastspoke` can resolve records in the private DNS zone `azure.contoso.com` if:
+- The vnet `myeastspoke` peers with `myeastvnet`
+- The ruleset provisioned in `myeastvnet` is linked to `myeastspoke` and `myeastvnet`
+- A ruleset rule is configured and enabled in the linked ruleset to resolve `azure.contoso.com` using the inbound endpoint in `myeastvnet`
+
+### Rules
+
+DNS forwarding rules (ruleset rules) have the following properties:
+
+| Property | Description |
+| | |
+| Rule name | The name of your rule. The name must begin with a letter, and can contain only letters, numbers, underscores, and dashes. |
+| Domain name | The dot-terminated DNS namespace where your rule applies. The namespace must have either zero labels (for wildcard) or between 2 and 34 labels. For example, `contoso.com.` has two labels. |
+| Destination IP:Port | The forwarding destination. One or more IP addresses and ports of DNS servers that will be used to resolve DNS queries in the specified namespace. |
+| Rule state | The rule state: Enabled or disabled. If a rule is disabled, it's ignored. |
+
+If multiple rules are matched, the longest prefix match is used.
+
+For example, if you have the following rules:
+
+| Rule name | Domain name | Destination IP:Port | Rule state |
+| | | | |
+| Contoso | contoso.com. | 10.100.0.2:53 | Enabled |
+| AzurePrivate | azure.contoso.com. | 10.10.0.4:53 | Enabled |
+| Wildcard | . | 10.100.0.2:53 | Enabled |
+
+A query for `secure.store.azure.contoso.com` will match the **AzurePrivate** rule for `azure.contoso.com` and also the **Contoso** rule for `contoso.com`, but the **AzurePrivate** rule takes precedence because the prefix `azure.contoso` is longer than `contoso`.
+
+## Next steps
+
+* Review components, benefits, and requirements for [Azure DNS Private Resolver](dns-private-resolver-overview.md).
+* Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md).
+* Understand how to [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md) using the Azure DNS Private Resolver.
+* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md)
+* Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers.
+* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
dns Private Resolver Hybrid Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-hybrid-dns.md
+
+ Title: Resolve Azure and on-premises domains
+description: Configure Azure and on-premises DNS to resolve private DNS zones and on-premises domains
++++ Last updated : 08/18/2022+
+# Customer intent: As an administrator, I want to resolve on-premises domains in Azure and resolve Azure private zones on-premises.
++
+# Resolve Azure and on-premises domains
+
+This article provides guidance on how to configure hybrid DNS resolution by using an [Azure DNS Private Resolver](#azure-dns-private-resolver) with a [DNS forwarding ruleset](#dns-forwarding-ruleset).
+
+*Hybrid DNS resolution* is defined here as enabling Azure resources to resolve your on-premises domains, and on-premises DNS to resolve your Azure private DNS zones.
+
+> [!IMPORTANT]
+> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Azure DNS Private Resolver
+
+The [Azure DNS Private Resolver](dns-private-resolver-overview.md) is a service that can resolve on-premises DNS queries for Azure DNS private zones. Previously, it was necessary to [deploy a VM-based custom DNS resolver](/azure/hdinsight/connect-on-premises-network), or use non-Microsoft DNS, DHCP, and IPAM (DDI) solutions to perform this function.
+
+Benefits of using the Azure DNS Private Resolver service vs. VM-based resolvers or DDI solutions include:
+- Zero maintenance: Unlike VM or hardware based solutions, the private resolver doesn't require software updates, vulnerability scans, or security patching. The private resolver service is fully managed.
+- Cost reduction: Azure DNS Private Resolver is a multi-tenant service and can cost a fraction of the expense that is required to use and license multiple VM-based DNS resolvers.
+- High availability: The Azure DNS Private Resolver service has built-in high availability features. The service is [availability zone](/azure/availability-zones/az-overview) aware, thus ensuring that high availability and redundancy of your DNS solution can be accomplished with much less effort. For more information on how to configure DNS failover using the private resolver service, see [Tutorial: Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md).
+- DevOps friendly: Traditional DNS solutions are hard to integrate with DevOps workflows as these often require manual configuration for every DNS change. Azure DNS private resolver provides a fully functional ARM interface that can be easily integrated with DevOps workflows.
+
+## DNS forwarding ruleset
+
+A DNS forwarding ruleset is a group of rules that specify one or more custom DNS servers to answer queries for specific DNS namespaces. For more information, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+
+## Procedures
+
+The following procedures in this article are used to enable and test hybrid DNS:
+- [Create an Azure DNS private zone](#create-an-azure-dns-private-zone)
+- [Create an Azure DNS Private Resolver](#create-an-azure-dns-private-resolver)
+- [Configure an Azure DNS forwarding ruleset](#configure-an-azure-dns-forwarding-ruleset)
+- [Configure on-premises DNS conditional forwarders](#configure-on-premises-dns-conditional-forwarders)
+- [Demonstrate hybrid DNS](#demonstrate-hybrid-dns)
+
+## Create an Azure DNS private zone
+
+Create a private zone with at least one resource record to use for testing. The following quickstarts are available to help you create a private zone:
+- [Create a private zone - portal](private-dns-getstarted-portal.md)
+- [Create a private zone - PowerShell](private-dns-getstarted-powershell.md)
+- [Create a private zone - CLI](private-dns-getstarted-cli.md)
+
+In this article, the private zone **azure.contoso.com** and the resource record **test** are used. Autoregistration isn't required for the current demonstration.
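+
+If you prefer to script this step, the following PowerShell sketch creates the same zone and test record. It assumes the Az.PrivateDns module; the resource group name and the record's IP address are placeholders:
+
+```PowerShell
+# Create the private zone used in this article.
+New-AzPrivateDnsZone -ResourceGroupName myresourcegroup -Name azure.contoso.com
+
+# Add the "test" A record with a placeholder IP address.
+New-AzPrivateDnsRecordSet -ResourceGroupName myresourcegroup -ZoneName azure.contoso.com `
+    -Name test -RecordType A -Ttl 3600 `
+    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -Ipv4Address "10.10.1.5")
+```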
+
+[ ![View resource records](./media/private-resolver-hybrid-dns/private-zone-records-small.png) ](./media/private-resolver-hybrid-dns/private-zone-records.png#lightbox)
+
+**Requirement**: You must create a virtual network link in the zone to the virtual network where you'll deploy your Azure DNS Private Resolver. In the example shown below, the private zone is linked to two vnets: **myeastvnet** and **mywestvnet**. At least one link is required.
+
+[ ![View zone links](./media/private-resolver-hybrid-dns/private-zone-links-small.png) ](./media/private-resolver-hybrid-dns/private-zone-links.png#lightbox)
+
+## Create an Azure DNS Private Resolver
+
+The following quickstarts are available to help you create a private resolver. These quickstarts walk you through creating a resource group, a virtual network, and Azure DNS Private Resolver. The steps to configure an inbound endpoint, outbound endpoint, and DNS forwarding ruleset are provided:
+- [Create a private resolver - portal](dns-private-resolver-get-started-portal.md)
+- [Create a private resolver - PowerShell](dns-private-resolver-get-started-powershell.md)
+
+ When you're finished, write down the IP address of the inbound endpoint for the Azure DNS Private Resolver, as shown below. In this case, the IP address is **10.10.0.4**. This IP address will be used later to configure on-premises DNS conditional forwarders.
+
+[ ![View endpoint IP address](./media/private-resolver-hybrid-dns/inbound-endpoint-ip-small.png) ](./media/private-resolver-hybrid-dns/inbound-endpoint-ip.png#lightbox)
+
+## Configure an Azure DNS forwarding ruleset
+
+Create a forwarding ruleset in the same region as your private resolver. The following example shows two rulesets. The **East US** region ruleset is used for the hybrid DNS demonstration.
+
+[ ![View ruleset region](./media/private-resolver-hybrid-dns/forwarding-ruleset-region-small.png) ](./media/private-resolver-hybrid-dns/forwarding-ruleset-region.png#lightbox)
+
+**Requirement**: You must create a virtual network link to the vnet where your private resolver is deployed. In the following example, two virtual network links are present. The link **myeastvnet-link** is created to a hub vnet where the private resolver is provisioned. There's also a virtual network link **myeastspoke-link** that provides hybrid DNS resolution in a spoke vnet that doesn't have its own private resolver. The spoke network is able to use the private resolver because it peers with the hub network. The spoke vnet link isn't required for the current demonstration.
+
+[ ![View ruleset links](./media/private-resolver-hybrid-dns/ruleset-links-small.png) ](./media/private-resolver-hybrid-dns/ruleset-links.png#lightbox)
+
+Next, create a rule in your ruleset for your on-premises domain. In this example, we use **contoso.com**. Set the destination IP address for your rule to be the IP address of your on-premises DNS server. In this example, the on-premises DNS server is at **10.100.0.2**. Verify that the rule is **Enabled**.
+
+[ ![View rules](./media/private-resolver-hybrid-dns/ruleset-rules-small.png) ](./media/private-resolver-hybrid-dns/ruleset-rules.png#lightbox)
+
+> [!NOTE]
+> Don't change the DNS settings for your virtual network to use the inbound endpoint IP address. Leave the default DNS settings.
+
+## Configure on-premises DNS conditional forwarders
+
+The procedure to configure on-premises DNS depends on the type of DNS server you're using. In the following example, a Windows DNS server at **10.100.0.2** is configured with a conditional forwarder for the private DNS zone **azure.contoso.com**. The conditional forwarder is set to forward queries to **10.10.0.4**, which is the inbound endpoint IP address for your Azure DNS Private Resolver. There's another IP address also configured here to enable DNS failover. For more information about enabling failover, see [Tutorial: Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md). For the purposes of this demonstration, only the **10.10.0.4** inbound endpoint is required.
+
+![View on-premises forwarding](./media/private-resolver-hybrid-dns/on-premises-forwarders.png)
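+
+If your on-premises DNS server is a Windows Server running the DNS Server role, you can add the equivalent conditional forwarder with PowerShell. The following is a sketch for the example values in this article (zone `azure.contoso.com`, inbound endpoint `10.10.0.4`) and omits the optional failover address:
+
+```PowerShell
+# Forward queries for the Azure private zone to the private resolver inbound endpoint.
+Add-DnsServerConditionalForwarderZone -Name "azure.contoso.com" -MasterServers 10.10.0.4
+```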
+
+## Demonstrate hybrid DNS
+
+Using a VM located in the virtual network where the Azure DNS Private Resolver is provisioned, issue a DNS query for a resource record in your on-premises domain. In this example, a query is performed for the record **testdns.contoso.com**:
+
+![Verify Azure to on-premises](./media/private-resolver-hybrid-dns/azure-to-on-premises-lookup.png)
+
+The path for the query is: Azure DNS > inbound endpoint > outbound endpoint > ruleset rule for contoso.com > on-premises DNS (10.100.0.2). The DNS server at 10.100.0.2 is an on-premises DNS resolver, but it could also be an authoritative DNS server.
+
+Using an on-premises VM or device, issue a DNS query for a resource record in your Azure private DNS zone. In this example, a query is performed for the record **test.azure.contoso.com**:
+
+![Verify on-premises to Azure](./media/private-resolver-hybrid-dns/on-premises-to-azure-lookup.png)
+
+The path for this query is: client's default DNS resolver (10.100.0.2) > on-premises conditional forwarder rule for azure.contoso.com > inbound endpoint (10.10.0.4)
+
+## Next steps
+* Review components, benefits, and requirements for [Azure DNS Private Resolver](dns-private-resolver-overview.md).
+* Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md).
+* Understand how to [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md) using the Azure DNS Private Resolver.
+* Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md)
+* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
dns Tutorial Dns Private Resolver Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-dns-private-resolver-failover.md
+
+ Title: Tutorial - Set up DNS failover using private resolvers
+description: A tutorial on how to configure regional failover using the Azure DNS Private Resolver
++++ Last updated : 08/18/2022+
+#Customer intent: As an administrator, I want to avoid having a single point of failure for DNS resolution.
++
+# Tutorial: Set up DNS failover using private resolvers
+
+This article details how to eliminate a single point of failure in your on-premises DNS services by using two or more Azure DNS private resolvers deployed across different regions. DNS failover is enabled by assigning a local resolver as your primary DNS and the resolver in an adjacent region as secondary DNS.
+
+> [!IMPORTANT]
+> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Resolve Azure Private DNS zones using on-premises conditional forwarders and Azure DNS private resolvers.
+> * Enable on-premises DNS failover for your Azure Private DNS zones.
+
+The following diagram shows the failover scenario discussed in this article.
+
+[ ![Azure DNS Private Resolver architecture](./media/tutorial-dns-private-resolver-failover/private-resolver-failover.png) ](./media/tutorial-dns-private-resolver-failover/private-resolver-failover-highres.png#lightbox)
+
+In this scenario, you have connections from two on-premises locations to two Azure hub vnets.
+- In the east region, the primary path is to the east vnet hub. You have a secondary connection to the west hub. The west region is configured in reverse.
+- Due to an Internet connectivity issue, the connection to one vnet (west) is temporarily broken.
+- Service is maintained in both regions due to the redundant design.
+
+The DNS resolution path is:
+1) Redundant on-premises DNS [conditional forwarders](#on-premise-forwarding) send DNS queries to inbound endpoints.
+2) [Inbound endpoints](#inbound-endpoints) receive DNS queries from on-premises.
+3) Outbound endpoints and DNS forwarding rulesets process DNS queries and return replies to your on-premises resources.
+
+Outbound endpoints and DNS forwarding rulesets aren't needed for the failover scenario, but are included here for completeness. Rulesets can be used to resolve on-premises domains from Azure. For more information, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md) and [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Two [Azure virtual networks](../virtual-network/quick-create-portal.md) in two regions
+- A [VPN](../vpn-gateway/tutorial-site-to-site-portal.md) or [ExpressRoute](../expressroute/expressroute-howto-circuit-portal-resource-manager.md) link from on-premises to each virtual network
+- An [Azure DNS Private Resolver](dns-private-resolver-get-started-portal.md) in each virtual network
+- An Azure [private DNS zone](private-dns-getstarted-portal.md) that is linked to each virtual network
+- An on-premises DNS server
+
+> [!NOTE]
+> In this tutorial, `azure.contoso.com` is an Azure private DNS zone. Replace `azure.contoso.com` with your private DNS zone name.
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+<a name="inbound-endpoints"></a>
+
+## Determine inbound endpoint IP addresses
+
+Write down the IP addresses assigned to the inbound endpoints of your DNS private resolvers. The IP addresses will be used to configure on-premises DNS forwarders.
+
+In this example, there are two virtual networks in two regions:
+- **myeastvnet** is in the East US region, assigned the address space 10.10.0.0/16
+- **mywestvnet** is in the West Central US region, assigned the address space 10.20.0.0/16
+
+1. Search for **DNS Private Resolvers** and select your private resolver from the first region. For example: **myeastresolver**.
+2. Under **Settings**, select **Inbound endpoints** and write down the **IP address** setting. For example: **10.10.0.4**.
+
+ ![View inbound endpoint](./media/tutorial-dns-private-resolver-failover/east-inbound-endpoint.png)
+
+3. Return to the list of **DNS Private Resolvers** and select a resolver from a different region. For example: **mywestresolver**.
+4. Under **Settings**, select **Inbound endpoints** and write down the **IP address** setting of this resolver. For example: **10.20.0.4**.
+
+## Verify private zone links
+
+To resolve DNS records in an Azure DNS private zone, the zone must be linked to the virtual network. In this example, the zone `azure.contoso.com` is linked to **myeastvnet** and **mywestvnet**. Links to other vnets can also be present.
+
+1. Search for **Private DNS zones** and select your private zone. For example: **azure.contoso.com**.
+2. Under **Settings**, select **Virtual network links** and verify that the vnets you used for inbound endpoints in the previous procedure are also listed under Virtual network. For example: **myeastvnet** and **mywestvnet**.
+
+ ![View vnet links](./media/tutorial-dns-private-resolver-failover/vnet-links.png)
+
+3. If one or more vnets aren't yet linked, you can add them here by selecting **Add**, providing a **Link name**, choosing your **Subscription**, and then choosing the **Virtual network**.
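+
+   As an alternative to checking in the portal, the following PowerShell sketch lists the zone's virtual network links. It assumes the Az.PrivateDns module; the resource group name is a placeholder:
+
+   ```PowerShell
+   # List the virtual network links for the private zone.
+   Get-AzPrivateDnsVirtualNetworkLink -ResourceGroupName myresourcegroup -ZoneName azure.contoso.com
+   ```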
+
+> [!TIP]
+> You can also use peering to resolve records in private DNS zones. For more information, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+
+## Verify Azure DNS resolution
+
+Check that DNS settings for your virtual networks are set to Default (Azure-provided).
+
+1. Search for **Virtual networks** and select the first Vnet. For example: **myeastvnet**.
+2. Under **Settings**, select **DNS servers** and verify that **Default (Azure-provided)** is chosen.
+3. Select the next Vnet (ex: **mywestvnet**) and verify that **Default (Azure-provided)** is chosen.
+
+ > [!NOTE]
+ > Custom DNS settings can also be made to work, but this is not in scope for the current scenario.
+
+4. Search for **Private DNS zones** and select your private zone name. For example: **azure.contoso.com**.
+5. Create a test record in the zone by selecting **+ Record set** and adding a new A record. For example: **test**.
+
+ ![Create a test A record](./media/tutorial-dns-private-resolver-failover/test-record.png)
+
+6. Open a command prompt using an on-premises client and use nslookup to look up your test record using the first private resolver IP address that you wrote down (ex: 10.10.0.4). See the following example:
+
+ ```cmd
+ nslookup test.azure.contoso.com 10.10.0.4
+ ```
+ The query should return the IP address that you assigned to your test record.
+ ![Results of nslookup - east](./media/tutorial-dns-private-resolver-failover/nslookup-results-e.png)
+
+7. Repeat this nslookup query using the IP address that you wrote down for the second private resolver (ex: 10.20.0.4).
+
+ ![Results of nslookup - west](./media/tutorial-dns-private-resolver-failover/nslookup-results-w.png)
+
+ > [!NOTE]
+ > If DNS resolution for the private zone is not working, check that your on-premises links to the Azure Vnets are connected.
+
+<a name="on-premise-forwarding"></a>
+
+## Configure on-premises DNS forwarding
+
+Now that DNS resolution is working from on-premises to Azure using two different Azure DNS Private Resolvers, we can configure forwarding to use both of these addresses. This will enable redundancy in case one of the connections to Azure is interrupted. The procedure to configure forwarders will depend on the type of DNS server that you're using. The following example uses a Windows Server that is running the DNS Server role service and has an IP address of 10.100.0.2.
+
+ > [!NOTE]
+ > The DNS server that you use to configure forwarding should be a server that client devices on your network will use for DNS resolution. If the server you're configuring isn't the default, you'll need to query its IP address directly (ex: nslookup test.azure.contoso.com 10.100.0.2) after forwarding is configured.
+
+1. Open an elevated Windows PowerShell prompt and issue the following command. Replace **azure.contoso.com** with the name of your private zone, and replace the IP addresses below with the IP addresses of your private resolvers.
+
+ ```PowerShell
+ Add-DnsServerConditionalForwarderZone -Name "azure.contoso.com" -MasterServers 10.20.0.4,10.10.0.4
+ ```
+2. If preferred, you can also use the DNS console to enter conditional forwarders. See the following example:
+
+ ![View DNS forwarders](./media/tutorial-dns-private-resolver-failover/forwarders.png)
+
+3. Now that forwarding is in place, issue the same DNS query that you used in the previous procedure. However, this time don't enter a destination IP address for the query. The query will use the client's default DNS server.
+
+ ![Results of nslookup](./media/tutorial-dns-private-resolver-failover/nslookup-results.png)
+
+## Demonstrate resiliency (optional)
+
+You can now demonstrate that DNS resolution works when one of the connections is broken.
+
+1. Interrupt connectivity from on-premises to one of your Vnets by disabling or disconnecting the interface. Verify that the connection doesn't automatically reconnect on-demand.
+2. Run the nslookup query using the private resolver from the Vnet that is no longer connected and verify that it fails (see below).
+3. Run the nslookup query using your default DNS server (configured with forwarders) and verify it still works due to the redundancy you enabled.
+
+ ![Results of nslookup - failover](./media/tutorial-dns-private-resolver-failover/nslookup-results-failover.png)
+
+## Next steps
+
+* Review components, benefits, and requirements for [Azure DNS Private Resolver](dns-private-resolver-overview.md).
+* Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md).
+* Understand how to [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md) using the Azure DNS Private Resolver.
+* Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+* Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers.
+* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
+
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
You can identify what category a given FQDN or URL is by using the **Web Categor
:::image type="content" source="media/premium-features/firewall-category-search.png" alt-text="Firewall category search dialog":::
+> [!IMPORTANT]
+> To use the **Web Category Check** feature, the user must have the Microsoft.Network/azureWebCategories/getwebcategory/action permission at the **subscription** level, not the resource group level.
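+
+As an illustration, a custom role that grants just this action at the subscription scope might look like the following sketch; the role name, description, and subscription ID are placeholders:
+
+```json
+{
+  "Name": "Web Category Reader",
+  "IsCustom": true,
+  "Description": "Can check Azure Firewall web categories.",
+  "Actions": [
+    "Microsoft.Network/azureWebCategories/getwebcategory/action"
+  ],
+  "NotActions": [],
+  "AssignableScopes": [
+    "/subscriptions/00000000-0000-0000-0000-000000000000"
+  ]
+}
+```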
+ ### Category change Under the **Web Categories** tab in **Firewall Policy Settings**, you can request a categorization change if you:
hdinsight General Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/general-guidelines.md
HDInsight can't depend on on-premises domain controllers or custom domain contro
### Properties synced from Azure AD to Azure AD DS
-* Azure AD connect syncs from on-premise to Azure AD.
+* Azure AD connect syncs from on-premises to Azure AD.
* Azure AD DS syncs from Azure AD. Azure AD DS syncs objects from Azure AD periodically. The Azure AD DS blade on the Azure portal displays the sync status. During each stage of sync, unique properties may get into conflict and renamed. Pay attention to the property mapping from Azure AD to Azure AD DS.
hdinsight Hdinsight Troubleshoot Yarn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-yarn.md
These changes are visible immediately on the YARN Scheduler UI.
- [Connect to HDInsight (Apache Hadoop) by using SSH](./hdinsight-hadoop-linux-use-ssh-unix.md) - [Apache Hadoop YARN concepts and applications](https://hadoop.apache.org/docs/r2.7.4/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html#Concepts_and_Flow) +
+## How do I troubleshoot YARN common issues?
+
+### YARN UI isn't loading
+
+If your YARN UI isn't loading or is unreachable, and it returns "HTTP Error 502.3 - Bad Gateway," it's a strong indication that your ResourceManager service is unhealthy. To mitigate the issue, follow these steps:
+
+1. Go to **Ambari UI** > **YARN** > **SUMMARY** and check to see if only the active ResourceManager is in the **Started** state. If not, try to mitigate by restarting the unhealthy or stopped ResourceManager.
+2. If step 1 doesn't resolve the issue, SSH to the active ResourceManager head node and check the garbage collection status using `jstat -gcutil <ResourceManager pid> 1000 100`. If you see the **FGCT** increase significantly in just a few seconds, it indicates that ResourceManager is busy with *full GC* and is unable to process other requests.
+3. Go to **Ambari UI** > **YARN** > **CONFIGS** > **Advanced** and increase `ResourceManager java heap size`.
+4. Restart required services in Ambari UI.
+
+### Both resource managers are in standby
+
+1. Check the ResourceManager log to see if an error similar to the following exists.
+```
+Service RMActiveServices failed in state STARTED; cause: org.apache.hadoop.service.ServiceStateException: com.google.protobuf.InvalidProtocolBufferException: Could not obtain block: BP-452067264-10.0.0.16-1608006815288:blk_1074235266_494491 file=/yarn/node-labels/nodelabel.mirror
+```
+2. If the error exists, check whether some files are under-replicated or whether there are missing blocks in HDFS. You can run `hdfs fsck hdfs://mycluster/`
+
+3. Run `hdfs fsck hdfs://mycluster/ -delete` to forcefully clean up the HDFS and to get rid of the standby RM issue. Alternatively, run [PatchYarnNodeLabel](https://hdiconfigactions.blob.core.windows.net/hadoopcorepatchingscripts/PatchYarnNodeLabel.sh) on one of headnodes to patch the cluster.
+ ## Next steps
hdinsight Apache Spark Perf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-perf.md
Last updated 08/21/2020
-# Optimize Apache Spark jobs in HDInsight
+# Optimize Apache Spark applications in HDInsight
-This article provides an overview of strategies to optimize Apache Spark jobs on Azure HDInsight.
+This article provides an overview of strategies to optimize Apache Spark applications on Azure HDInsight.
## Overview
-The performance of your Apache Spark jobs depends on multiple factors. These performance factors include: how your data is stored, how the cluster is configured, and the operations that are used when processing the data.
+You might face the following common scenarios:
-Common challenges you might face include: memory constraints due to improperly sized executors, long-running operations, and tasks that result in cartesian operations.
+- The same Spark job runs slower than it did before in the same HDInsight cluster
+- A Spark job is slower in an HDInsight cluster than on-premises or with another third-party service provider
+- A Spark job is slower in one HDInsight cluster than in another
-There are also many optimizations that can help you overcome these challenges, such as caching, and allowing for data skew.
+The performance of your Apache Spark jobs depends on multiple factors. These performance factors include:
+
+- How your data is stored
+- How the cluster is configured
+- The operations that are used when processing the data
+- An unhealthy YARN service
+- Memory constraints due to improperly sized executors and `OutOfMemoryError`
+- Too many tasks or too few tasks
+- Data skew that causes a few heavy or slow tasks
+- Tasks that run slower on unhealthy nodes
+++
+## Step 1: Check if your YARN service is healthy
+
+1. Go to Ambari UI:
+- Check for ResourceManager or NodeManager alerts
+- Check ResourceManager and NodeManager status in **YARN** > **SUMMARY**: all NodeManagers should be in the Started state, and only the active ResourceManager should be in the Started state
+
+2. Check if the YARN UI is accessible through `https://YOURCLUSTERNAME.azurehdinsight.net/yarnui/hn/cluster`
+
+3. Check for any exceptions or errors in the ResourceManager log in `/var/log/hadoop-yarn/yarn/hadoop-yarn-resourcemanager-*.log`
+
+For more information, see [YARN common issues](../hdinsight-troubleshoot-yarn.md#how-do-i-troubleshoot-yarn-common-issues)
+
+## Step 2: Compare your new application resources with available YARN resources
+
+1. Go to **Ambari UI > YARN > SUMMARY**, check **CLUSTER MEMORY** in ServiceMetrics
+
+2. Check YARN queue metrics in detail:
+- Go to the YARN UI and check the YARN scheduler metrics through `https://YOURCLUSTERNAME.azurehdinsight.net/yarnui/hn/cluster/scheduler`
+- Alternatively, you can check the YARN scheduler metrics through the YARN REST API. For example, `curl -u "xxxx" -sS -G "https://YOURCLUSTERNAME.azurehdinsight.net/ws/v1/cluster/scheduler"`. For ESP clusters, use a domain admin user.
+
+3. Calculate total resources for your new application
+- All executor resources: memory is `spark.executor.instances * (spark.executor.memory + spark.yarn.executor.memoryOverhead)` and cores are `spark.executor.instances * spark.executor.cores`. For more information, see [Spark executor configuration](apache-spark-settings.md#configuring-spark-executors)
+- ApplicationMaster
+ - In cluster mode, use `spark.driver.memory` and `spark.driver.cores`
+ - In client mode, use `spark.yarn.am.memory+spark.yarn.am.memoryOverhead` and `spark.yarn.am.cores`
+
+> [!NOTE]
+> `yarn.scheduler.minimum-allocation-mb <= spark.executor.memory+spark.yarn.executor.memoryOverhead <= yarn.scheduler.maximum-allocation-mb`
+++
+4. Compare your new application's total resources with the YARN resources available in your specified queue. A calculation sketch follows this list.
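To make step 4 concrete, here's a minimal Python sketch (our illustration) that applies the formulas above; the configuration values are placeholders you would replace with your own Spark settings.

```python
# Placeholder values for the relevant Spark settings.
executor_instances = 10            # spark.executor.instances
executor_memory_mb = 4096          # spark.executor.memory
executor_memory_overhead_mb = 384  # spark.yarn.executor.memoryOverhead
executor_cores = 4                 # spark.executor.cores
driver_memory_mb = 4096            # spark.driver.memory (cluster mode)
driver_cores = 1                   # spark.driver.cores (cluster mode)

# Total resources requested by all executors plus the ApplicationMaster/driver.
total_memory_mb = executor_instances * (executor_memory_mb + executor_memory_overhead_mb) + driver_memory_mb
total_vcores = executor_instances * executor_cores + driver_cores

print(f"Requested: {total_memory_mb} MB memory, {total_vcores} vCores")
# Compare these totals with the memory and vCores available in your queue, for example
# from the /ws/v1/cluster/scheduler REST endpoint mentioned above.
```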
++
+## Step 3: Track your spark application
+
+1. [Monitor your running spark application through Spark UI](apache-spark-job-debugging.md#track-an-application-in-the-spark-ui)
+
+2. [Monitor your complete or incomplete spark application through Spark History Server UI](apache-spark-job-debugging.md#find-information-about-completed-jobs-using-the-spark-history-server)
+
+Identify the following symptoms through the Spark UI or the Spark History Server UI:
+
+- Which stage is slow
+- Whether total executor CPU vCores are fully utilized in the event timeline on the **Stages** tab
+- If you're using Spark SQL, what the physical plan looks like on the **SQL** tab
+- Whether the DAG is too long in one stage
+- Task metrics (input size, shuffle write size, GC time) on the **Stages** tab
+
+For more information, see [Monitoring your Spark applications](https://spark.apache.org/docs/latest/monitoring.html).
+
+## Step 4: Optimize your spark application
+
+There are many optimizations that can help you overcome these challenges, such as caching and allowing for data skew.
In each of the following articles, you can find information on different aspects of Spark optimization.
In each of the following articles, you can find information on different aspects
* [Optimize memory usage for Apache Spark](optimize-memory-usage.md) * [Optimize HDInsight cluster configuration for Apache Spark](optimize-cluster-configuration.md)
+### Optimize Spark SQL partitions
+
+- `spark.sql.shuffle.partitions` is 200 by default. You can adjust it based on your workload when shuffling data for joins or aggregations.
+- `spark.sql.files.maxPartitionBytes` is 1 GB by default in HDInsight. It's the maximum number of bytes to pack into a single partition when reading files, and is effective only for file-based sources such as Parquet, JSON, and ORC.
+- Adaptive query execution (AQE) is available in Spark 3.0. See [Adaptive Query Execution](https://spark.apache.org/docs/latest/sql-performance-tuning.html#adaptive-query-execution). A configuration sketch follows this list.
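As an illustrative sketch (assuming a Spark 3.x session; the values are examples, not recommendations), these settings might be tuned like this in PySpark:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tune-spark-sql")
    # Match the number of shuffle partitions to your join/aggregation data volume.
    .config("spark.sql.shuffle.partitions", "400")
    # Cap the bytes packed into one input partition for file-based sources (Parquet/JSON/ORC).
    .config("spark.sql.files.maxPartitionBytes", str(256 * 1024 * 1024))
    # Turn on adaptive query execution (Spark 3.0+).
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)
```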
++ ## Next steps * [Debug Apache Spark jobs running on Azure HDInsight](apache-spark-job-debugging.md)
hdinsight Optimize Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/optimize-cluster-configuration.md
For more information on using Ambari to configure executors, see [Apache Spark s
Monitor query performance for outliers or other performance issues by looking at the timeline view, SQL graph, job statistics, and so forth. For information on debugging Spark jobs using YARN and the Spark History server, see [Debug Apache Spark jobs running on Azure HDInsight](apache-spark-job-debugging.md). For tips on using YARN Timeline Server, see [Access Apache Hadoop YARN application logs](../hdinsight-hadoop-access-yarn-app-logs-linux.md).
+## Tasks slower on some executors or nodes
+ Sometimes one or a few of the executors are slower than the others, and tasks take much longer to execute. This slowness frequently happens on larger clusters (> 30 nodes). In this case, divide the work into a larger number of tasks so the scheduler can compensate for slow tasks. For example, have at least twice as many tasks as the number of executor cores in the application. You can also enable speculative execution of tasks with `conf: spark.speculation = true`. ## Next steps
iot-edge How To Authenticate Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-authenticate-downstream-device.md
For X.509 self-signed authentication, sometimes referred to as thumbprint authen
* C: [iotedge_downstream_device_sample.c](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iotedge_downstream_device_sample) * Node.js: [simple_sample_device_x509.js](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/simple_sample_device_x509.js) * Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples/send-event-x509)
- * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/send_message_x509.py)
+ * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/send_message_x509.py)
You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/iot/hub/device-identity) command to create a new IoT device with X.509 self-signed authentication and assigns a parent device:
This section is based on the IoT Hub X.509 certificate tutorial series. See [Und
* C: [iotedge_downstream_device_sample.c](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iotedge_downstream_device_sample) * Node.js: [simple_sample_device_x509.js](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/simple_sample_device_x509.js) * Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples/send-event-x509)
- * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/send_message_x509.py)
+ * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/send_message_x509.py)
You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/iot/hub/device-identity) command to create a new IoT device with X.509 CA signed authentication and assigns a parent device:
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
This section introduces a sample application to connect an Azure IoT Java device
This section introduces a sample application to connect an Azure IoT Python device client to an IoT Edge gateway.
-1. Get the sample for **send_message_downstream** from the [Azure IoT device SDK for Python samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-edge-scenarios).
+1. Get the sample for **send_message_downstream** from the [Azure IoT device SDK for Python samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-edge-scenarios).
2. Set the `IOTHUB_DEVICE_CONNECTION_STRING` and `IOTEDGE_ROOT_CA_CERT_PATH` environment variables as specified in the Python script comments. 3. Refer to the SDK documentation for any additional instructions on how to run the sample on your device.
iot-edge Troubleshoot Iot Edge For Linux On Windows Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-common-errors.md
The following section addresses the common errors when installing the EFLOW MSI
- [Azure IoT Edge for Linux on Windows prerequisites](https://aka.ms/AzEFLOW-Requirements) - [Nested virtualization for Azure IoT Edge for Linux on Windows](./nested-virtualization.md) - [Networking configuration for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md)-- [Azure IoT Edge for Linux on Windows virtual switch creation](/how-to-create-virtual-switch.md)
+- [Azure IoT Edge for Linux on Windows virtual switch creation](./how-to-create-virtual-switch.md)
- [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md) > [!div class="mx-tdCol2BreakAll"]
The following section addresses the common errors related to EFLOW networking an
- [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md) - [Networking configuration for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md)-- [Azure IoT Edge for Linux on Windows virtual switch creation](/how-to-create-virtual-switch.md)
+- [Azure IoT Edge for Linux on Windows virtual switch creation](./how-to-create-virtual-switch.md)
> [!div class="mx-tdCol2BreakAll"] > | Error | Error Description | Solution |
iot-edge Troubleshoot Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-networking.md
For more information about EFLOW VM firewall, see [IoT Edge for Linux on Windows
sudo iptables -L ```
-To add a firewall rule to the EFLOW VM, you can use the [EFLOW Util - Firewall Rules](https://github.com/Azure/iotedge-eflow/tree/eflow-usbip/eflow-util/firewall-rules) sample PowerShell cmdlets. Also, you can achieve the same rules creation by following these steps:
+To add a firewall rule to the EFLOW VM, you can use the [EFLOW Util - Firewall Rules](https://github.com/Azure/iotedge-eflow/tree/main/eflow-util#get-eflowvmfirewallrules) sample PowerShell cmdlets. Also, you can achieve the same rules creation by following these steps:
1. Start an elevated _PowerShell_ session using **Run as Administrator**. 1. Connect to the EFLOW virtual machine
iot-edge Troubleshoot Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows.md
Second, if the GPU is correctly assigned, but still not being able to use it ins
The first step before checking *WSSDAgent* logs is to check if the VM was created and is running. 1. Start an elevated _PowerShell_ session using **Run as Administrator**.
-1. On Windows Client SKUs, check the [HCS](/virtualization/community/team-blog/2017/20170127-introducing-the-host-compute-service-hcs.md) virtual machines.
+1. On Windows Client SKUs, check the [HCS](/virtualization/community/team-blog/2017/20170127-introducing-the-host-compute-service-hcs) virtual machines.
```powershell hcsdiag list ```
The first step before checking *WSSDAgent* logs is to check if the VM was create
VM, SavedAsTemplate, 88D7AA8C-0D1F-4786-B4CB-62EFF1DECD92, CmService ```
-1. On Windows Server SKUs, check the [VMMS](/windows-server/virtualization/hyper-v/hyper-v-technology-overview.md) virtual machines
+1. On Windows Server SKUs, check the [VMMS](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) virtual machines
```powershell hcsdiag list ```
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md
Key Vault allows for creation of multiple issuer objects with different issuer p
Issuer objects are created in the vault and can only be used with KV certificates in the same vault.
+>[!Note]
+>Publicly trusted certificates are sent to Certificate Authorities (CAs) and Certificate Transparency (CT) logs outside of the Azure boundary during enrollment and will be covered by the GDPR policies of those entities.
+ ## Certificate contacts Certificate contacts contain contact information to send notifications triggered by certificate lifetime events. The contacts information is shared by all the certificates in the key vault. A notification is sent to all the specified contacts for an event for any certificate in the key vault. For information on how to set Certificate contact, see [here](overview-renew-certificate.md#steps-to-set-certificate-notifications)
key-vault About Keys Secrets Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/about-keys-secrets-certificates.md
Objects stored in Key Vault are versioned whenever a new instance of an object i
Objects in Key Vault can be addressed by specifying a version or by omitting version for operations on current version of the object. For example, given a Key with the name `MasterKey`, performing operations without specifying a version causes the system to use the latest available version. Performing operations with the version-specific identifier causes the system to use that specific version of the object.
+> [!NOTE]
+> The values you provide for Azure resource or object IDs may be copied globally for the purpose of running the service. The value provided should not include personally identifiable or sensitive information.
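For illustration only (this example is ours, assuming the `azure-keyvault-keys` Python client and a placeholder vault URL), this is how addressing the latest version versus a specific version of an object looks in code:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://<your-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

latest = client.get_key("MasterKey")                       # no version: the latest version is used
pinned = client.get_key("MasterKey", version="<version>")  # version-specific identifier
print(latest.properties.version, pinned.properties.version)
```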
+ ### Vault-name and Object-name Objects are uniquely identified within Key Vault using a URL. No two objects in the system have the same URL, regardless of geo-location. The complete URL to an object is called the Object Identifier. The URL consists of a prefix that identifies the Key Vault, object type, user provided Object Name, and an Object Version. The Object Name is case-insensitive and immutable. Identifiers that don't include the Object Version are referred to as Base Identifiers.
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/best-practices.md
Managed HSM is a cloud service that safeguards encryption keys. As these keys ar
- [Soft Delete](soft-delete-overview.md) is on by default. You can choose a retention period between 7 and 90 days. - Turn on purge protection to prevent immediate permanent deletion of HSM or keys. When purge protection is on HSM or keys will remain in deleted state until the retention days have passed.
-## Generate and import keys from on-premise HSM
+## Generate and import keys from on-premises HSM
> [!NOTE] > Keys created or imported into Managed HSM are not exportable. -- To ensure long term portability and key durability, generate keys in your on-premise HSM and [import them to Managed HSM](hsm-protected-keys-byok.md). You will have a copy of your key securely stored in your on-premises HSM for future use.
+- To ensure long term portability and key durability, generate keys in your on-premises HSM and [import them to Managed HSM](hsm-protected-keys-byok.md). You will have a copy of your key securely stored in your on-premises HSM for future use.
## Next steps
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
# Import HSM-protected keys to Managed HSM (BYOK)
- Azure Key Vault Managed HSM supports importing keys generated in your on-premise hardware security module (HSM); the keys will never leave the HSM protection boundary. This scenario often is referred to as *bring your own key* (BYOK). Managed HSM uses the Marvell LiquidSecurity HSM adapters (FIPS 140-2 Level 3 validated) to protect your keys.
+ Azure Key Vault Managed HSM supports importing keys generated in your on-premises hardware security module (HSM); the keys will never leave the HSM protection boundary. This scenario often is referred to as *bring your own key* (BYOK). Managed HSM uses the Marvell LiquidSecurity HSM adapters (FIPS 140-2 Level 3 validated) to protect your keys.
Use the information in this article to help you plan for, generate, and transfer your own HSM-protected keys to use with Managed HSM.
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/overview.md
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
### Import keys from your on-premises HSMs -- Generate HSM-protected keys in your on-premise HSM and import them securely into Managed HSM.
+- Generate HSM-protected keys in your on-premises HSM and import them securely into Managed HSM.
## Next steps - See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to create and activate a managed HSM
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/16/2022 Last updated : 08/19/2022
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
After you deploy a logic app to the Azure portal from Visual Studio Code, you ca
![Screenshot that shows the Azure portal search box with the "logic apps" search text.](./media/create-single-tenant-workflows-visual-studio-code/portal-find-logic-app-resource.png)
-1. On the **Logic App (Standard)** pane, select the logic app that you deployed from Visual Studio Code.
+1. On the **Logic apps** pane, select the logic app that you deployed from Visual Studio Code.
![Screenshot that shows the Azure portal and the Logic App (Standard) resources deployed in Azure.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-resources-pane.png)
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Automated machine learning, also referred to as automated ML or AutoML, is the p
Traditional machine learning model development is resource-intensive, requiring significant domain knowledge and time to produce and compare dozens of models. With automated machine learning, you'll accelerate the time it takes to get production-ready ML models with great ease and efficiency.
-<a name="parity"></a>
## Ways to use AutoML in Azure Machine Learning
-Azure Machine Learning offers the following two experiences for working with automated ML. See the following sections to understand [feature availability in each experience](#parity).
+Azure Machine Learning offers the following two experiences for working with automated ML. See the following sections to understand feature availability in each experience.
-* For code-experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Use automated machine learning to predict taxi fares](tutorial-auto-train-models.md).
+* For code-experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md)
* For limited/no-code experience customers, Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with these tutorials: * [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
See examples of classification and automated machine learning in these Python no
Similar to classification, regression tasks are also a common supervised learning task. Azure Machine Learning offers [featurizations specifically for these tasks](how-to-configure-auto-features.md#featurization).
-Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning](tutorial-auto-train-models.md).
+Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like gas mileage, safety rating, and so on. Learn more and see an example of [regression with automated machine learning](v1/how-to-auto-train-models-v1.md).
See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
Using **Azure Machine Learning**, you can design and run your automated ML train
1. **Identify the ML problem** to be solved: classification, forecasting, regression or computer vision (preview). 1. **Choose whether you want to use the Python SDK or the studio web experience**:
- Learn about the parity between the [Python SDK and studio web experience](#parity).
+ Learn about the parity between the [Python SDK and studio web experience](#ways-to-use-automl-in-azure-machine-learning).
* For limited or no code experience, try the Azure Machine Learning studio web experience at [https://ml.azure.com](https://ml.azure.com/) * For Python developers, check out the [Azure Machine Learning Python SDK](how-to-configure-auto-train.md)
There are multiple resources to get you up and running with AutoML.
### Tutorials/ how-tos Tutorials are end-to-end introductory examples of AutoML scenarios.
-+ **For a code first experience**, follow the [Tutorial: Train a regression model with AutoML and Python](tutorial-auto-train-models.md).
+++ **For a code first experience**, follow the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md) + **For a low or no-code experience**, see the [Tutorial: Train a classification model with no-code AutoML in Azure Machine Learning studio](tutorial-first-experiment-automated-ml.md).
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Just like `uri_file` and `uri_folder`, you can create a data asset with `mltable
- [Install and set up the CLI (v2)](how-to-configure-cli.md#install-and-set-up-the-cli-v2) - [Create datastores](how-to-datastore.md#create-datastores)-- [Create data assets](how-to-create-register-data-assets.md#create-data-assets)
+- [Create data assets](how-to-create-data-assets.md#create-data-assets)
- [Read and write data in a job](how-to-read-write-data-v2.md#read-and-write-data-in-a-job) - [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
The following techniques are additional options to handle imbalanced data **outs
See examples and learn how to build models using automated machine learning:
-+ Follow the [Tutorial: Automatically train a regression model with Azure Machine Learning](tutorial-auto-train-models.md)
++ Follow the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md). + Configure the settings for automatic training experiment: + In Azure Machine Learning studio, [use these steps](how-to-use-automated-ml-for-ml-models.md).
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Define the iterations, hyperparameter settings, featurization, and other setting
* [What is automated machine learning?](concept-automated-ml.md) * [Tutorial: Create your first classification model with automated machine learning](tutorial-first-experiment-automated-ml.md)
-* [Tutorial: Use automated machine learning to predict taxi fares](tutorial-auto-train-models.md)
* [Examples: Jupyter Notebook examples for automated machine learning](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning) * [How to: Configure automated ML experiments in Python](how-to-configure-auto-train.md) * [How to: Autotrain a time-series forecast model](how-to-auto-train-forecast.md)
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The hosts in the following tables are owned by Microsoft, and provide services r
| Compute cluster/instance | graph.chinacloudapi.cn | TCP | 443 | | Compute instance | \*.instances.azureml.cn | TCP | 443 | | Compute instance | \*.instances.azureml.ms | TCP | 443, 8787, 18881 |
-| Microsoft storage access | \*blob.core.chinacloudapi.cn | TCP | 443 |
+| Microsoft storage access | \*.blob.core.chinacloudapi.cn | TCP | 443 |
| Microsoft storage access | \*.table.core.chinacloudapi.cn | TCP | 443 | | Microsoft storage access | \*.queue.core.chinacloudapi.cn | TCP | 443 | | Your storage account | \<storage\>.file.core.chinacloudapi.cn | TCP | 443, 445 |
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
For this article you need,
* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
-* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
[!INCLUDE [automl-sdk-version](../../includes/machine-learning-automl-sdk-version.md)]
See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples
## Next steps
-* Learn more about [how and where to deploy a model](./v1/how-to-deploy-and-where.md).
+* Learn more about [How to deploy an AutoML model to an online endpoint](how-to-deploy-automl-endpoint.md).
* Learn about [Interpretability: model explanations in automated machine learning (preview)](how-to-machine-learning-interpretability-automl.md).
-* Follow the [Tutorial: Train regression models](tutorial-auto-train-models.md) for an end to end example for creating experiments with automated machine learning.
+
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-features.md
automl_settings = {
* Learn more about [how and where to deploy a model](./v1/how-to-deploy-and-where.md).
-* Learn more about [how to train a regression model by using automated machine learning](tutorial-auto-train-models.md) or [how to train by using automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote).
+* Learn more about [how to train a regression model by using automated machine learning](./v1/how-to-auto-train-models-v1.md) or [how to train by using automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote).
machine-learning How To Configure Cross Validation Data Splits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cross-validation-data-splits.md
For this article you need,
* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
-* Familiarity with setting up an automated machine learning experiment with the Azure Machine Learning SDK. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the fundamental automated machine learning experiment design patterns.
+* Familiarity with setting up an automated machine learning experiment with the Azure Machine Learning SDK. Follow the [tutorial](tutorial-auto-train-image-models.md) or [how-to](how-to-configure-auto-train.md) to see the fundamental automated machine learning experiment design patterns.
* An understanding of train/validation data splits and cross-validation as machine learning concepts. For a high-level explanation,
Passing the `test_data` or `test_size` parameters into the `AutoMLConfig`, autom
## Next steps * [Prevent imbalanced data and overfitting](concept-manage-ml-pitfalls.md).
-* [Tutorial: Use automated machine learning to predict taxi fares - Split data section](tutorial-auto-train-models.md#split-the-data-into-train-and-test-sets).
+ * How to [Auto-train a time-series forecast model](how-to-auto-train-forecast.md).
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
+
+ Title: Create Data Assets
+
+description: Learn how to create Azure Machine Learning data assets.
++++++++ Last updated : 05/24/2022+
+# Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
+++
+# Create data assets
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
+> * [v1](./v1/how-to-create-register-datasets.md)
+> * [v2 (current version)](how-to-create-data-assets.md)
++
+In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create data assets from datastores, Azure Storage, public URLs, and local files.
+
+The benefits of creating data assets are:
+
+* You can **share and reuse data** with other members of the team such that they do not need to remember file locations.
+
+* You can **seamlessly access data** during model training (on any supported compute type) without worrying about connection strings or data paths.
+
+* You can **version** the data.
++
+## Prerequisites
+
+To create and work with data assets, you need:
+
+* An Azure subscription. If you don't have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
+
+* The [Azure Machine Learning CLI/SDK installed](how-to-configure-cli.md) and MLTable package installed (`pip install mltable`).
+
+## Supported paths
+
+When you create a data asset in Azure Machine Learning, you'll need to specify a `path` parameter that points to its location. Below is a table that shows the different data locations supported in Azure Machine Learning and examples for the `path` parameter:
++
+|Location | Examples |
+|||
+|A path on your local computer | `./home/username/data/my_data` |
+|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
+|A path on Azure Storage | `https://<account_name>.blob.core.windows.net/<container_name>/path` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
+|A path on a datastore | `azureml://datastores/<data_store_name>/paths/<path>` |
++
+> [!NOTE]
+> When you create a data asset from a local path, it will be automatically uploaded to the default Azure Machine Learning datastore in the cloud.
+
+## Create a `uri_folder` data asset
+
+The following example shows how to create a *folder* as a data asset:
+
+# [CLI](#tab/CLI)
+
+Create a `YAML` file (`<file-name>.yml`):
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+
+# Supported paths include:
+# local: ./<path>
+# blob: https://<account_name>.blob.core.windows.net/<container_name>/<path>
+# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>
+type: uri_folder
+name: <name_of_data>
+description: <description goes here>
+path: <path>
+```
+
+Next, create the data asset using the CLI:
+
+```azurecli
+az ml data create -f <file-name>.yml
+```
+
+# [Python-SDK](#tab/Python-SDK)
+
+You can create a data asset in Azure Machine Learning using the following Python Code:
+
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+# Supported paths include:
+# local: './<path>'
+# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
+# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
+# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
+
+my_path = '<path>'
+
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.URI_FOLDER,
+ description="<description>",
+ name="<name>",
+ version='<version>'
+)
+
+ml_client.data.create_or_update(my_data)
+```
+++
+## Create a `uri_file` data asset
+
+The following example shows how to create a *specific file* as a data asset:
+
+# [CLI](#tab/CLI)
+
+A sample `YAML` file (`<file-name>.yml`) for data in a local path looks like this:
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+
+# Supported paths include:
+# local: ./<path>/<file>
+# blob: https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>
+# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>/<file>
+
+type: uri_file
+name: <name>
+description: <description>
+path: <uri>
+```
+
+```azurecli
+az ml data create -f <file-name>.yml
+```
+
+# [Python-SDK](#tab/Python-SDK)
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+# Supported paths include:
+# local: './<path>/<file>'
+# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>'
+# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>'
+# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>/<file>'
+my_path = '<path>'
+
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.URI_FILE,
+ description="<description>",
+ name="<name>",
+ version="<version>"
+)
+
+ml_client.data.create_or_update(my_data)
+```
++
+
+## Create a `mltable` data asset
+
+`mltable` is a way to abstract the schema definition for tabular data to make it easier to share data assets (an overview can be found in [MLTable](concept-data.md#mltable)).
+
+In this section, we show you how to create a data asset when the type is an `mltable`.
+
+### The MLTable file
+
+The MLTable file provides the specification of the data's schema so that the `mltable` *engine* can materialize the data into an in-memory object (Pandas/Dask/Spark). An *example* MLTable file is provided below:
+
+```yml
+type: mltable
+
+paths:
+ - pattern: ./*.txt
+transformations:
+ - read_delimited:
+ delimiter: ,
+ encoding: ascii
+ header: all_files_same_headers
+```
+> [!IMPORTANT]
+> We recommend co-locating the MLTable file with the underlying data in storage. For example:
+>
+> ```Text
+> ├── my_data
+> │ ├── MLTable
+> │ ├── file_1.txt
+> .
+> .
+> .
+> │ ├── file_n.txt
+> ```
+> Co-locating the MLTable with the data ensures a **self-contained *artifact*** where all that is needed is stored in that one folder (`my_data`), regardless of whether that folder is stored on your local drive, in your cloud store, or on a public http server. You should **not** specify *absolute paths* in the MLTable file.
+
+In your Python code, you materialize the MLTable artifact into a Pandas dataframe using:
+
+```python
+import mltable
+
+tbl = mltable.load(uri="./my_data")
+df = tbl.to_pandas_dataframe()
+```
+
+The `uri` parameter in `mltable.load()` should be a valid path to a local or cloud **folder** which contains a valid MLTable file.
+
+> [!NOTE]
+> You will need the `mltable` library installed in your Environment (`pip install mltable`).
+
+The following example shows how to create an `mltable` data asset. The `path` can be any of the supported path formats outlined above.
++
+# [CLI](#tab/CLI)
+
+Create a `YAML` file (`<file-name>.yml`):
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+
+# path must point to the **folder** containing the MLTable artifact (MLTable file + data)
+# Supported paths include:
+# local: ./<path>
+# blob: https://<account_name>.blob.core.windows.net/<container_name>/<path>
+# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>
+
+type: mltable
+name: <name_of_data>
+description: <description goes here>
+path: <path>
+```
+
+> [!NOTE]
+> The path points to the **folder** containing the MLTable artifact.
+
+Next, create the data asset using the CLI:
+
+```azurecli
+az ml data create -f <file-name>.yml
+```
+
+# [Python-SDK](#tab/Python-SDK)
+
+You can create a data asset in Azure Machine Learning using the following Python Code:
+
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+# my_path must point to the folder containing the MLTable artifact (MLTable file + data)
+# Supported paths include:
+# local: './<path>'
+# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
+# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
+# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
+
+my_path = '<path>'
+
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.MLTABLE,
+ description="<description>",
+ name="<name>",
+ version='<version>'
+)
+
+ml_client.data.create_or_update(my_data)
+```
+
+> [!NOTE]
+> The path points to the **folder** containing the MLTable artifact.
+++
+## Next steps
+
+- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
ml_client.create_or_update(store)
## Next steps - [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)-- [Create data assets](how-to-create-register-data-assets.md#create-data-assets)
+- [Create data assets](how-to-create-data-assets.md#create-data-assets)
- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
The following diagram illustrates that you can generate the code for automated M
* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
-* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-image-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
* Automated ML code generation is only available for experiments run on remote Azure ML compute targets. Code generation isn't supported for local runs.
However, in order to load that model in a notebook in your custom local Conda en
## Next steps
-* Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
+* Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
* See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
To maximize your uptime, plan ahead to maintain business continuity and prepare
Microsoft strives to ensure that Azure services are always available. However, unplanned service outages may occur. We recommend having a disaster recovery plan in place for handling regional service outages. In this article, you'll learn how to: * Plan for a multi-regional deployment of Azure Machine Learning and associated resources.
+* Maximize chances to recover logs, notebooks, Docker images, and other metadata.
* Design for high availability of your solution. * Initiate a failover to another region. > [!NOTE]
-> Azure Machine Learning itself does not provide automatic failover or disaster recovery.
+> Azure Machine Learning itself does not provide automatic failover or disaster recovery. Backup and restore of workspace metadata such as run history is unavailable.
In case you have accidentally deleted your workspace or corresponding components, this article also provides you with currently supported recovery options. ## Understand Azure services for Azure Machine Learning
-Azure Machine Learning depends on multiple Azure services and has several layers. Some of these services are provisioned in your (customer) subscription. You're responsible for the high-availability configuration of these services. Other services are created in a Microsoft subscription and managed by Microsoft.
+Azure Machine Learning depends on multiple Azure services. Some of these services are provisioned in your subscription. You're responsible for the high-availability configuration of these services. Other services are created in a Microsoft subscription and are managed by Microsoft.
Azure services include: * **Azure Machine Learning infrastructure**: A Microsoft-managed environment for the Azure Machine Learning workspace.
-* **Associated resources**: Resources provisioned in your subscription during Azure Machine Learning workspace creation. These resources include Azure Storage, Azure Key Vault, Azure Container Registry, and Application Insights. You're responsible for configuring high-availability settings for these resources.
+* **Associated resources**: Resources provisioned in your subscription during Azure Machine Learning workspace creation. These resources include Azure Storage, Azure Key Vault, Azure Container Registry, and Application Insights.
* Default storage has data such as model, training log data, and dataset. * Key Vault has credentials for Azure Storage, Container Registry, and data stores. * Container Registry has a Docker image for training and inferencing environments.
By keeping your data storage isolated from the default storage the workspace use
* Attach the same storage instances as datastores to the primary and secondary workspaces. * Make use of geo-replication for data storage accounts and maximize your uptime.
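For example, a minimal sketch (our illustration, assuming the v2 `azure-ai-ml` SDK; the storage, workspace, and subscription names are placeholders) that attaches the same blob container as a datastore in both the primary and the secondary workspace:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AzureBlobDatastore
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# One datastore definition, pointing at a geo-replicated storage account.
store = AzureBlobDatastore(
    name="shared_training_data",
    account_name="<storage-account-name>",
    container_name="<container-name>",
)

# Register the same definition in the primary and the secondary workspace.
for workspace_name in ["<primary-workspace>", "<secondary-workspace>"]:
    ml_client = MLClient(credential, "<subscription-id>", "<resource-group>", workspace_name)
    ml_client.create_or_update(store)
```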
-### Manage machine learning artifacts as code
+### Manage machine learning assets as code
+
+> [!NOTE]
+> Backup and restore of workspace metadata such as run history, models, and environments is unavailable. Specifying assets and configurations as code by using YAML specs will help you re-create assets across workspaces in case of a disaster.
Jobs in Azure Machine Learning are defined by a job specification. This specification includes dependencies on input artifacts that are managed on a workspace-instance level, including environments, datasets, and compute. For multi-region job submission and deployments, we recommend the following practices:
If you accidentally deleted your workspace it is currently not possible to recov
## Next steps
-To deploy Azure Machine Learning with associated resources with your high-availability settings, use an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/).
+To learn about repeatable infrastructure deployments with Azure Machine Learning, use an [Azure Resource Manager template](https://docs.microsoft.com/azure/machine-learning/tutorial-create-secure-workspace-template).
machine-learning How To Machine Learning Interpretability Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability-automl.md
In this article, you learn how to:
## Prerequisites - Interpretability features. Run `pip install azureml-interpret` to get the necessary package.-- Knowledge of building automated ML experiments. For more information on how to use the Azure Machine Learning SDK, complete this [regression model tutorial](tutorial-auto-train-models.md) or see how to [configure automated ML experiments](how-to-configure-auto-train.md).
+- Knowledge of building automated ML experiments. For more information on how to use the Azure Machine Learning SDK, complete this [object detection model tutorial](tutorial-auto-train-image-models.md) or see how to [configure automated ML experiments](how-to-configure-auto-train.md).
## Interpretability during training for the best model
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
# How to migrate from v1 to v2
-Azure Machine Learning's v2 REST APIs, Azure CLI extension, and Python SDK (preview) introduce consistency and a set of new features to accelerate the production machine learning lifecycle. In this article, we'll overview migrating from v1 to v2 with recommendations to help you decide on v1, v2, or both.
+Azure Machine Learning's v2 REST APIs, Azure CLI extension, and Python SDK (preview) introduce consistency and a set of new features to accelerate the production machine learning lifecycle. This article provides an overview of migrating from v1 to v2 with recommendations to help you decide on v1, v2, or both.
## Prerequisites
In v2 interfaces via REST API, CLI, and Python SDK (preview) are available. The
|API|Notes| |-|-|
-|REST|Fewest dependencies and overhead. Use for building applications on Azure ML as a platform, directly in programming languages without a SDK provided, or per personal preference.|
+|REST|Fewest dependencies and overhead. Use for building applications on Azure ML as a platform, directly in programming languages without an SDK provided, or per personal preference.|
|CLI|Recommended for automation with CI/CD or per personal preference. Allows quick iteration with YAML files and straightforward separation between Azure ML and ML model code.| |Python SDK|Recommended for complicated scripting (for example, programmatically generating large pipeline jobs) or per personal preference. Allows quick iteration with YAML files or development solely in Python.|
You can continue using your existing v1 model deployments. For new model deploym
|-|-|-| |Local|ACI|Quick test of model deployment locally; not for production.| |Managed online endpoint|ACI, AKS|Enterprise-grade managed model deployment infrastructure with near real-time responses and massive scaling for production.|
-|Managed batch endpoint|ParallelRunStep in a pipeline for batch scoring|Enterprise-grade managed model deployment infrastructure with massively-parallel batch processing for production.|
+|Managed batch endpoint|ParallelRunStep in a pipeline for batch scoring|Enterprise-grade managed model deployment infrastructure with massively parallel batch processing for production.|
|Azure Kubernetes Service (AKS)|ACI, AKS|Manage your own AKS cluster(s) for model deployment, giving flexibility and granular control at the cost of IT overhead.|
-|Azure Arc Kubernetes|N/A|Manage your own Kubernetes cluster(s) in other clouds or on-prem, giving flexibility and granular control at the cost of IT overhead.|
+|Azure Arc Kubernetes|N/A|Manage your own Kubernetes cluster(s) in other clouds or on-premises, giving flexibility and granular control at the cost of IT overhead.|
### Jobs (experiments, runs, pipelines in v1)
Data assets in v2 (or File Datasets in v1) are *references* to files in object s
For details on data in v2, see the [data concept article](concept-data.md).
-We recommend migrating the code for [creating data assets](how-to-create-register-data-assets.md) to v2.
+We recommend migrating the code for [creating data assets](how-to-create-data-assets.md) to v2.
### Model
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
If you have over 100 automated ML experiments, this may cause new automated ML e
## Next steps
-+ Learn more about [how to train a regression model with Automated machine learning](tutorial-auto-train-models.md) or [how to train using Automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote).
++ Learn more about [how to train a regression model with Automated machine learning](./v1/how-to-auto-train-models-v1.md) or [how to train using Automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote). + Learn more about [how and where to deploy a model](./v1/how-to-deploy-and-where.md).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
1. Select **+ New automated ML job** and populate the form.
-1. Select a data asset from your storage container, or create a new data asset. Data asset can be created from local files, web urls, datastores, or Azure open datasets. Learn more about [data asset creation](how-to-create-register-data-assets.md).
+1. Select a data asset from your storage container, or create a new data asset. Data assets can be created from local files, web URLs, datastores, or Azure Open Datasets. Learn more about [data asset creation](how-to-create-data-assets.md).
>[!Important] > Requirements for training data:
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md
Try these tutorials:
- [Train and deploy an image classification model with MNIST](tutorial-train-deploy-notebook.md) -- [Prepare data and use automated machine learning to train a regression model with the NYC taxi data set](tutorial-auto-train-models.md)
+- [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md)
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
You won't write any code in this tutorial, you'll use the studio interface to pe
Also try automated machine learning for these other model types: * For a no-code example of a classification model, see [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
-* For a code first example of a regression model, see the [Tutorial: Use automated machine learning to predict taxi fares](tutorial-auto-train-models.md).
+* For a code first example of an object detection model, see the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
## Prerequisites
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
You won't write any code in this tutorial, you'll use the studio interface to pe
Also try automated machine learning for these other model types: * For a no-code example of forecasting, see [Tutorial: Demand forecasting & AutoML](tutorial-automated-ml-forecast.md).
-* For a code first example of a regression model, see the [Tutorial: Regression model with AutoML](tutorial-auto-train-models.md).
+* For a code first example of an object detection model, see the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
## Prerequisites
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-notebook.md
Use these steps to delete your Azure Machine Learning workspace and all compute
+ Learn how to [authenticate to the deployed model](how-to-authenticate-online-endpoint.md). + [Make predictions on large quantities of data](./tutorial-pipeline-batch-scoring-classification.md) asynchronously. + Monitor your Azure Machine Learning models with [Application Insights](./v1/how-to-enable-app-insights.md).
-+ Try out the [automatic algorithm selection](tutorial-auto-train-models.md) tutorial.
+
machine-learning Concept Automated Ml V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml-v1.md
Traditional machine learning model development is resource-intensive, requiring
Azure Machine Learning offers the following two experiences for working with automated ML. See the following sections to understand [feature availability in each experience (v1)](#parity).
-* For code-experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Use automated machine learning to predict taxi fares (v1)](../tutorial-auto-train-models.md).
+* For code-experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Use automated machine learning to predict taxi fares (v1)](how-to-auto-train-models-v1.md).
* For limited/no-code experience customers, Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with these tutorials: * [Tutorial: Create a classification model with automated ML in Azure Machine Learning](../tutorial-first-experiment-automated-ml.md).
See examples of classification and automated machine learning in these Python no
Similar to classification, regression tasks are also a common supervised learning task.
-Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning (v1)](../tutorial-auto-train-models.md).
+Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning (v1)](how-to-auto-train-models-v1.md).
See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
There are multiple resources to get you up and running with AutoML.
### Tutorials/ how-tos Tutorials are end-to-end introductory examples of AutoML scenarios.
-+ **For a code first experience**, follow the [Tutorial: Train a regression model with AutoML and Python (v1)](../tutorial-auto-train-models.md).
++ **For a code first experience**, follow the [Tutorial: Train a regression model with AutoML and Python (v1)](how-to-auto-train-models-v1.md). + **For a low or no-code experience**, see the [Tutorial: Train a classification model with no-code AutoML in Azure Machine Learning studio](../tutorial-first-experiment-automated-ml.md).
machine-learning How To Auto Train Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md
+
+ Title: 'AutoML-train regression model (SDK v1)'
+
+description: Train a regression model to predict NYC taxi fares with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML SDK (v1).
+++++++ Last updated : 10/21/2021+++
+# Train a regression model with AutoML and Python (SDK v1)
++
+In this article, you learn how to train a regression model with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML. This regression model predicts NYC taxi fares.
+
+This process accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
+
+![Flow diagram](./media/how-to-auto-train-models/flow2.png)
+
+You'll write code using the Python SDK in this article. You'll learn the following tasks:
+
+> [!div class="checklist"]
+> * Download, transform, and clean data using Azure Open Datasets
+> * Train an automated machine learning regression model
+> * Calculate model accuracy
+
+For no-code AutoML, try the following tutorials:
+
+* [Tutorial: Train no-code classification models](../tutorial-first-experiment-automated-ml.md)
+
+* [Tutorial: Forecast demand with automated machine learning](../tutorial-automated-ml-forecast.md)
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version](https://azure.microsoft.com/free/) of Azure Machine Learning today.
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace or a compute instance.
+* After you complete the quickstart:
+ 1. Select **Notebooks** in the studio.
+ 1. Select the **Samples** tab.
+ 1. Open the *tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb* notebook.
+ 1. To run each cell in the tutorial, select **Clone this notebook**
+
+This article is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](../how-to-configure-environment.md#local).
+To get the required packages,
+* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
+* Run `pip install azureml-opendatasets azureml-widgets` to get the required packages.
+
+## Download and prepare data
+
+Import the necessary packages. The Open Datasets package contains a class representing each data source (`NycTlcGreen` for example) to easily filter date parameters before downloading.
+
+```python
+from azureml.opendatasets import NycTlcGreen
+import pandas as pd
+from datetime import datetime
+from dateutil.relativedelta import relativedelta
+```
+
+Begin by creating a dataframe to hold the taxi data. When working in a non-Spark environment, Open Datasets only allows downloading one month of data at a time with certain classes to avoid `MemoryError` with large datasets.
+
+To download taxi data, iteratively fetch one month at a time, and before appending it to `green_taxi_df` randomly sample 2,000 records from each month to avoid bloating the dataframe. Then preview the data.
++
+```python
+green_taxi_df = pd.DataFrame([])
+start = datetime.strptime("1/1/2015","%m/%d/%Y")
+end = datetime.strptime("1/31/2015","%m/%d/%Y")
+
+for sample_month in range(12):
+ temp_df_green = NycTlcGreen(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
+ .to_pandas_dataframe()
+ green_taxi_df = green_taxi_df.append(temp_df_green.sample(2000))
+
+green_taxi_df.head(10)
+```
+
+| |vendorID| lpepPickupDatetime| lpepDropoffDatetime| passengerCount| tripDistance| puLocationId| doLocationId| pickupLongitude| pickupLatitude| dropoffLongitude |...| paymentType |fareAmount |extra| mtaTax| improvementSurcharge| tipAmount| tollsAmount| ehailFee| totalAmount| tripType|
+|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
+|131969|2|2015-01-11 05:34:44|2015-01-11 05:45:03|3|4.84|None|None|-73.88|40.84|-73.94|...|2|15.00|0.50|0.50|0.3|0.00|0.00|nan|16.30|
+|1129817|2|2015-01-20 16:26:29|2015-01-20 16:30:26|1|0.69|None|None|-73.96|40.81|-73.96|...|2|4.50|1.00|0.50|0.3|0.00|0.00|nan|6.30|
+|1278620|2|2015-01-01 05:58:10|2015-01-01 06:00:55|1|0.45|None|None|-73.92|40.76|-73.91|...|2|4.00|0.00|0.50|0.3|0.00|0.00|nan|4.80|
+|348430|2|2015-01-17 02:20:50|2015-01-17 02:41:38|1|0.00|None|None|-73.81|40.70|-73.82|...|2|12.50|0.50|0.50|0.3|0.00|0.00|nan|13.80|
+|1269627|1|2015-01-01 05:04:10|2015-01-01 05:06:23|1|0.50|None|None|-73.92|40.76|-73.92|...|2|4.00|0.50|0.50|0|0.00|0.00|nan|5.00|
+|811755|1|2015-01-04 19:57:51|2015-01-04 20:05:45|2|1.10|None|None|-73.96|40.72|-73.95|...|2|6.50|0.50|0.50|0.3|0.00|0.00|nan|7.80|
+|737281|1|2015-01-03 12:27:31|2015-01-03 12:33:52|1|0.90|None|None|-73.88|40.76|-73.87|...|2|6.00|0.00|0.50|0.3|0.00|0.00|nan|6.80|
+|113951|1|2015-01-09 23:25:51|2015-01-09 23:39:52|1|3.30|None|None|-73.96|40.72|-73.91|...|2|12.50|0.50|0.50|0.3|0.00|0.00|nan|13.80|
+|150436|2|2015-01-11 17:15:14|2015-01-11 17:22:57|1|1.19|None|None|-73.94|40.71|-73.95|...|1|7.00|0.00|0.50|0.3|1.75|0.00|nan|9.55|
+|432136|2|2015-01-22 23:16:33|2015-01-22 23:20:13|1|0.65|None|None|-73.94|40.71|-73.94|...|2|5.00|0.50|0.50|0.3|0.00|0.00|nan|6.30|
+
+Remove some of the columns that you won't need for training or additional feature building. Automated machine learning will automatically handle time-based features such as **lpepPickupDatetime**.
+
+```python
+columns_to_remove = ["lpepDropoffDatetime", "puLocationId", "doLocationId", "extra", "mtaTax",
+ "improvementSurcharge", "tollsAmount", "ehailFee", "tripType", "rateCodeID",
+ "storeAndFwdFlag", "paymentType", "fareAmount", "tipAmount"
+ ]
+for col in columns_to_remove:
+ green_taxi_df.pop(col)
+
+green_taxi_df.head(5)
+```
+
+### Cleanse data
+
+Run the `describe()` function on the new dataframe to see summary statistics for each field.
+
+```python
+green_taxi_df.describe()
+```
+
+| |vendorID|passengerCount|tripDistance|pickupLongitude|pickupLatitude|dropoffLongitude|dropoffLatitude|totalAmount|month_num|day_of_month|day_of_week|hour_of_day|
+|-|-|-|-|-|-|-|-|-|-|-|-|-|
+|count|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|
+|mean|1.78|1.37|2.87|-73.83|40.69|-73.84|40.70|14.75|6.50|15.13|
+|std|0.41|1.04|2.93|2.76|1.52|2.61|1.44|12.08|3.45|8.45|
+|min|1.00|0.00|0.00|-74.66|0.00|-74.66|0.00|-300.00|1.00|1.00|
+|25%|2.00|1.00|1.06|-73.96|40.70|-73.97|40.70|7.80|3.75|8.00|
+|50%|2.00|1.00|1.90|-73.94|40.75|-73.94|40.75|11.30|6.50|15.00|
+|75%|2.00|1.00|3.60|-73.92|40.80|-73.91|40.79|17.80|9.25|22.00|
+|max|2.00|9.00|97.57|0.00|41.93|0.00|41.94|450.00|12.00|30.00|
++
+From the summary statistics, you see that there are several fields that have outliers or values that will reduce model accuracy. First filter the lat/long fields to be within the bounds of the Manhattan area. This will filter out longer taxi trips or trips that are outliers with respect to their relationship with other features.
+
+Additionally filter the `tripDistance` field to be greater than zero but less than 31 miles (the haversine distance between the two lat/long pairs). This eliminates long outlier trips that have inconsistent trip cost.
+
+Lastly, the `totalAmount` field has negative values for the taxi fares, which don't make sense in the context of our model, and the `passengerCount` field has bad data with the minimum values being zero.
+
+Filter out these anomalies using query functions, and then remove the last few columns unnecessary for training.
++
+```python
+final_df = green_taxi_df.query("pickupLatitude>=40.53 and pickupLatitude<=40.88")
+final_df = final_df.query("pickupLongitude>=-74.09 and pickupLongitude<=-73.72")
+final_df = final_df.query("tripDistance>=0.25 and tripDistance<31")
+final_df = final_df.query("passengerCount>0 and totalAmount>0")
+
+columns_to_remove_for_training = ["pickupLongitude", "pickupLatitude", "dropoffLongitude", "dropoffLatitude"]
+for col in columns_to_remove_for_training:
+ final_df.pop(col)
+```
+
+Call `describe()` again on the data to ensure cleansing worked as expected. You now have a prepared and cleansed set of taxi data to use for machine learning model training.
+
+```python
+final_df.describe()
+```
+
+## Configure workspace
+
+Create a workspace object from the existing workspace. A [Workspace](/python/api/azureml-core/azureml.core.workspace.workspace) is a class that accepts your Azure subscription and resource information. It also creates a cloud resource to monitor and track your model runs. `Workspace.from_config()` reads the file **config.json** and loads the authentication details into an object named `ws`. `ws` is used throughout the rest of the code in this article.
+
+```python
+from azureml.core.workspace import Workspace
+ws = Workspace.from_config()
+```
+
+## Split the data into train and test sets
+
+Split the data into training and test sets by using the `train_test_split` function in the `scikit-learn` library. This function segregates the data into a training data set (`x_train`) for model training and a test data set (`x_test`) for evaluating the finished model.
+
+The `test_size` parameter determines the percentage of data to allocate to testing. The `random_state` parameter sets a seed to the random generator, so that your train-test splits are deterministic.
+
+```python
+from sklearn.model_selection import train_test_split
+
+x_train, x_test = train_test_split(final_df, test_size=0.2, random_state=223)
+```
+
+The purpose of this step is to have data points to test the finished model that haven't been used to train the model, in order to measure true accuracy.
+
+In other words, a well-trained model should be able to accurately make predictions from data it hasn't already seen. You now have data prepared for auto-training a machine learning model.
+
+## Automatically train a model
+
+To automatically train a model, take the following steps:
+1. Define settings for the experiment run. Attach your training data to the configuration, and modify settings that control the training process.
+1. Submit the experiment for model tuning. After submitting the experiment, the process iterates through different machine learning algorithms and hyperparameter settings, adhering to your defined constraints. It chooses the best-fit model by optimizing an accuracy metric.
+
+### Define training settings
+
+Define the experiment parameter and model settings for training. View the full list of [settings](how-to-configure-auto-train-v1.md). Submitting the experiment with these default settings will take approximately 5-20 min, but if you want a shorter run time, reduce the `experiment_timeout_hours` parameter.
+
+|Property| Value in this article |Description|
+|-|-||
+|**iteration_timeout_minutes**|10|Time limit in minutes for each iteration. Increase this value for larger datasets that need more time for each iteration.|
+|**experiment_timeout_hours**|0.3|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
+|**enable_early_stopping**|True|Flag to enable early termination if the score is not improving in the short term.|
+|**primary_metric**| spearman_correlation | Metric that you want to optimize. The best-fit model will be chosen based on this metric.|
+|**featurization**| auto | By using **auto**, the experiment can preprocess the input data (handling missing data, converting text to numeric, etc.)|
+|**verbosity**| logging.INFO | Controls the level of logging.|
+|**n_cross_validations**|5|Number of cross-validation splits to perform when validation data is not specified.|
+
+```python
+import logging
+
+automl_settings = {
+ "iteration_timeout_minutes": 10,
+ "experiment_timeout_hours": 0.3,
+ "enable_early_stopping": True,
+ "primary_metric": 'spearman_correlation',
+ "featurization": 'auto',
+ "verbosity": logging.INFO,
+ "n_cross_validations": 5
+}
+```
+
+Use your defined training settings as a `**kwargs` parameter to an `AutoMLConfig` object. Additionally, specify your training data and the type of model, which is `regression` in this case.
+
+```python
+from azureml.train.automl import AutoMLConfig
+
+automl_config = AutoMLConfig(task='regression',
+ debug_log='automated_ml_errors.log',
+ training_data=x_train,
+ label_column_name="totalAmount",
+ **automl_settings)
+```
+
+> [!NOTE]
+> Automated machine learning pre-processing steps (feature normalization, handling missing data,
+> converting text to numeric, etc.) become part of the underlying model. When using the model for
+> predictions, the same pre-processing steps applied during training are applied to
+> your input data automatically.
+
+### Train the automatic regression model
+
+Create an experiment object in your workspace. An experiment acts as a container for your individual jobs. Pass the defined `automl_config` object to the experiment, and set the output to `True` to view progress during the job.
+
+After starting the experiment, the output shown updates live as the experiment runs. For each iteration, you see the model type, the run duration, and the training accuracy. The field `BEST` tracks the best running training score based on your metric type.
+
+```python
+from azureml.core.experiment import Experiment
+experiment = Experiment(ws, "Tutorial-NYCTaxi")
+local_run = experiment.submit(automl_config, show_output=True)
+```
+
+```output
+Running on local machine
+Parent Run ID: AutoML_1766cdf7-56cf-4b28-a340-c4aeee15b12b
+Current status: DatasetFeaturization. Beginning to featurize the dataset.
+Current status: DatasetEvaluation. Gathering dataset statistics.
+Current status: FeaturesGeneration. Generating features for the dataset.
+Current status: DatasetFeaturizationCompleted. Completed featurizing the dataset.
+Current status: DatasetCrossValidationSplit. Generating individually featurized CV splits.
+Current status: ModelSelection. Beginning model selection.
+
+****************************************************************************************************
+ITERATION: The iteration being evaluated.
+PIPELINE: A summary description of the pipeline being evaluated.
+DURATION: Time taken for the current iteration.
+METRIC: The result of computing score on the fitted pipeline.
+BEST: The best observed score thus far.
+****************************************************************************************************
+
+ ITERATION PIPELINE DURATION METRIC BEST
+ 0 StandardScalerWrapper RandomForest 0:00:16 0.8746 0.8746
+ 1 MinMaxScaler RandomForest 0:00:15 0.9468 0.9468
+ 2 StandardScalerWrapper ExtremeRandomTrees 0:00:09 0.9303 0.9468
+ 3 StandardScalerWrapper LightGBM 0:00:10 0.9424 0.9468
+ 4 RobustScaler DecisionTree 0:00:09 0.9449 0.9468
+ 5 StandardScalerWrapper LassoLars 0:00:09 0.9440 0.9468
+ 6 StandardScalerWrapper LightGBM 0:00:10 0.9282 0.9468
+ 7 StandardScalerWrapper RandomForest 0:00:12 0.8946 0.9468
+ 8 StandardScalerWrapper LassoLars 0:00:16 0.9439 0.9468
+ 9 MinMaxScaler ExtremeRandomTrees 0:00:35 0.9199 0.9468
+ 10 RobustScaler ExtremeRandomTrees 0:00:19 0.9411 0.9468
+ 11 StandardScalerWrapper ExtremeRandomTrees 0:00:13 0.9077 0.9468
+ 12 StandardScalerWrapper LassoLars 0:00:15 0.9433 0.9468
+ 13 MinMaxScaler ExtremeRandomTrees 0:00:14 0.9186 0.9468
+ 14 RobustScaler RandomForest 0:00:10 0.8810 0.9468
+ 15 StandardScalerWrapper LassoLars 0:00:55 0.9433 0.9468
+ 16 StandardScalerWrapper ExtremeRandomTrees 0:00:13 0.9026 0.9468
+ 17 StandardScalerWrapper RandomForest 0:00:13 0.9140 0.9468
+ 18 VotingEnsemble 0:00:23 0.9471 0.9471
+ 19 StackEnsemble 0:00:27 0.9463 0.9471
+```
+
+## Explore the results
+
+Explore the results of automatic training with a [Jupyter widget](/python/api/azureml-widgets/azureml.widgets). The widget allows you to see a graph and table of all individual job iterations, along with training accuracy metrics and metadata. Additionally, you can filter on different accuracy metrics than your primary metric with the dropdown selector.
+
+```python
+from azureml.widgets import RunDetails
+RunDetails(local_run).show()
+```
+
+![Jupyter widget run details](./media/how-to-auto-train-models/automl-dash-output.png)
+![Jupyter widget plot](./media/how-to-auto-train-models/automl-chart-output.png)
+
+### Retrieve the best model
+
+Select the best model from your iterations. The `get_output` function returns the best run and the fitted model for the last fit invocation. By using the overloads on `get_output`, you can retrieve the best run and fitted model for any logged metric or a particular iteration.
+
+```python
+best_run, fitted_model = local_run.get_output()
+print(best_run)
+print(fitted_model)
+```
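+
+The `get_output` overloads can also target a specific logged metric or a particular iteration. The following is a minimal sketch (not part of the original notebook), assuming the `local_run` object from the training step above and that `spearman_correlation` was logged for the run:
+
+```python
+# Sketch: retrieve the best run as judged by a specific logged metric (metric name is an assumption).
+metric_best_run, metric_best_model = local_run.get_output(metric="spearman_correlation")
+
+# Sketch: retrieve the run and fitted model for a particular iteration, for example iteration 3 from the output above.
+third_run, third_model = local_run.get_output(iteration=3)
+
+print(metric_best_run.id)
+print(third_run.id)
+```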
+
+### Test the best model accuracy
+
+Use the best model to run predictions on the test data set to predict taxi fares. The function `predict` uses the best model and predicts the values of y, **trip cost**, from the `x_test` data set. Print the first 10 predicted cost values from `y_predict`.
+
+```python
+y_test = x_test.pop("totalAmount")
+
+y_predict = fitted_model.predict(x_test)
+print(y_predict[:10])
+```
+
+Calculate the `root mean squared error` of the results. Convert the `y_test` dataframe to a list to compare to the predicted values. The function `mean_squared_error` takes two arrays of values and calculates the average squared error between them. Taking the square root of the result gives an error in the same units as the y variable, **cost**. It indicates roughly how far the taxi fare predictions are from the actual fares.
+
+```python
+from sklearn.metrics import mean_squared_error
+from math import sqrt
+
+y_actual = y_test.values.flatten().tolist()
+rmse = sqrt(mean_squared_error(y_actual, y_predict))
+rmse
+```
+
+Run the following code to calculate mean absolute percent error (MAPE) by using the full `y_actual` and `y_predict` data sets. This metric calculates an absolute difference between each predicted and actual value and sums all the differences. Then it expresses that sum as a percent of the total of the actual values.
+
+```python
+sum_actuals = sum_errors = 0
+
+for actual_val, predict_val in zip(y_actual, y_predict):
+ abs_error = actual_val - predict_val
+ if abs_error < 0:
+ abs_error = abs_error * -1
+
+ sum_errors = sum_errors + abs_error
+ sum_actuals = sum_actuals + actual_val
+
+mean_abs_percent_error = sum_errors / sum_actuals
+print("Model MAPE:")
+print(mean_abs_percent_error)
+print()
+print("Model Accuracy:")
+print(1 - mean_abs_percent_error)
+```
+
+```output
+Model MAPE:
+0.14353867606052823
+
+Model Accuracy:
+0.8564613239394718
+```
++
+From the two prediction accuracy metrics, you see that the model is fairly good at predicting taxi fares from the data set's features, typically within +/- $4.00, and with approximately 15% error.
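+
+If you prefer a vectorized calculation, the same MAPE value can be computed without the explicit loop. This is a sketch (not part of the original notebook) that reuses the `y_actual` and `y_predict` arrays from above and should produce the same numbers:
+
+```python
+import numpy as np
+
+# Vectorized equivalent of the loop above: sum of absolute errors divided by sum of actuals.
+y_actual_arr = np.array(y_actual)
+mape = np.abs(y_actual_arr - y_predict).sum() / y_actual_arr.sum()
+print("Model MAPE:", mape)
+print("Model Accuracy:", 1 - mape)
+```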
+
+The traditional machine learning model development process is highly resource-intensive, and requires significant domain knowledge and time investment to run and compare the results of dozens of models. Using automated machine learning is a great way to rapidly test many different models for your scenario.
+
+## Clean up resources
+
+Do not complete this section if you plan on running other Azure Machine Learning tutorials.
+
+### Stop the compute instance
++
+### Delete everything
+
+If you don't plan to use the resources you created, delete them, so you don't incur any charges.
+
+1. In the Azure portal, select **Resource groups** on the far left.
+1. From the list, select the resource group you created.
+1. Select **Delete resource group**.
+1. Enter the resource group name. Then select **Delete**.
+
+You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**.
+
+## Next steps
+
+In this automated machine learning article, you did the following tasks:
+
+> [!div class="checklist"]
+> * Configured a workspace and prepared data for an experiment.
+> * Trained by using an automated regression model locally with custom parameters.
+> * Explored and reviewed training results.
+
+[Set up AutoML to train computer vision models with Python (v1)](how-to-auto-train-image-models-v1.md)
machine-learning How To Auto Train Nlp Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models-v1.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](../
[!INCLUDE [automl-sdk-version](../../../includes/machine-learning-automl-sdk-version.md)]
-* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](../tutorial-auto-train-models.md) or [how-to](../how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](how-to-auto-train-models-v1.md) or [how-to](../how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
## Select your NLP task
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
In this guide, learn how to set up an automated machine learning, AutoML, training run with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro) using Azure Machine Learning automated ML. Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
-For an end to end example, see [Tutorial: AutoML- train regression model](../tutorial-auto-train-models.md).
+For an end to end example, see [Tutorial: AutoML- train regression model](how-to-auto-train-models-v1.md).
If you prefer a no-code experience, you can also [Set up no-code AutoML training in the Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md).
For general information on how model explanations and feature importance can be
+ Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
-+ Learn more about [how to train a regression model with Automated machine learning](../tutorial-auto-train-models.md).
++ Learn more about [how to train a regression model with Automated machine learning](how-to-auto-train-models-v1.md). + [Troubleshoot automated ML experiments](../how-to-troubleshoot-auto-ml.md).
machine-learning How To Convert Ml Experiment To Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-convert-ml-experiment-to-production.md
+
+ Title: Convert notebook code into Python scripts
+
+description: Turn your machine learning experimental notebooks into production-ready code using the MLOpsPython code template. You can then test, deploy, and automate that code.
+++++ Last updated : 10/21/2021+++
+# Convert ML experiments to production Python code
++
+In this tutorial, you learn how to convert Jupyter notebooks into Python scripts to make them friendly for testing and automation, using the MLOpsPython code template and Azure Machine Learning. Typically, this process is used to take experimentation / training code from a Jupyter notebook and convert it into Python scripts. Those scripts can then be used for testing and CI/CD automation in your production environment.
+
+A machine learning project requires experimentation where hypotheses are tested with agile tools like Jupyter Notebook using real datasets. Once the model is ready for production, the model code should be placed in a production code repository. In some cases, the model code must be converted to Python scripts to be placed in the production code repository. This tutorial covers a recommended approach for exporting experimentation code to Python scripts.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Clean nonessential code
+> * Refactor Jupyter Notebook code into functions
+> * Create Python scripts for related tasks
+> * Create unit tests
+
+## Prerequisites
+
+- Generate the [MLOpsPython template](https://github.com/microsoft/MLOpsPython/generate)
+and use the `experimentation/Diabetes Ridge Regression Training.ipynb` and `experimentation/Diabetes Ridge Regression Scoring.ipynb` notebooks. These notebooks are used as an example of converting from experimentation to production. You can find these notebooks at [https://github.com/microsoft/MLOpsPython/tree/master/experimentation](https://github.com/microsoft/MLOpsPython/tree/master/experimentation).
+- Install `nbconvert`. Follow only the installation instructions under section __Installing nbconvert__ on the [Installation](https://nbconvert.readthedocs.io/en/latest/install.html) page.
+
+## Remove all nonessential code
+
+Some code written during experimentation is only intended for exploratory purposes. Therefore, the first step to convert experimental code into production code is to remove this nonessential code. Removing nonessential code will also make the code more maintainable. In this section, you'll remove code from the `experimentation/Diabetes Ridge Regression Training.ipynb` notebook. The statements printing the shape of `X` and `y` and the cell calling `features.describe` are just for data exploration and can be removed. After removing nonessential code, `experimentation/Diabetes Ridge Regression Training.ipynb` should look like the following code without markdown:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import Ridge
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+import joblib
+import pandas as pd
+
+sample_data = load_diabetes()
+
+df = pd.DataFrame(
+ data=sample_data.data,
+ columns=sample_data.feature_names)
+df['Y'] = sample_data.target
+
+X = df.drop('Y', axis=1).values
+y = df['Y'].values
+
+X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=0)
+data = {"train": {"X": X_train, "y": y_train},
+ "test": {"X": X_test, "y": y_test}}
+
+args = {
+ "alpha": 0.5
+}
+
+reg_model = Ridge(**args)
+reg_model.fit(data["train"]["X"], data["train"]["y"])
+
+preds = reg_model.predict(data["test"]["X"])
+mse = mean_squared_error(preds, y_test)
+metrics = {"mse": mse}
+print(metrics)
+
+model_name = "sklearn_regression_model.pkl"
+joblib.dump(value=reg_model, filename=model_name)
+```
+
+## Refactor code into functions
+
+Second, the Jupyter code needs to be refactored into functions. Refactoring code into functions makes unit testing easier and makes the code more maintainable. In this section, you'll refactor:
+
+- The Diabetes Ridge Regression Training notebook(`experimentation/Diabetes Ridge Regression Training.ipynb`)
+- The Diabetes Ridge Regression Scoring notebook(`experimentation/Diabetes Ridge Regression Scoring.ipynb`)
+
+### Refactor Diabetes Ridge Regression Training notebook into functions
+
+In `experimentation/Diabetes Ridge Regression Training.ipynb`, complete the following steps:
+
+1. Create a function called `split_data` to split the data frame into test and train data. The function should take the dataframe `df` as a parameter, and return a dictionary containing the keys `train` and `test`.
+
+ Move the code under the *Split Data into Training and Validation Sets* heading into the `split_data` function and modify it to return the `data` object.
+
+1. Create a function called `train_model`, which takes the parameters `data` and `args` and returns a trained model.
+
+ Move the code under the heading *Training Model on Training Set* into the `train_model` function and modify it to return the `reg_model` object. Remove the `args` dictionary, the values will come from the `args` parameter.
+
+1. Create a function called `get_model_metrics`, which takes parameters `reg_model` and `data`, and evaluates the model then returns a dictionary of metrics for the trained model.
+
+ Move the code under the *Validate Model on Validation Set* heading into the `get_model_metrics` function and modify it to return the `metrics` object.
+
+The three functions should be as follows:
+
+```python
+# Split the dataframe into test and train data
+def split_data(df):
+ X = df.drop('Y', axis=1).values
+ y = df['Y'].values
+
+ X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=0)
+ data = {"train": {"X": X_train, "y": y_train},
+ "test": {"X": X_test, "y": y_test}}
+ return data
++
+# Train the model, return the model
+def train_model(data, args):
+ reg_model = Ridge(**args)
+ reg_model.fit(data["train"]["X"], data["train"]["y"])
+ return reg_model
++
+# Evaluate the metrics for the model
+def get_model_metrics(reg_model, data):
+ preds = reg_model.predict(data["test"]["X"])
+ mse = mean_squared_error(preds, data["test"]["y"])
+ metrics = {"mse": mse}
+ return metrics
+```
+
+Still in `experimentation/Diabetes Ridge Regression Training.ipynb`, complete the following steps:
+
+1. Create a new function called `main`, which takes no parameters and returns nothing.
+1. Move the code under the "Load Data" heading into the `main` function.
+1. Add invocations for the newly written functions into the `main` function:
+ ```python
+ # Split Data into Training and Validation Sets
+ data = split_data(df)
+ ```
+
+ ```python
+ # Train Model on Training Set
+ args = {
+ "alpha": 0.5
+ }
+ reg = train_model(data, args)
+ ```
+
+ ```python
+ # Validate Model on Validation Set
+ metrics = get_model_metrics(reg, data)
+ ```
+1. Move the code under the "Save Model" heading into the `main` function.
+
+The `main` function should look like the following code:
+
+```python
+def main():
+ # Load Data
+ sample_data = load_diabetes()
+
+ df = pd.DataFrame(
+ data=sample_data.data,
+ columns=sample_data.feature_names)
+ df['Y'] = sample_data.target
+
+ # Split Data into Training and Validation Sets
+ data = split_data(df)
+
+ # Train Model on Training Set
+ args = {
+ "alpha": 0.5
+ }
+ reg = train_model(data, args)
+
+ # Validate Model on Validation Set
+ metrics = get_model_metrics(reg, data)
+
+ # Save Model
+ model_name = "sklearn_regression_model.pkl"
+
+ joblib.dump(value=reg, filename=model_name)
+```
+
+At this stage, there should be no code remaining in the notebook that isn't in a function, other than import statements in the first cell.
+
+Add a statement that calls the `main` function.
+
+```python
+main()
+```
+
+After refactoring, `experimentation/Diabetes Ridge Regression Training.ipynb` should look like the following code without the markdown:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import Ridge
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+import pandas as pd
+import joblib
++
+# Split the dataframe into test and train data
+def split_data(df):
+ X = df.drop('Y', axis=1).values
+ y = df['Y'].values
+
+ X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=0)
+ data = {"train": {"X": X_train, "y": y_train},
+ "test": {"X": X_test, "y": y_test}}
+ return data
++
+# Train the model, return the model
+def train_model(data, args):
+ reg_model = Ridge(**args)
+ reg_model.fit(data["train"]["X"], data["train"]["y"])
+ return reg_model
++
+# Evaluate the metrics for the model
+def get_model_metrics(reg_model, data):
+ preds = reg_model.predict(data["test"]["X"])
+ mse = mean_squared_error(preds, data["test"]["y"])
+ metrics = {"mse": mse}
+ return metrics
++
+def main():
+ # Load Data
+ sample_data = load_diabetes()
+
+ df = pd.DataFrame(
+ data=sample_data.data,
+ columns=sample_data.feature_names)
+ df['Y'] = sample_data.target
+
+ # Split Data into Training and Validation Sets
+ data = split_data(df)
+
+ # Train Model on Training Set
+ args = {
+ "alpha": 0.5
+ }
+ reg = train_model(data, args)
+
+ # Validate Model on Validation Set
+ metrics = get_model_metrics(reg, data)
+
+ # Save Model
+ model_name = "sklearn_regression_model.pkl"
+
+ joblib.dump(value=reg, filename=model_name)
+
+main()
+```
+
+### Refactor Diabetes Ridge Regression Scoring notebook into functions
+
+In `experimentation/Diabetes Ridge Regression Scoring.ipynb`, complete the following steps:
+
+1. Create a new function called `init`, which takes no parameters and returns nothing.
+1. Copy the code under the "Load Model" heading into the `init` function.
+
+The `init` function should look like the following code:
+
+```python
+def init():
+ model_path = Model.get_model_path(
+ model_name="sklearn_regression_model.pkl")
+ model = joblib.load(model_path)
+```
+
+Once the `init` function has been created, replace all the code under the heading "Load Model" with a single call to `init` as follows:
+
+```python
+init()
+```
+
+In `experimentation/Diabetes Ridge Regression Scoring.ipynb`, complete the following steps:
+
+1. Create a new function called `run`, which takes `raw_data` and `request_headers` as parameters and returns a dictionary of results as follows:
+
+ ```python
+ {"result": result.tolist()}
+ ```
+
+1. Copy the code under the "Prepare Data" and "Score Data" headings into the `run` function.
+
+ The `run` function should look like the following code (Remember to remove the statements that set the variables `raw_data` and `request_headers`, which will be used later when the `run` function is called):
+
+ ```python
+ def run(raw_data, request_headers):
+ data = json.loads(raw_data)["data"]
+ data = numpy.array(data)
+ result = model.predict(data)
+
+ return {"result": result.tolist()}
+ ```
+
+Once the `run` function has been created, replace all the code under the "Prepare Data" and "Score Data" headings with the following code:
+
+```python
+raw_data = '{"data":[[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}'
+request_header = {}
+prediction = run(raw_data, request_header)
+print("Test result: ", prediction)
+```
+
+The previous code sets variables `raw_data` and `request_header`, calls the `run` function with `raw_data` and `request_header`, and prints the predictions.
+
+After refactoring, `experimentation/Diabetes Ridge Regression Scoring.ipynb` should look like the following code without the markdown:
+
+```python
+import json
+import numpy
+from azureml.core.model import Model
+import joblib
+
+def init():
+ model_path = Model.get_model_path(
+ model_name="sklearn_regression_model.pkl")
+ model = joblib.load(model_path)
+
+def run(raw_data, request_headers):
+ data = json.loads(raw_data)["data"]
+ data = numpy.array(data)
+ result = model.predict(data)
+
+ return {"result": result.tolist()}
+
+init()
+test_row = '{"data":[[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}'
+request_header = {}
+prediction = run(test_row, request_header)
+print("Test result: ", prediction)
+```
+
+## Combine related functions in Python files
+
+Third, related functions need to be merged into Python files to better help code reuse. In this section, you'll be creating Python files for the following notebooks:
+
+- The Diabetes Ridge Regression Training notebook(`experimentation/Diabetes Ridge Regression Training.ipynb`)
+- The Diabetes Ridge Regression Scoring notebook(`experimentation/Diabetes Ridge Regression Scoring.ipynb`)
+
+### Create Python file for the Diabetes Ridge Regression Training notebook
+
+Convert your notebook to an executable script by running the following statement in a command prompt, which uses the `nbconvert` package and the path of `experimentation/Diabetes Ridge Regression Training.ipynb`:
+
+```
+jupyter nbconvert "Diabetes Ridge Regression Training.ipynb" --to script --output train
+```
+
+Once the notebook has been converted to `train.py`, remove any unwanted comments. Replace the call to `main()` at the end of the file with a conditional invocation like the following code:
+
+```python
+if __name__ == '__main__':
+ main()
+```
+
+Your `train.py` file should look like the following code:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import Ridge
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+import pandas as pd
+import joblib
++
+# Split the dataframe into test and train data
+def split_data(df):
+ X = df.drop('Y', axis=1).values
+ y = df['Y'].values
+
+ X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=0)
+ data = {"train": {"X": X_train, "y": y_train},
+ "test": {"X": X_test, "y": y_test}}
+ return data
++
+# Train the model, return the model
+def train_model(data, args):
+ reg_model = Ridge(**args)
+ reg_model.fit(data["train"]["X"], data["train"]["y"])
+ return reg_model
++
+# Evaluate the metrics for the model
+def get_model_metrics(reg_model, data):
+ preds = reg_model.predict(data["test"]["X"])
+ mse = mean_squared_error(preds, data["test"]["y"])
+ metrics = {"mse": mse}
+ return metrics
++
+def main():
+ # Load Data
+ sample_data = load_diabetes()
+
+ df = pd.DataFrame(
+ data=sample_data.data,
+ columns=sample_data.feature_names)
+ df['Y'] = sample_data.target
+
+ # Split Data into Training and Validation Sets
+ data = split_data(df)
+
+ # Train Model on Training Set
+ args = {
+ "alpha": 0.5
+ }
+ reg = train_model(data, args)
+
+ # Validate Model on Validation Set
+ metrics = get_model_metrics(reg, data)
+
+ # Save Model
+ model_name = "sklearn_regression_model.pkl"
+
+ joblib.dump(value=reg, filename=model_name)
+
+if __name__ == '__main__':
+ main()
+```
+
+`train.py` can now be invoked from a terminal by running `python train.py`.
+The functions from `train.py` can also be called from other files.
+
+The `train_aml.py` file found in the `diabetes_regression/training` directory in the MLOpsPython repository calls the functions defined in `train.py` in the context of an Azure Machine Learning experiment job. The functions can also be called in unit tests, covered later in this guide.
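+
+For illustration, here's a minimal sketch of how a separate script might reuse those functions. The file name is hypothetical, and the sketch assumes `train.py` is importable from the current working directory:
+
+```python
+# reuse_train.py (hypothetical): call the functions defined in train.py from another file.
+import pandas as pd
+from sklearn.datasets import load_diabetes
+
+from train import split_data, train_model, get_model_metrics
+
+# Load the same diabetes sample data used by train.py.
+sample_data = load_diabetes()
+df = pd.DataFrame(data=sample_data.data, columns=sample_data.feature_names)
+df['Y'] = sample_data.target
+
+# Reuse the refactored functions: split, train, and evaluate.
+data = split_data(df)
+model = train_model(data, {"alpha": 0.5})
+print(get_model_metrics(model, data))
+```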
+
+### Create Python file for the Diabetes Ridge Regression Scoring notebook
+
+Convert your notebook to an executable script by running the following statement in a command prompt, which uses the `nbconvert` package and the path of `experimentation/Diabetes Ridge Regression Scoring.ipynb`:
+
+```
+jupyter nbconvert "Diabetes Ridge Regression Scoring.ipynb" --to script --output score
+```
+
+Once the notebook has been converted to `score.py`, remove any unwanted comments. Your `score.py` file should look like the following code:
+
+```python
+import json
+import numpy
+from azureml.core.model import Model
+import joblib
+
+def init():
+ model_path = Model.get_model_path(
+ model_name="sklearn_regression_model.pkl")
+ model = joblib.load(model_path)
+
+def run(raw_data, request_headers):
+ data = json.loads(raw_data)["data"]
+ data = numpy.array(data)
+ result = model.predict(data)
+
+ return {"result": result.tolist()}
+
+init()
+test_row = '{"data":[[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}'
+request_header = {}
+prediction = run(test_row, request_header)
+print("Test result: ", prediction)
+```
+
+The `model` variable needs to be global so that it's visible throughout the script. Add the following statement at the beginning of the `init` function:
+
+```python
+global model
+```
+
+After adding the previous statement, the `init` function should look like the following code:
+
+```python
+def init():
+ global model
+
+ # load the model from file into a global object
+ model_path = Model.get_model_path(
+ model_name="sklearn_regression_model.pkl")
+ model = joblib.load(model_path)
+```
+
+## Create unit tests for each Python file
+
+Fourth, create unit tests for your Python functions. Unit tests protect code against functional regressions and make it easier to maintain. In this section, you'll be creating unit tests for the functions in `train.py`.
+
+`train.py` contains multiple functions, but we'll only create a single unit test for the `train_model` function using the Pytest framework in this tutorial. Pytest isn't the only Python unit testing framework, but it's one of the most commonly used. For more information, visit [Pytest](https://pytest.org).
+
+A unit test usually contains three main actions:
+
+- Arrange object - creating and setting up necessary objects
+- Act on an object
+- Assert what is expected
+
+The unit test will call `train_model` with some hard-coded data and arguments, and validate that `train_model` acted as expected by using the resulting trained model to make a prediction and comparing that prediction to an expected value.
+
+```python
+import numpy as np
+from code.training.train import train_model
++
+def test_train_model():
+ # Arrange
+ X_train = np.array([1, 2, 3, 4, 5, 6]).reshape(-1, 1)
+ y_train = np.array([10, 9, 8, 8, 6, 5])
+ data = {"train": {"X": X_train, "y": y_train}}
+
+ # Act
+ reg_model = train_model(data, {"alpha": 1.2})
+
+ # Assert
+ preds = reg_model.predict([[1], [2]])
+ np.testing.assert_almost_equal(preds, [9.93939393939394, 9.03030303030303])
+```
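+
+If you want broader coverage, a similar test could target `get_model_metrics`. The following is a sketch of an additional test that isn't part of the MLOpsPython template; it assumes the same import path as the test above:
+
+```python
+import numpy as np
+from code.training.train import train_model, get_model_metrics
+
+
+def test_get_model_metrics():
+    # Arrange: data with an (almost) perfect linear relationship
+    X = np.array([1, 2, 3, 4]).reshape(-1, 1)
+    y = np.array([2.0, 4.0, 6.0, 8.0])
+    data = {"train": {"X": X, "y": y}, "test": {"X": X, "y": y}}
+    reg_model = train_model(data, {"alpha": 0.01})
+
+    # Act
+    metrics = get_model_metrics(reg_model, data)
+
+    # Assert: evaluating on the training points should give a very small mean squared error
+    assert metrics["mse"] < 0.01
+```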
+
+## Next steps
+
+Now that you understand how to convert from an experiment to production code, see the following links for more information and next steps:
+
++ [MLOpsPython](https://github.com/microsoft/MLOpsPython/blob/master/docs/custom_model.md): Build a CI/CD pipeline to train, evaluate and deploy your own model using Azure Pipelines and Azure Machine Learning
++ [Monitor Azure ML experiment jobs and metrics](how-to-log-view-metrics.md)
++ [Monitor and collect data from ML web service endpoints](how-to-enable-app-insights.md)
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
Last updated 05/11/2022
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](how-to-create-register-datasets.md)
-> * [v2 (current version)](../how-to-create-register-data-assets.md)
+> * [v2 (current version)](../how-to-create-data-assets.md)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
## <a id="hybrid-real-time-migration"></a>Use hybrid cluster for real-time migration
-The above instructions provide guidance for configuring a hybrid cluster. However, this is also a great way of achieving a seamless zero-downtime migration. If you have an on-premise or other Cassandra environment that you want to decommission with zero downtime, in favour of running your workload in Azure Managed Instance for Apache Cassandra, the following steps must be completed in this order:
+The above instructions provide guidance for configuring a hybrid cluster. However, this is also a great way of achieving a seamless zero-downtime migration. If you have an on-premises or other Cassandra environment that you want to decommission with zero downtime, in favour of running your workload in Azure Managed Instance for Apache Cassandra, the following steps must be completed in this order:
1. Configure hybrid cluster - follow the instructions above. 1. Temporarily disable automatic repairs in Azure Managed Instance for Apache Cassandra for the duration of the migration:
marketplace Dynamics 365 Business Central Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-availability.md
description: Configure Dynamics 365 Business Central offer availability on Micro
--++ Last updated 11/24/2021
marketplace Dynamics 365 Business Central Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-listing.md
description: Configure Dynamics 365 Business Central offer listing details on Mi
--++ Last updated 03/15/2022
marketplace Dynamics 365 Business Central Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-setup.md
description: Create a Dynamics 365 Business Central offer on Microsoft AppSource
--++ Last updated 07/20/2022
marketplace Dynamics 365 Business Central Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-properties.md
description: Configure Dynamics 365 Business Central offer properties on Microso
--++ Last updated 11/24/2021
marketplace Dynamics 365 Business Central Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-supplemental-content.md
description: Set up Dynamics 365 Business Central offer supplemental content on
--++ Last updated 12/04/2021
marketplace Dynamics 365 Business Central Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-technical-configuration.md
description: Set up Dynamics 365 Business Central offer technical configuration
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-availability.md
description: Configure Dynamics 365 apps on Dataverse and Power Apps offer avail
--++ Last updated 05/25/2022
marketplace Dynamics 365 Customer Engage Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-listing.md
description: Configure Dynamics 365 apps on Dataverse and Power App offer listin
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
description: Create a Dynamics 365 apps on Dataverse and Power Apps offer on Mic
--++ Last updated 07/18/2022
marketplace Dynamics 365 Customer Engage Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-plans.md
description: Configure Dynamics 365 apps on Dataverse and Power Apps offer plans
--++ Last updated 05/25/2022
marketplace Dynamics 365 Customer Engage Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-properties.md
description: Configure Dynamics 365 apps on Dataverse and Power Apps offer prope
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-supplemental-content.md
description: Set up DDynamics 365 apps on Dataverse and Power Apps offer supplem
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-technical-configuration.md
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-availability.md
description: Configure Dynamics 365 Operations Apps offer availability on Micros
--++ Last updated 12/04/2021
marketplace Dynamics 365 Operations Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-listing.md
description: Configure Dynamics 365 for Operations Apps offer listing details on
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-setup.md
description: Create a Dynamics 365 Operations Apps offer on Microsoft AppSource
--++ Last updated 07/20/2022
marketplace Dynamics 365 Operations Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-properties.md
description: Configure Dynamics 365 Operations Apps offer properties on Microsof
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-supplemental-content.md
description: Set up Dynamics 365 Operations Apps offer supplemental content on M
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-technical-configuration.md
description: Set up Dynamics 365 Operations Apps offer technical configuration o
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-validation.md
description: Functionally validate a Dynamics 365 Operations Apps offer in Micro
--++ Last updated 12/03/2021
marketplace Dynamics 365 Review Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-review-publish.md
description: Review and publish a Dynamics 365 offer to Microsoft AppSource (Azu
--++ Last updated 08/01/2022
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-dynamics-365.md
description: Plan Dynamics 365 offers for Microsoft AppSource
--++ Last updated 06/29/2022
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
Azure Migrate: Server Migration provides agentless replication options for the m
The agentless replication option works by using mechanisms provided by the virtualization provider (VMware, Hyper-V). In the case of VMware virtual machines, the agentless replication mechanism uses VMware snapshots and VMware changed block tracking technology to replicate data from virtual machine disks. This mechanism is similar to the one used by many backup products. In the case of Hyper-V virtual machines, the agentless replication mechanism uses VM snapshots and the change tracking capability of the Hyper-V replica to replicate data from virtual machine disks. When replication is configured for a virtual machine, it first goes through an initial replication phase. During initial replication, a VM snapshot is taken, and a full copy of data from the snapshot disks are replicated to managed disks in your subscription. After initial replication for the VM is complete, the replication process transitions to an incremental replication (delta replication) phase. In the incremental replication phase, data changes that have occurred since the last completed replication cycle are periodically replicated and applied to the replica managed disks, thus keeping replication in sync with changes happening on the VM. In the case of VMware virtual machines, VMware changed block tracking technology is used to keep track of changes between replication cycles. At the start of the replication cycle, a VM snapshot is taken and changed block tracking is used to get the changes between the current snapshot and the last successfully replicated snapshot. That way only data that has changed since the last completed replication cycle needs to be replicated to keep replication for the VM in sync. At the end of each replication cycle, the snapshot is released, and snapshot consolidation is performed for the virtual machine. Similarly, in the case of Hyper-V virtual machines, the Hyper-V replica change tracking engine is used to keep track of changes between consecutive replication cycles.
-When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premise virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
+When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premises virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
To get started, refer the [VMware agentless migration](./tutorial-migrate-vmware.md) and [Hyper-V agentless migration](./tutorial-migrate-hyper-v.md) tutorials.
The agentless replication option works by using mechanisms provided by the virtu
When replication is configured for a virtual machine, it first goes through an initial replication phase. During initial replication, a VM snapshot is taken, and a full copy of data from the snapshot disks are replicated to managed disks in your subscription. After initial replication for the VM is complete, the replication process transitions to an incremental replication (delta replication) phase. In the incremental replication phase, data changes that have occurred since the last completed replication cycle are periodically replicated and applied to the replica managed disks, thus keeping replication in sync with changes happening on the VM. In the case of VMware virtual machines, VMware changed block tracking technology is used to keep track of changes between replication cycles. At the start of the replication cycle, a VM snapshot is taken and changed block tracking is used to get the changes between the current snapshot and the last successfully replicated snapshot. That way only data that has changed since the last completed replication cycle needs to be replicated to keep replication for the VM in sync. At the end of each replication cycle, the snapshot is released, and snapshot consolidation is performed for the virtual machine. Similarly, in the case of Hyper-V virtual machines, the Hyper-V replica change tracking engine is used to keep track of changes between consecutive replication cycles.
-When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premise virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
+When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premises virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
To get started, refer to the [Hyper-V agentless migration](./tutorial-migrate-hyper-v.md) tutorial.
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-server-portal.md
The time required to complete a restart depends on the MySQL recovery process. T
To complete this how-to guide, you need: - An [Azure Database for MySQL Flexible server](quickstart-create-server-portal.md)
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write privileges on the server.
+ ## Perform server restart The following steps restart the MySQL server:
mysql How To Restart Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-stop-start-server-cli.md
az mysql flexible-server start
## Restart a server To restart a server, run the ```az mysql flexible-server restart``` command. If you are using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write privileges on the server (see the example role definition below).
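As a concrete illustration of the note above, the following sketch builds a custom role definition that grants read and write privileges on MySQL flexible servers. The role name, subscription ID, and action strings are assumptions for illustration only; verify the action strings against the Azure resource provider operations reference before using them.

```python
# A minimal sketch of a custom role with write privileges on MySQL flexible servers.
import json

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

custom_role = {
    "Name": "MySQL Flexible Server Operator (example)",   # hypothetical role name
    "IsCustom": True,
    "Description": "Can read and write (and therefore restart) MySQL flexible servers.",
    "Actions": [
        "Microsoft.DBforMySQL/flexibleServers/read",
        "Microsoft.DBforMySQL/flexibleServers/write",      # write privilege called out in the note
    ],
    "NotActions": [],
    "AssignableScopes": [f"/subscriptions/{subscription_id}"],
}

# Save the definition so it can be passed to:
#   az role definition create --role-definition @mysql-operator-role.json
with open("mysql-operator-role.json", "w") as f:
    json.dump(custom_role, f, indent=2)
```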
+ **Usage:** ```azurecli az mysql flexible-server restart [--name]
mysql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-cli.md
To complete this how-to guide:
- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write privileges on the server.
+ ## Restart the server Restart the server with the following command:
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-portal.md
The time required to complete a restart depends on the MySQL recovery process. T
To complete this how-to guide, you need: - An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write privileges on the server.
++ ## Perform server restart The following steps restart the MySQL server:
mysql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-powershell.md
To complete this how-to guide, you need:
> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az > PowerShell module releases and available natively from within Azure Cloud Shell.
+>[!Note]
+>If the user restarting the server is assigned a [custom role](../../role-based-access-control/custom-roles.md), the user must have write privileges on the server.
+ If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
Restart the server with the following command:
Restart-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup ``` + ## Next steps > [!div class="nextstepaction"]
payment-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/overview.md
Azure Payment HSM is a highly specialized service. Therefore, we recommend that you
Momentum is building as financial institutions move some or all of their payment applications to the cloud. This entails a migration from the legacy on-premises (on-prem) applications and HSMs to a cloud-based infrastructure that isn't generally under their direct control. Often it means a subscription service rather than perpetual ownership of physical equipment and software. Corporate initiatives for efficiency and a scaled-down physical presence are the drivers for this. Conversely, with cloud-native organizations, the adoption of cloud-first without any on-premises presence is their fundamental business model. Whatever the reason, end users of a cloud-based payment infrastructure expect reduced IT complexity, streamlined security compliance, and flexibility to scale their solution seamlessly as their business grows.
-The cloud offers significant benefits, but challenges when migrating a legacy on-premise payment application (involving payment HSMs) to the cloud must be addressed. Some of these are:
+The cloud offers significant benefits, but challenges when migrating a legacy on-premises payment application (involving payment HSMs) to the cloud must be addressed. Some of these are:
- Shared responsibility and trust – what potential loss of control in some areas is acceptable? - Latency – how can an efficient, high-performance link between the application and HSM be achieved?
End users of the service can leverage Microsoft security and compliance investme
### Customer-managed HSM in Azure
-The Azure Payment HSM is a part of a subscription service that offers single-tenant HSMs for the service customer to have complete administrative control and exclusive access to the HSM. The customer could be a payment service provider acting on behalf of multiple financial institutions or a financial institution that wishes to directly access the Azure Payment HSM service. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released to ensure complete privacy and security is maintained. The customer is responsible for ensuring sufficient HSM subscriptions are active to meet their requirements for backup, disaster recovery, and resilience to achieve the same performance available on their on-premise HSMs.
+The Azure Payment HSM is a part of a subscription service that offers single-tenant HSMs for the service customer to have complete administrative control and exclusive access to the HSM. The customer could be a payment service provider acting on behalf of multiple financial institutions or a financial institution that wishes to directly access the Azure Payment HSM service. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released to ensure complete privacy and security is maintained. The customer is responsible for ensuring sufficient HSM subscriptions are active to meet their requirements for backup, disaster recovery, and resilience to achieve the same performance available on their on-premises HSMs.
### Accelerate digital transformation and innovation in cloud
-For existing Thales payShield customers wishing to add a cloud option, the Azure Payment HSM solution offers native access to a payment HSM in Azure for "lift and shift" while still experiencing the low latency they're accustomed to via their on-premise payShield HSMs. The solution also offers high-performance transactions for mission-critical payment applications. Consequently, customers can continue their digital transformation strategy by leveraging technology innovation in the cloud. Existing Thales payShield customers can utilize their existing remote management solutions (payShield Manager and payShield TMD together with associated smart card readers and smart cards as appropriate) to work with the Azure Payment HSM service. Customers new to payShield can source the hardware accessories from Thales or one of its partners before deploying their HSM as part of the subscription service.
+For existing Thales payShield customers wishing to add a cloud option, the Azure Payment HSM solution offers native access to a payment HSM in Azure for "lift and shift" while still experiencing the low latency they're accustomed to via their on-premises payShield HSMs. The solution also offers high-performance transactions for mission-critical payment applications. Consequently, customers can continue their digital transformation strategy by leveraging technology innovation in the cloud. Existing Thales payShield customers can utilize their existing remote management solutions (payShield Manager and payShield TMD together with associated smart card readers and smart cards as appropriate) to work with the Azure Payment HSM service. Customers new to payShield can source the hardware accessories from Thales or one of its partners before deploying their HSM as part of the subscription service.
## Typical use cases
Sensitive data protection
## Suitable for both existing and new payment HSM users
-The solution provides clear benefits for both Payment HSM users with a legacy on-premise HSM footprint and those new payment ecosystem entrants with no legacy infrastructure to support and who may choose a cloud-native approach from the outset.
+The solution provides clear benefits for both Payment HSM users with a legacy on-premises HSM footprint and those new payment ecosystem entrants with no legacy infrastructure to support and who may choose a cloud-native approach from the outset.
-Benefits for existing on-premise HSM users
+Benefits for existing on-premises HSM users
- Requires no modifications to payment applications or HSM software to migrate existing applications to the Azure solution - Enables more flexibility and efficiency in HSM utilization - Simplifies HSM sharing between multiple teams, geographically dispersed
Benefits for existing on-premise HSM users
- Improves cash flow for new projects Benefits for new payment participants-- Avoids introduction of on-premise HSM infrastructure
+- Avoids introduction of on-premises HSM infrastructure
- Lowers upfront investment via the Azure subscription model - Offers access to latest certified hardware and software on-demand
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
In this step, you'll create the mobile network site resource representing the ph
1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section. Note the following:
- - Use the same value for both the **N2 subnet** and **N3 subnet** fields (if this site will support 5G user equipment (UEs)).
- - Use the same value for both the **N2 gateway** and **N3 gateway** fields (if this site will support 5G UEs).
- - Use the same value for both the **S1-MME subnet** and **S1-U subnet** fields (if this site will support 4G UEs).
- - Use the same value for both the **S1-MME gateway** and **S1-U gateway** fields (if this site will support 4G UEs).
-
-1. In the **Attached data networks** section, select **Add data network**. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. If you decided not to configure a DNS server, untick the **Specify DNS addresses for UEs?** checkbox.
+ - If this site will support 5G user equipment (UEs):
+ - **N2 interface name** and **N3 interface name** must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro device.
+ - **N2 subnet** must match **N3 subnet**.
+ - **N2 gateway** must match **N3 gateway**.
+ - If this site will support 4G UEs:
+ - **S1-MME interface name** and **S1-U interface name** must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro device.
+ - **S1-MME subnet** must match **S1-U subnet**.
+ - **S1-MME gateway** must match **S1-U gateway**.
+
+1. In the **Attached data networks** section, select **Add data network**. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
+ - **N6 interface name** (if this site will support 5G UEs) or **SGi interface name** (if this site will support 4G UEs) must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device.
+ - If you decided not to configure a DNS server, untick the **Specify DNS addresses for UEs?** checkbox.
:::image type="content" source="media/create-a-site/create-site-add-data-network.png" alt-text="Screenshot of the Azure portal showing the Add data network screen.":::
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
The Microsoft Purview governance portal uses a set of predefined roles to contro
:::image type="content" source="media/catalog-permissions/catalog-permission-role.svg" alt-text="Chart showing Microsoft Purview governance portal roles" lightbox="media/catalog-permissions/catalog-permission-role.svg"::: >[!NOTE]
-> **\*Data source administrator permissions on Policies** - Data source administrators are also able to publish data policies.
+> **\*Data curator** - Data curators can read insights only if they are assigned data curator at the root collection level.
+> **\*\*Data source administrator permissions on Policies** - Data source administrators are also able to publish data policies.
## Understand how to use the Microsoft Purview governance portal's roles and collections
purview How To Deploy Profisee Purview Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-deploy-profisee-purview-integration.md
The reference architecture shows how both Microsoft Purview and Profisee MDM wor
## Microsoft Purview - Profisee integration deployment on Azure Kubernetes Service (AKS)
-1. Get the license file from Profisee by raising a support ticket on [https://support.profisee.com/](https://support.profisee.com/). Only pre-requisite for this step is your need to pre-determine the DNS resolved URL your Profisee setup on Azure. In other words, keep the DNS HOST NAME of the load balancer used in the deployment. It will be something like "[profisee_name].[region].cloudapp.azure.com".
+1. Get the license file from Profisee by raising a support ticket on [https://support.profisee.com/](https://support.profisee.com/). The only pre-requisite for this step is that you need to pre-determine the DNS-resolved URL of your Profisee setup on Azure. In other words, keep the DNS HOST NAME of the load balancer used in the deployment. It will be something like "[profisee_name].[region].cloudapp.azure.com".
For example, DNSHOSTNAME="purviewprofisee.southcentralus.cloudapp.azure.com". Supply this DNSHOSTNAME to Profisee support when you raise the support ticket and Profisee will revert with the license file. You'll need to supply this file during the next configuration steps below.
-1. [Create a user-assigned managed identity in Azure](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. This managed identity must have the following permissions when running a deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity:
+1. [Create a user-assigned managed identity in Azure](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity:
- Contributor role to the resource group where AKS will be deployed. It can either be assigned directly to the resource group **OR** at the subscription level and down. - DNS Zone Contributor role to the particular DNS zone where the entry will be created **OR** Contributor role to the DNS Zone resource group. This DNS role is needed only if updating DNS hosted in Azure. - Application Administrator role in Azure Active Directory so the required permissions that are needed for the application registration can be assigned.
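The deployment identity and role assignments described in the list above could be prepared with the Azure Python SDKs, as in the hedged sketch below. The resource group, identity name, region, and subscription ID are placeholders; the Contributor role GUID is the well-known built-in ID, and only the Contributor assignment is shown (DNS Zone Contributor and Application Administrator would be granted separately). Parameter shapes can differ slightly between SDK releases, so treat this as a starting point rather than the official deployment script.

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.msi import ManagedServiceIdentityClient
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
resource_group = "profisee-rg"                              # placeholder
credential = DefaultAzureCredential()

# 1. Create the user-assigned managed identity used only for the deployment.
msi_client = ManagedServiceIdentityClient(credential, subscription_id)
identity = msi_client.user_assigned_identities.create_or_update(
    resource_group, "profisee-deployment-identity", {"location": "eastus"}
)

# 2. Assign Contributor on the resource group where AKS will be deployed.
auth_client = AuthorizationManagementClient(credential, subscription_id)
scope = f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
contributor_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"  # built-in Contributor role
)
auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # a role assignment name must be a new GUID
    RoleAssignmentCreateParameters(
        role_definition_id=contributor_id,
        principal_id=identity.principal_id,
        principal_type="ServicePrincipal",  # managed identities are assigned as service principals
    ),
)
```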
An output response that looks similar as the above confirms successful installat
## Next steps Through this guide, we learned of the importance of MDM in driving and supporting Data Governance in the context of the Azure data estate, and how to set up and deploy a Microsoft Purview-Profisee integration.
-For more usage details on Profisee MDM, register for scheduled trainings, live product demonstration and Q&A on [Profisee Academy Tutorials and Demos](https://profisee.com/demo/)!
+For more usage details on Profisee MDM, register for scheduled trainings, live product demonstration and Q&A on [Profisee Academy Tutorials and Demos](https://profisee.com/demo/)!
purview How To Monitor Scan Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-scan-runs.md
Previously updated : 08/03/2022 Last updated : 08/19/2022 # Monitor scan runs in Microsoft Purview In Microsoft Purview, you can register and scan various types of data sources, and you can view the scan status over time. This article outlines how to monitor and get a bird's eye view of your scan runs in Microsoft Purview.
-> [!IMPORTANT]
-> The monitoring experience is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Monitor scan runs 1. Go to your Microsoft Purview account -> open **Microsoft Purview governance portal** -> **Data map** -> **Monitoring**. You need the **Data source admin** role on at least one collection to access this page, and you'll see the scan runs that belong to the collections on which you have data source admin privileges. 1. The high-level KPIs show total scan runs within a period. The time period defaults to the last 30 days; you can also select the last seven days. Based on the time filter selected, you can see the distribution of successful, failed, and canceled scan runs by week or by day in the graph.
- :::image type="content" source="./media/how-to-monitor-scan-runs/monitor-scan-runs.png" alt-text="View scan runs over time":::
+ :::image type="content" source="./media/how-to-monitor-scan-runs/monitor-scan-runs.png" alt-text="View scan runs over time" lightbox="./media/how-to-monitor-scan-runs/monitor-scan-runs.png":::
1. At the bottom of the graph, there is a **View more** link for you to explore further. The link opens the **Scan status** page. Here you can see a scan name and the number of times it has succeeded, failed, or been canceled in the time period. You can also filter the list by source types.
purview Tutorial Atlas 2 2 Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-atlas-2-2-apis.md
In this tutorial, learn to programmatically interact with new Atlas 2.2 APIs wit
Business metadata is a template that contains custom attributes (key values). You can create these attributes globally and then apply them across multiple typedefs.
+### Atlas endpoint
+
+For all the requests, you'll need the Atlas endpoint for your Microsoft Purview account.
+
+1. Find your Microsoft Purview account in the [Azure portal](https://portal.azure.com)
+1. Select the **Properties** page on the left side menu
+1. Copy the **Atlas endpoint** value
++ ### Create business metadata with attributes You can send a `POST` request to the following endpoint:
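As a hedged illustration of putting the Atlas endpoint to work, the sketch below acquires a token with `azure-identity` and posts a business metadata typedef to the Atlas 2.2 `types/typedefs` endpoint using `requests`. The account name, template name, attribute name, and option values are placeholders, and the exact payload shape should be confirmed against the Microsoft Purview REST reference.

```python
import requests
from azure.identity import DefaultAzureCredential

atlas_endpoint = "https://<your-account>.purview.azure.com/catalog"  # value copied from the Properties page

# Token scope for the Purview data plane.
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

business_metadata = {
    "businessMetadataDefs": [
        {
            "category": "BUSINESS_METADATA",
            "name": "ExampleBusinessMetadata",      # hypothetical template name
            "attributeDefs": [
                {
                    "name": "dataOwner",             # hypothetical custom attribute (key value)
                    "typeName": "string",
                    "isOptional": True,
                    "options": {"applicableEntityTypes": '["DataSet"]', "maxStrLength": "100"},
                }
            ],
        }
    ]
}

response = requests.post(
    f"{atlas_endpoint}/api/atlas/v2/types/typedefs",
    headers={"Authorization": f"Bearer {token}"},
    json=business_metadata,
)
response.raise_for_status()
print(response.json())
```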
remote-rendering Late Stage Reprojection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/late-stage-reprojection.md
Static models are expected to visually maintain their position when you move around them. If they appear to be unstable, this behavior may hint at LSR issues. Mind that extra dynamic transformations, like animations or explosion views, might mask this behavior.
-You may choose between two different LSR modes, namely **Planar LSR** or **Depth LSR**. Both LSR modes improve hologram stability, although they have their distinct limitations. Start by trying Depth LSR, as it is arguably giving better results in most cases.
+You may choose between two different LSR modes, namely **Planar LSR** or **Depth LSR**. Both LSR modes improve hologram stability, although they have their distinct limitations. Start by trying Depth LSR, as it's arguably giving better results in most cases.
## How to set the LSR mode
To mitigate reprojection instability for transparent objects, you can force dept
## Planar LSR
-Planar LSR does not have per-pixel depth information, as Depth LSR does. Instead it reprojects all content based on a plane that you must provide each frame.
+Planar LSR doesn't have per-pixel depth information, as Depth LSR does. Instead it reprojects all content based on a plane that you must provide each frame.
Planar LSR best reprojects objects that lie close to the supplied plane. The further away an object is, the more unstable it will look. While Depth LSR is better at reprojecting objects at different depths, Planar LSR may work better for content that aligns well with a plane.
The general problem scope with hybrid rendering can be stated like this: Remote
![Diagram that illustrates remote and local pose in relation to target viewport.](./media/reprojection-remote-local.png)
-ARR provides two reprojection modes that work orthogonally to the LSR mode discussed above. These modes are referred to as **:::no-loc text="Remote pose mode":::** and **:::no-loc text="Local pose mode":::**. Unlike the LSR mode, the pose modes define how remote and local content is combined. The choice of the mode trades visual quality of local content for runtime performance, so applications should carefully consider which option is appropriate. See considerations below.
+Depending on the `GraphicsBinding` used, ARR provides up to three reprojection modes that work orthogonally to the LSR mode discussed above. These modes are referred to as **:::no-loc text="Remote pose mode":::**, **:::no-loc text="Local pose mode":::**, and **:::no-loc text="Passthrough pose mode":::**. Unlike the LSR mode, the pose modes define how remote and local content is combined. The choice of the mode trades visual quality of local content for runtime performance, so applications should carefully consider which option is appropriate. See considerations below.
### :::no-loc text="Remote pose mode":::
Accordingly, the illustration looks like this:
![Reprojection steps in local pose mode.](./media/reprojection-pose-mode-local.png)
+### :::no-loc text="Passthrough pose mode":::
+
+This pose mode behaves essentially the same as **:::no-loc text="Remote pose mode":::**, meaning the local and remote content are combined in remote space. However, the content won't be reprojected after combination but will remain in remote pose space. The main advantage of this mode is that the resulting image won't be affected by reprojection artifacts.
+
+Conceptually, this mode can be compared to conventional cloud-streaming applications. Due to the high latency it incurs, it isn't suitable for head-mounted scenarios, but is a viable alternative for Desktop and other flat-screen applications where higher image quality is desired. It's therefore only available on `GraphicsBindingSimD3D11` for the time being.
+ ### Performance and quality considerations
-The choice of the pose mode has visual quality and performance implications. The additional runtime cost on the client side for doing the extra reprojection in :::no-loc text="Local pose mode"::: on a HoloLens 2 device amounts to about 1 millisecond per frame of GPU time. This extra cost needs to be put into consideration if the client application is already close to the frame budget of 16 milliseconds. On the other hand, there are types of applications with either no local content or local content that is not prone to distortion artifacts. In those cases :::no-loc text="Local pose mode"::: does not gain any visual benefit because the quality of the remote content reprojection is unaffected.
+The choice of the pose mode has visual quality and performance implications. The additional runtime cost on the client side for doing the extra reprojection in :::no-loc text="Local pose mode"::: on a HoloLens 2 device amounts to about 1 millisecond per frame of GPU time. This extra cost needs to be taken into consideration if the client application is already close to the frame budget of 16 milliseconds. On the other hand, there are types of applications with either no local content or local content that is not prone to distortion artifacts. In those cases, :::no-loc text="Local pose mode"::: doesn't gain any visual benefit because the quality of the remote content reprojection is unaffected.
-The general advice would thus be to test the modes on a per use case basis and see whether the gain in visual quality justifies the extra performance overhead. It is also possible to toggle the mode dynamically, for instance enable local mode only when important UIs are shown.
+The general advice would thus be to test the modes on a per use case basis and see whether the gain in visual quality justifies the extra performance overhead. It's also possible to toggle the mode dynamically, for instance enable local mode only when important UIs are shown.
### How to change the :::no-loc text="Pose mode"::: at runtime
ApiHandle<RenderingSession> session = ...;
session->GetGraphicsBinding()->SetPoseMode(PoseMode::Local); // set local pose mode ```
-In general, the mode can be changed anytime the graphics binding object is available. There is an important distinction for `GraphicsBindingSimD3D11`: the pose mode can only be changed to `PoseMode.Remote`, if it has been initialized with proxy textures. If this isn't the case, `PoseMode.Local` is forced until the graphics binding is reinitialized. See the two overloads of `GraphicsBindingSimD3d11.InitSimulation`, which take either native pointers to [ID3D11Texture2D](/windows/win32/api/d3d11/nn-d3d11-id3d11texture2d) objects (proxy path) or the `width` and `height` of the desired user viewport (non-proxy path).
+In general, the mode can be changed anytime the graphics binding object is available. There's an important distinction for `GraphicsBindingSimD3D11`: the pose mode can only be changed to `PoseMode.Remote`, if it has been initialized with proxy textures. If this isn't the case, the pose mode can only be toggled between `PoseMode.Local` and `PoseMode.Passthrough` until the graphics binding is reinitialized. See the two overloads of `GraphicsBindingSimD3d11.InitSimulation`, which take either native pointers to [ID3D11Texture2D](/windows/win32/api/d3d11/nn-d3d11-id3d11texture2d) objects (proxy path) or the `width` and `height` of the desired user viewport (non-proxy path).
### Desktop Unity runtime considerations
public static void InitRemoteManager(Camera camera)
} ```
-If `PoseMode.Remote` is specified, the graphics binding will be initialized with offscreen proxy textures and all rendering will be redirected from the Unity scene's main camera to a proxy camera. This code path is only recommended for usage if runtime pose mode changes are required.
+If `PoseMode.Remote` is specified, the graphics binding will be initialized with offscreen proxy textures and all rendering will be redirected from the Unity scene's main camera to a proxy camera. This code path is only recommended for usage if runtime pose mode changes to `PoseMode.Remote` are required. If no pose mode is specified, the ARR Unity runtime will select an appropriate default depending on the current platform.
> [!WARNING] > The proxy camera redirection might be incompatible with other Unity extensions, which expect scene rendering to take place with the main camera. The proxy camera can be retrieved via the `RemoteManagerUnity.ProxyCamera` property if it needs to be queried or registered elsewhere.
-If `PoseMode.Local` is used instead, the graphics binding will not be initialized with offscreen proxy textures and a fast path using the Unity scene's main camera to render will be used. This means that if the respective use case requires pose mode changes at runtime, `PoseMode.Remote` should be specified on `RemoteManagerUnity` initialization. It is strongly recommended to only use local pose mode and thus the non-proxy rendering path.
+If `PoseMode.Local` or `PoseMode.Passthrough` is used instead, the graphics binding won't be initialized with offscreen proxy textures and a fast path using the Unity scene's main camera to render will be used. If the respective use case requires remote pose mode at runtime, `PoseMode.Remote` should be specified on `RemoteManagerUnity` initialization. Directly rendering with Unity's main camera is more efficient and can prevent issues with other Unity extensions. Therefore, it's recommended to use the non-proxy rendering path.
## Next steps
role-based-access-control Custom Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles.md
Previously updated : 07/28/2022 Last updated : 08/19/2022
If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group (in preview only), subscription, and resource group scopes.
-Custom roles can be shared between subscriptions that trust the same Azure AD directory. There is a limit of **5,000** custom roles per directory. (For Azure Germany and Azure China 21Vianet, the limit is 2,000 custom roles.) Custom roles can be created using the Azure portal, Azure PowerShell, Azure CLI, or the REST API.
+Custom roles can be shared between subscriptions that trust the same Azure AD tenant. There is a limit of **5,000** custom roles per tenant. (For Azure Germany and Azure China 21Vianet, the limit is 2,000 custom roles.) Custom roles can be created using the Azure portal, Azure PowerShell, Azure CLI, or the REST API.
## Steps to create a custom role
The following table describes what the custom role properties mean.
| Property | Required | Type | Description | | | | | |
-| `Name`</br>`roleName` | Yes | String | The display name of the custom role. While a role definition is a management group or subscription-level resource, a role definition can be used in multiple subscriptions that share the same Azure AD directory. This display name must be unique at the scope of the Azure AD directory. Can include letters, numbers, spaces, and special characters. Maximum number of characters is 512. |
+| `Name`</br>`roleName` | Yes | String | The display name of the custom role. While a role definition is a management group or subscription-level resource, a role definition can be used in multiple subscriptions that share the same Azure AD tenant. This display name must be unique at the scope of the Azure AD tenant. Can include letters, numbers, spaces, and special characters. Maximum number of characters is 512. |
| `Id`</br>`name` | Yes | String | The unique ID of the custom role. For Azure PowerShell and Azure CLI, this ID is automatically generated when you create a new role. | | `IsCustom`</br>`roleType` | Yes | String | Indicates whether this is a custom role. Set to `true` or `CustomRole` for custom roles. Set to `false` or `BuiltInRole` for built-in roles. | | `Description`</br>`description` | Yes | String | The description of the custom role. Can include letters, numbers, spaces, and special characters. Maximum number of characters is 2048. |
The following table describes what the custom role properties mean.
| `NotActions`</br>`notActions` | No | String[] | An array of strings that specifies the control plane actions that are excluded from the allowed `Actions`. For more information, see [NotActions](role-definitions.md#notactions). | | `DataActions`</br>`dataActions` | No | String[] | An array of strings that specifies the data plane actions that the role allows to be performed to your data within that object. If you create a custom role with `DataActions`, that role cannot be assigned at the management group scope. For more information, see [DataActions](role-definitions.md#dataactions). | | `NotDataActions`</br>`notDataActions` | No | String[] | An array of strings that specifies the data plane actions that are excluded from the allowed `DataActions`. For more information, see [NotDataActions](role-definitions.md#notdataactions). |
-| `AssignableScopes`</br>`assignableScopes` | Yes | String[] | An array of strings that specifies the scopes that the custom role is available for assignment. Maximum number of `AssignableScopes` is 2,000. You can define only one management group in `AssignableScopes` of a custom role. Adding a management group to `AssignableScopes` is currently in preview. For more information, see [AssignableScopes](role-definitions.md#assignablescopes). |
+| `AssignableScopes`</br>`assignableScopes` | Yes | String[] | An array of strings that specifies the scopes that the custom role is available for assignment. Maximum number of `AssignableScopes` is 2,000. For more information, see [AssignableScopes](role-definitions.md#assignablescopes). |
Permission strings are case-insensitive. When you create your custom roles, the convention is to match the case that you see for permissions in [Azure resource provider operations](resource-provider-operations.md).
Before you can delete a custom role, you must remove any role assignments that u
Here are steps to help find the role assignments before deleting a custom role: - List the [custom role definition](role-definitions-list.md).-- In the [assignable scopes](role-definitions.md#assignablescopes) section, get the management groups, subscriptions, and resource groups.-- Iterate over the assignable scopes and [list the role assignments](role-assignments-list-portal.md).
+- In the [AssignableScopes](role-definitions.md#assignablescopes) section, get the management groups, subscriptions, and resource groups.
+- Iterate over the `AssignableScopes` and [list the role assignments](role-assignments-list-portal.md).
- [Remove the role assignments](role-assignments-remove.md) that use the custom role. - [Delete the custom role](custom-roles-portal.md#delete-a-custom-role).
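The clean-up steps above could be scripted with the `azure-mgmt-authorization` package, as in this hedged sketch: look up the custom role by display name, walk its `AssignableScopes`, and list any role assignments that still reference it. The subscription ID and role name are placeholders, and the delete calls are left commented out so nothing is removed by accident.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
role_display_name = "My Custom Role"                        # placeholder
scope = f"/subscriptions/{subscription_id}"

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# 1. Find the custom role definition by display name (assumes it exists at this scope).
roles = client.role_definitions.list(scope, filter=f"roleName eq '{role_display_name}'")
role = next(iter(roles))

# 2. Iterate over its assignable scopes and collect assignments that reference it.
blocking_assignments = []
for assignable_scope in role.assignable_scopes:
    for assignment in client.role_assignments.list_for_scope(assignable_scope):
        if assignment.role_definition_id.lower().endswith(role.name.lower()):
            blocking_assignments.append(assignment)

# 3. Remove those assignments, then delete the role definition.
for assignment in blocking_assignments:
    print(f"Remove assignment {assignment.name} at scope {assignment.scope} first")
    # client.role_assignments.delete(assignment.scope, assignment.name)
# client.role_definitions.delete(scope, role.name)
```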
Here are steps to help find the role assignments before deleting a custom role:
The following list describes the limits for custom roles. -- Each directory can have up to **5000** custom roles.-- Azure Germany and Azure China 21Vianet can have up to 2000 custom roles for each directory.
+- Each tenant can have up to **5000** custom roles.
+- Azure Germany and Azure China 21Vianet can have up to 2000 custom roles for each tenant.
- You cannot set `AssignableScopes` to the root scope (`"/"`). - You cannot use wildcards (`*`) in `AssignableScopes`. This wildcard restriction helps ensure a user can't potentially obtain access to a scope by updating the role definition. - You can only define one management group in `AssignableScopes` of a custom role. Adding a management group to `AssignableScopes` is currently in preview. - You can have only one wildcard in an action string. - Custom roles with `DataActions` cannot be assigned at the management group scope.-- Azure Resource Manager doesn't validate the management group's existence in the role definition's assignable scope.
+- Azure Resource Manager doesn't validate the management group's existence in the role definition's `AssignableScopes`.
For more information about custom roles and management groups, see [What are Azure management groups?](../governance/management-groups/overview.md#azure-custom-role-definition-and-assignment).
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/write | Create or update an voice endpoint. | > | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/delete | Delete the specified voice endpoint. | > | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/read | Get one or more voice endpoints |
-> | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/manifest/read | Returns an endpoint manifest which can be used in an on-premise container. |
+> | Microsoft.CognitiveServices/accounts/CustomVoice/endpoints/manifest/read | Returns an endpoint manifest which can be used in an on-premises container. |
> | Microsoft.CognitiveServices/accounts/CustomVoice/evaluations/delete | Deletes the specified evaluation. | > | Microsoft.CognitiveServices/accounts/CustomVoice/evaluations/read | Gets details of one or more evaluations | > | Microsoft.CognitiveServices/accounts/CustomVoice/features/read | Gets a list of allowed features. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/write | Create or update a model. | > | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/delete | Delete a model | > | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/read | Get one or more models |
-> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/base/manifest/read | Returns an manifest for this base model which can be used in an on-premise container. |
-> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/manifest/read | Returns an manifest for this model which can be used in an on-premise container. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/base/manifest/read | Returns an manifest for this base model which can be used in an on-premises container. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/models/manifest/read | Returns an manifest for this model which can be used in an on-premises container. |
> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/projects/write | Create or update a project | > | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/projects/delete | Delete a project | > | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/projects/read | Get one or more projects |
Azure service: [IoT security](../iot-fundamentals/iot-security-architecture.md)
> | Microsoft.IoTSecurity/locations/sites/sensors/triggerTiPackageUpdate/action | Triggers threat intelligence package update | > | Microsoft.IoTSecurity/locations/sites/sensors/downloadResetPassword/action | Downloads reset password file for IoT Sensors | > | Microsoft.IoTSecurity/locations/sites/sensors/updateSoftwareVersion/action | Trigger sensor update |
-> | Microsoft.IoTSecurity/onPremiseSensors/read | Gets on-premise IoT Sensors |
-> | Microsoft.IoTSecurity/onPremiseSensors/write | Creates or updates on-premise IoT Sensors |
-> | Microsoft.IoTSecurity/onPremiseSensors/delete | Deletes on-premise IoT Sensors |
-> | Microsoft.IoTSecurity/onPremiseSensors/downloadActivation/action | Gets on-premise IoT Sensor Activation File |
-> | Microsoft.IoTSecurity/onPremiseSensors/downloadResetPassword/action | Downloads file for reset password of the on-premise IoT Sensor |
+> | Microsoft.IoTSecurity/onPremiseSensors/read | Gets on-premises IoT Sensors |
+> | Microsoft.IoTSecurity/onPremiseSensors/write | Creates or updates on-premises IoT Sensors |
+> | Microsoft.IoTSecurity/onPremiseSensors/delete | Deletes on-premises IoT Sensors |
+> | Microsoft.IoTSecurity/onPremiseSensors/downloadActivation/action | Gets on-premises IoT Sensor Activation File |
+> | Microsoft.IoTSecurity/onPremiseSensors/downloadResetPassword/action | Downloads fi