Updates from: 07/16/2021 03:08:50
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/page-layout.md
Page layout packages are periodically updated to include fixes and improvements
> Azure Active Directory B2C releases improvements and fixes with each new page layout version. We highly recommend you keep your page layout versions up-to-date so that all page elements reflect the latest security enhancements, accessibility standards, and your feedback.
-## jQuery version
-
-Azure AD B2C page layout uses the following version of the [jQuery library](https://jquery.com/):
-
-|From page layout version |jQuery version |
-|---|---|
-|2.1.4 | 3.5.1 |
-|1.2.0 | 3.4.1 |
-|1.1.0 | 1.10.2 |
+## jQuery and Handlebars versions
+
+Azure AD B2C page layout uses the following versions of the [jQuery library](https://jquery.com/) and the [Handlebars templates](https://handlebarsjs.com/):
+
+|Element |Page layout version range |jQuery version |Handlebars Runtime version |Handlebars Compiler version |
+|---|---|---|---|---|
+|multifactor |>= 1.2.4 | 3.5.1 | 4.7.6 |4.7.7 |
+| |< 1.2.4 | 3.4.1 |4.0.12 |2.0.1 |
+| |< 1.2.0 | 1.12.4 |
+|selfasserted |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
+| |< 2.1.4 | 3.4.1 |4.0.12 |2.0.1 |
+| |< 1.2.0 | 1.12.4 |
+|unifiedssp |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
+| |< 2.1.4 | 3.4.1 |4.0.12 |2.0.1 |
+| |< 1.2.0 | 1.12.4 |
+|globalexception |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
+| |< 1.2.0 | 1.12.4 |
+|providerselection |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
+| |< 1.2.0 | 1.12.4 |
+|claimsconsent |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
+| |< 1.2.0 | 1.12.4 |
+|unifiedssd |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
+| |< 1.2.0 | 1.12.4 |
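The version ranges in the table above can be resolved programmatically. A minimal sketch (hypothetical helper name; the jQuery values are taken from the selfasserted rows of the table, and version strings are assumed to be `major.minor.patch`):

```python
def jquery_version_for_selfasserted(page_layout_version: str) -> str:
    """Map a selfasserted page layout version to its bundled jQuery version,
    per the version-range table (assumes a major.minor.patch version string)."""
    version = tuple(int(part) for part in page_layout_version.split("."))
    if version >= (2, 1, 4):
        return "3.5.1"
    if version >= (1, 2, 0):
        return "3.4.1"
    return "1.12.4"

print(jquery_version_for_selfasserted("2.1.5"))  # 3.5.1
print(jquery_version_for_selfasserted("1.2.0"))  # 3.4.1
```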
## Self-asserted page (selfasserted)
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-datawiza.md
To get started, you'll need:
- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- An [Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.
- [Docker](https://docs.docker.com/get-docker/) is required to run DAB. Your applications can run on any platform, such as virtual machines and bare metal.
To integrate your legacy on-premises app with Azure AD B2C, contact [Datawiza](h
For additional information, review the following articles:
-- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
active-directory-domain-services Deploy Azure App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/deploy-azure-app-proxy.md
With a VM ready to be used as the Azure AD Application Proxy connector, now copy
> For example, if the Azure AD domain is *contoso.com*, the global administrator should be `admin@contoso.com` or another valid alias on that domain.
* If Internet Explorer Enhanced Security Configuration is turned on for the VM where you install the connector, the registration screen might be blocked. To allow access, follow the instructions in the error message, or turn off Internet Explorer Enhanced Security during the install process.
- * If connector registration fails, see [Troubleshoot Application Proxy](/azure/active-directory/app-proxy/application-proxy-troubleshoot).
+ * If connector registration fails, see [Troubleshoot Application Proxy](../active-directory/app-proxy/application-proxy-troubleshoot.md).
1. At the end of the setup, a note is shown for environments with an outbound proxy. To configure the Azure AD Application Proxy connector to work through the outbound proxy, run the provided script, such as `C:\Program Files\Microsoft AAD App Proxy connector\ConfigureOutBoundProxy.ps1`.
1. On the Application proxy page in the Azure portal, the new connector is listed with a status of *Active*, as shown in the following example:
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
Passwords are a primary attack vector. Bad actors use social engineering, phishi
Microsoft offers the following [three passwordless authentication options](concept-authentication-passwordless.md) that integrate with Azure Active Directory (Azure AD):
-* [Microsoft Authenticator app](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-passwordless#microsoft-authenticator-app) - turns any iOS or Android phone into a strong, passwordless credential by allowing users to sign into any platform or browser.
+* [Microsoft Authenticator app](./concept-authentication-passwordless.md#microsoft-authenticator-app) - turns any iOS or Android phone into a strong, passwordless credential by allowing users to sign into any platform or browser.
-* [FIDO2-compliant security keys](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-passwordless#fido2-security-keys) - useful for users who sign in to shared machines like kiosks, in situations where use of phones is restricted, and for highly privileged identities.
+* [FIDO2-compliant security keys](./concept-authentication-passwordless.md#fido2-security-keys) - useful for users who sign in to shared machines like kiosks, in situations where use of phones is restricted, and for highly privileged identities.
-* [Windows Hello for Business](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-passwordless#windows-hello-for-business) - best for users on their dedicated Windows computers.
+* [Windows Hello for Business](./concept-authentication-passwordless.md#windows-hello-for-business) - best for users on their dedicated Windows computers.
> [!NOTE]
> To create an offline version of this plan with all links, use your browser's print to PDF functionality.
You can also manage the passwordless authentication methods using the authentica
* Manage your authentication method policies for security keys and Microsoft Authenticator app.
-For more information on what authentication methods can be managed in Microsoft Graph, see [Azure AD authentication methods API overview](https://docs.microsoft.com/graph/api/resources/authenticationmethods-overview?view=graph-rest-beta).
+For more information on what authentication methods can be managed in Microsoft Graph, see [Azure AD authentication methods API overview](/graph/api/resources/authenticationmethods-overview?view=graph-rest-beta).
### Rollback
Azure AD adds entries to the audit logs when:
* A user enables or disables their account on a security key or resets the second factor for the security key on their Win 10 machine. See event IDs: 4670 and 5382.
-**Azure AD keeps most auditing data for 30 days** and makes the data available via Azure Admin portal or API for you to download into your analysis systems. If you require longer retention, export and consume logs in a SIEM tool such as [Azure Sentinel](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory), Splunk, or Sumo Logic. We recommend longer retention for auditing, trend analysis, and other business needs as applicable.
+**Azure AD keeps most auditing data for 30 days** and makes the data available via Azure Admin portal or API for you to download into your analysis systems. If you require longer retention, export and consume logs in a SIEM tool such as [Azure Sentinel](../../sentinel/connect-azure-active-directory.md), Splunk, or Sumo Logic. We recommend longer retention for auditing, trend analysis, and other business needs as applicable.
There are two tabs in the Authentication methods activity dashboard - Registration and Usage.
Select the user row, and then select the **Authentication Details** tab to view
* [Learn how passwordless authentication works](concept-authentication-passwordless.md)
-* [Deploy other identity features](https://aka.ms/deploymentplans)
+* [Deploy other identity features](../fundamentals/active-directory-deployment-plans.md)
active-directory Howto Mfa App Passwords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-app-passwords.md
When users complete their initial registration for Azure AD Multi-Factor Authent
Users can also create app passwords after registration. For more information and detailed steps for your users, see the following resources:
* [What are app passwords in Azure AD Multi-Factor Authentication?](../user-help/multi-factor-authentication-end-user-app-passwords.md)
-* [Create app passwords from the Security info page](https://docs.microsoft.com/azure/active-directory/user-help/security-info-app-passwords)
+* [Create app passwords from the Security info page](../user-help/security-info-app-passwords.md)
## Next steps
-For more information on how to allow users to quickly register for Azure AD Multi-Factor Authentication, see [Combined security information registration overview](concept-registration-mfa-sspr-combined.md).
+For more information on how to allow users to quickly register for Azure AD Multi-Factor Authentication, see [Combined security information registration overview](concept-registration-mfa-sspr-combined.md).
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-userstates.md
For Azure AD free tenants without Conditional Access, you can [use security defa
If needed, you can instead enable each account for per-user Azure AD Multi-Factor Authentication. When users are enabled individually, they perform multi-factor authentication each time they sign in (with some exceptions, such as when they sign in from trusted IP addresses or when the _remember MFA on trusted devices_ feature is turned on).
-Changing [user states](https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-userstates#azure-ad-multi-factor-authentication-user-states) isn't recommended unless your Azure AD licenses don't include Conditional Access and you don't want to use security defaults. For more information on the different ways to enable MFA, see [Features and licenses for Azure AD Multi-Factor Authentication](concept-mfa-licensing.md).
+Changing [user states](#azure-ad-multi-factor-authentication-user-states) isn't recommended unless your Azure AD licenses don't include Conditional Access and you don't want to use security defaults. For more information on the different ways to enable MFA, see [Features and licenses for Azure AD Multi-Factor Authentication](concept-mfa-licensing.md).
> [!IMPORTANT]
>
To configure Azure AD Multi-Factor Authentication settings, see [Configure Azur
To manage user settings for Azure AD Multi-Factor Authentication, see [Manage user settings with Azure AD Multi-Factor Authentication](howto-mfa-userdevicesettings.md).
-To understand why a user was prompted or not prompted to perform MFA, see [Azure AD Multi-Factor Authentication reports](howto-mfa-reporting.md).
+To understand why a user was prompted or not prompted to perform MFA, see [Azure AD Multi-Factor Authentication reports](howto-mfa-reporting.md).
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-smart-lockout.md
When the smart lockout threshold is triggered, you will get the following messag
*Your account is temporarily locked to prevent unauthorized use. Try again later, and if you still have trouble, contact your admin.*
-When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service. In that scenario, because each Azure AD datacenter tracks lockout independently, it might take more than your defined lockout threshold number of attempts to cause a lockout. A user has (*threshold_limit * datacenter_count*) number of bad attempts if the user hits each datacenter before a lockout occurs.
+When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service. In that scenario, because each Azure AD datacenter tracks lockout independently, it might take more than your defined lockout threshold number of attempts to cause a lockout. A user has (*threshold_limit * datacenter_count*) number of bad attempts if the user hits each datacenter before a lockout occurs. Additionally, due to each datacenter tracking lockout independently, a user can be locked out of one datacenter, but not another.
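The worst-case math in the paragraph above can be illustrated with a small sketch (hypothetical numbers; the actual datacenter count handling a given user's requests is not published):

```python
def max_bad_attempts(threshold_limit: int, datacenter_count: int) -> int:
    """Worst-case number of failed sign-ins before every datacenter has
    independently reached its own lockout threshold."""
    return threshold_limit * datacenter_count

# With a lockout threshold of 10 and, say, 3 datacenters handling the
# user's requests, a lockout everywhere could take up to:
print(max_bad_attempts(10, 3))  # 30
```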
## Next steps
active-directory Howto Conditional Access Policy Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md
Conditional Access policies are powerful tools, we recommend excluding the follo
## Create a Conditional Access policy
-The following steps will help create a Conditional Access policy to require those with access to the [Microsoft Azure Management](concept-conditional-access-cloud-apps.md#microsoft-azure-management) app to perform multi-factor authentication.
+The following steps will help create a Conditional Access policy to require those with access to the [Microsoft Azure Management](concept-conditional-access-cloud-apps.md#microsoft-azure-management) app to perform multi-factor authentication. For Azure Government, this should be the Azure Government Cloud Management API app.
1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
active-directory Custom Rbac For Developers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/custom-rbac-for-developers.md
+
+title: Custom role-based access control (RBAC) for application developers - Microsoft identity platform
+description: Learn about what custom RBAC is and why it's important to implement in your applications.
+
+Last updated : 06/28/2021
+
+#Customer intent: As a developer, I want to learn about custom RBAC and why I need to use it in my application.
+
+# Role-based access control for application developers
+
+Role-based access control (RBAC) allows certain users or groups to have specific permissions regarding which resources they have access to, what they can do with those resources, and who manages which resources. This article explains application-specific role-based access control.
+
+> [!NOTE]
+> Application role-based access control differs from [Azure role-based access control](/azure/role-based-access-control/overview) and [Azure AD role-based access control](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which helps you manage Azure resources. Azure AD RBAC allows you to manage Azure AD resources.
+
+## What are roles?
+
+Role-based access control (RBAC) is a popular mechanism to enforce authorization in applications. When using RBAC, an application developer defines roles rather than authorizing individual users or groups. An administrator can then assign roles to different users and groups to control who has access to what content and functionality.
+
+RBAC helps you, as an app developer, manage resources and what users can do with those resources. RBAC also allows an app developer to control what areas of an app users have access to. While admins can control which users have access to an app using the *User assignment required* property, developers need to account for specific users within the app and what users can do within the app.
+
+As an app developer, you need to first create a role definition within the app's registration section in the Azure AD admin center. The role definition includes a value that is returned for users who are assigned to that role. A developer can then use this value to implement application logic to determine what those users can or can't do in an application.
+
+## Options for adding RBAC to apps
+
+There are several considerations that must be managed when including role-based access control authorization in an application. These include:
+- Defining the roles that are required by an application's authorization needs.
+- Applying, storing, and retrieving the pertinent roles for authenticated users.
+- Affecting the desired application behavior based on the roles assigned to the current user.
+
+Once you define the roles, the Microsoft identity platform supports several different solutions that can be used to apply, store, and retrieve role information for authenticated users. These solutions include app roles, Azure AD groups, and the use of custom datastores for user role information.
+
+Developers have the flexibility to provide their own implementation for how role assignments are to be interpreted as application permissions. This can involve leveraging middleware or other functionality provided by their applications' platform or related libraries. Apps will typically receive user role information as claims and will decide user permissions based on those claims.
+
+### App roles
+
+Azure AD supports declaring app roles for an application registration. When a user signs into an application, Azure AD will include a [roles claim](./access-tokens.md#payload-claims) for each role that the user has been granted for that application. Applications that receive tokens that contain these claims can then use this information to determine what permissions the user may exercise based on the roles they're assigned.
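As a sketch of the check described above (hypothetical claim values; `token_claims` stands in for an already-validated, decoded JWT payload containing the roles claim):

```python
def user_has_role(token_claims: dict, required_role: str) -> bool:
    """Check the 'roles' claim emitted for the user's app role assignments.
    Assumes token_claims is a validated, decoded token payload."""
    return required_role in token_claims.get("roles", [])

# Hypothetical role values defined in an app registration:
claims = {"sub": "user-123", "roles": ["Orders.Reader", "Orders.Approver"]}
print(user_has_role(claims, "Orders.Approver"))  # True
print(user_has_role(claims, "Admin"))            # False
```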
+
+### Groups
+
+Developers can also use [Azure AD groups](../fundamentals/active-directory-manage-groups.md) to implement RBAC in their applications, where the users' memberships in specific groups are interpreted as their role memberships. When using Azure AD groups, Azure AD will include a [groups claim](./access-tokens.md#payload-claims) that will include the identifiers of all of the groups to which the user is assigned within the current Azure AD tenant. Applications that receive tokens that contain these claims can then use this information to determine what permissions the user may exercise based on the roles they're assigned.
+
+> [!IMPORTANT]
+> When working with groups, developers need to be aware of the concept of an [overage claim](./access-tokens.md#payload-claims). By default, if a user is a member of more groups than the overage limit (150 for SAML tokens, 200 for JWT tokens, 6 if using the implicit flow), Azure AD will not emit a groups claim in the token. Instead, it will include an "overage claim" in the token that indicates the token's consumer will need to query the Graph API to retrieve the user's group memberships. For more information about working with overage claims, see [Claims in access tokens](./access-tokens.md#claims-in-access-tokens). It is possible to only emit groups that are assigned to an application, though [group-based assignment](../manage-apps/assign-user-or-group-access-portal.md) does require Azure Active Directory Premium P1 or P2 edition.
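A minimal detection sketch for the overage case (the `_claim_names`/`_claim_sources` pair is the JWT overage indicator described in the token reference; the endpoint value here is a placeholder, and this does not perform the actual Graph call):

```python
def needs_group_lookup(token_claims: dict) -> bool:
    """Return True when the groups claim was omitted and an overage
    indicator was emitted instead, meaning the app must query the
    Graph API for the user's group memberships."""
    if "groups" in token_claims:
        return False  # groups were delivered directly in the token
    return "groups" in token_claims.get("_claim_names", {})

# Hypothetical overage-shaped payload:
overage_claims = {
    "_claim_names": {"groups": "src1"},
    "_claim_sources": {"src1": {"endpoint": "https://graph.microsoft.com/"}},
}
print(needs_group_lookup(overage_claims))        # True
print(needs_group_lookup({"groups": ["id-1"]}))  # False
```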
+
+### Custom data store
+
+App roles and groups both store information about user assignments in the Azure AD directory. Another option available to developers is to maintain user role information outside of the directory in a custom data store: for example, a SQL database, Azure Table storage, or the Azure Cosmos DB Table API.
+
+Using custom storage allows developers extra customization and control over how to assign roles to users and how to represent them. However, the extra flexibility also introduces more responsibility. For example, there's no mechanism currently available to include this information in tokens returned from Azure AD. If developers maintain role information in a custom data store, they'll need to have the apps retrieve the roles. This is typically done using extensibility points defined in the middleware available to the platform that is being used to develop the application. Furthermore, developers are responsible for properly securing the custom data store.
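A minimal sketch of the custom-store approach (an in-memory dict stands in for the SQL/Table/Cosmos store, and the names are hypothetical; in a real app, middleware would retrieve and attach the roles after authentication):

```python
# Hypothetical role store keyed by user object ID; a real app would back
# this with SQL Database, Azure Table storage, or Cosmos DB, and would
# secure access to the store itself.
ROLE_STORE = {
    "user-123": {"Reader", "Approver"},
}

def roles_for_user(object_id: str) -> set:
    """Retrieve roles at sign-in time, since they are not in the token."""
    return ROLE_STORE.get(object_id, set())

print(sorted(roles_for_user("user-123")))  # ['Approver', 'Reader']
print(roles_for_user("unknown"))           # set()
```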
+
+> [!NOTE]
+> Using [Azure AD B2C Custom policies](/azure/active-directory-b2c/custom-policy-overview) it is possible to interact with custom data stores and to include custom claims within a token.
+
+## Choosing an approach
+
+In general, app roles are the recommended solution. App roles provide the simplest programming model and are purpose-made for RBAC implementations. However, specific application requirements may indicate that a different approach would be a better solution.
+
+Developers can use app roles to control whether a user can sign into an app, or whether an app can obtain an access token for a web API. App roles are preferred over Azure AD groups by developers when they want to describe and control the parameters of authorization in their app themselves. For example, an app using groups for authorization will break in another tenant, as both the group ID and name could be different. An app using app roles remains safe. In fact, assigning groups to app roles is popular with SaaS apps for the same reasons.
+
+Although either app roles or groups can be used for authorization, key differences between them can influence which is the best solution for a given scenario.
+
+| |App Roles |Azure AD Groups |Custom Data Store|
+|---|---|---|---|
+|**Programming model** |**Simplest**. They are specific to an application and are defined in the app registration. They move with the application.|**More complex**. Group IDs vary between tenants and overage claims may need to be considered. Groups aren't specific to an app, but to an Azure AD tenant.|**Most complex**. Developers must implement means by which role information is both stored and retrieved.|
+|**Role values are static between Azure AD tenants**|Yes |No |Depends on the implementation.|
+|**Role values can be used in multiple applications**|No, unless role configuration is duplicated in each app registration.|Yes |Yes |
+|**Information stored within directory**|Yes |Yes |No |
+|**Information is delivered via tokens**|Yes (roles claim) |Yes* (groups claim) |No. Retrieved at runtime via custom code. |
+|**Lifetime**|Lives in app registration in directory. Removed when the app registration is removed.|Lives in directory. Remains intact even if the app registration is removed. |Lives in custom data store. Not tied to app registration.|
+
+> [!NOTE]
+> Yes* - In the case of an overage, *groups claims* may need to be retrieved at runtime.
+
+## Next steps
+
+- [How to add app roles to your application and receive them in the token](./howto-add-app-roles-in-azure-ad-apps.md).
+- [Register an application with the Microsoft identity platform](./quickstart-register-app.md).
+- [Azure Identity Management and access control security best practices](/azure/security/fundamentals/identity-management-best-practices).
active-directory Msal B2c Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-b2c-overview.md
MSAL.js enables [single-page applications](../../active-directory-b2c/applicatio
- Users **can** authenticate with their social and local identities.
- Users **can** be authorized to access Azure AD B2C protected resources (but not Azure AD protected resources).
-- Users **cannot** obtain tokens for Microsoft APIs (e.g. MS Graph API) using [delegated permissions](/azure/active-directory/develop/v2-permissions-and-consent#permission-types).
-- Users with administrator privileges **can** obtain tokens for Microsoft APIs (e.g. MS Graph API) using [delegated permissions](/azure/active-directory/develop/v2-permissions-and-consent#permission-types).
+- Users **cannot** obtain tokens for Microsoft APIs (e.g. MS Graph API) using [delegated permissions](./v2-permissions-and-consent.md#permission-types).
+- Users with administrator privileges **can** obtain tokens for Microsoft APIs (e.g. MS Graph API) using [delegated permissions](./v2-permissions-and-consent.md#permission-types).
For more information, see: [Working with Azure AD B2C](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/working-with-b2c.md)
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-token-cache-serialization.md
After it [acquires a token](msal-acquire-cache-tokens.md), Microsoft Authenticat
The recommendation is:
- In web apps and web APIs, use [token cache serializers from "Microsoft.Identity.Web"](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization). These serializers can even use a distributed database or cache system to store tokens.
- In ASP.NET Core [web apps](scenario-web-app-call-api-overview.md) and [web API](scenario-web-api-call-api-overview.md), use Microsoft.Identity.Web as a higher-level API in ASP.NET Core.
- - In ASP.NET classic, .NET Core, .NET framework, use MSAL.NET directly with [token cache serialization adapters for MSAL](https://aka.ms/ms-id-web/token-cache-serialization-msal) provided in Microsoft.Identity.Web.
+ - In ASP.NET classic, .NET Core, .NET framework, use MSAL.NET directly with [token cache serialization adapters for MSAL]() provided in Microsoft.Identity.Web.
- In desktop applications (which can use the file system to store tokens), use [Microsoft.Identity.Client.Extensions.Msal](https://github.com/AzureAD/microsoft-authentication-extensions-for-dotnet/wiki/Cross-platform-Token-Cache) with MSAL.NET.
- In mobile applications (Xamarin.iOS, Xamarin.Android, Universal Windows Platform) don't do anything, as MSAL.NET handles the cache for you: these platforms have secure storage.
The following samples illustrate token cache serialization.
| Sample | Platform | Description |
| -- | -- | -- |
|[active-directory-dotnet-desktop-msgraph-v2](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | Desktop (WPF) | Windows Desktop .NET (WPF) application calling the Microsoft Graph API. ![Diagram shows a topology with Desktop App WPF TodoListClient flowing to Azure AD by acquiring a token interactively and to Microsoft Graph.](media/msal-net-token-cache-serialization/topology.png)|
-|[active-directory-dotnet-v1-to-v2](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2) | Desktop (Console) | Set of Visual Studio solutions illustrating the migration of Azure AD v1.0 applications (using ADAL.NET) to Microsoft identity platform applications (using MSAL.NET). In particular, see [Token Cache Migration](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/TokenCacheMigration/README.md)|
+|[active-directory-dotnet-v1-to-v2](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2) | Desktop (Console) | Set of Visual Studio solutions illustrating the migration of Azure AD v1.0 applications (using ADAL.NET) to Microsoft identity platform applications (using MSAL.NET). In particular, see [Token Cache Migration](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/TokenCacheMigration/README.md)|
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-aadsts-error-codes.md
For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS50048 | SubjectMismatchesIssuer - Subject mismatches Issuer claim in the client assertion. Contact the tenant admin. |
| AADSTS50049 | NoSuchInstanceForDiscovery - Unknown or invalid instance. |
| AADSTS50050 | MalformedDiscoveryRequest - The request is malformed. |
-| AADSTS50053 | IdsLocked - The account is locked because the user tried to sign in too many times with an incorrect user ID or password. The user is blocked due to repeated sign-in attempts. See [Remediate risks and unblock users](/azure/active-directory/identity-protection/howto-unblock-user). |
-| AADSTS50055 | InvalidPasswordExpiredPassword - The password is expired. The user's password is expired, and therefore their login or session was ended. They will be offered the opportunity to reset it, or may ask an admin to reset it via [Reset a user's password using Azure Active Directory](/azure/active-directory/fundamentals/active-directory-users-reset-password-azure-portal). |
+| AADSTS50053 | IdsLocked - The account is locked because the user tried to sign in too many times with an incorrect user ID or password. The user is blocked due to repeated sign-in attempts. See [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md). |
+| AADSTS50055 | InvalidPasswordExpiredPassword - The password is expired. The user's password is expired, and therefore their login or session was ended. They will be offered the opportunity to reset it, or may ask an admin to reset it via [Reset a user's password using Azure Active Directory](../fundamentals/active-directory-users-reset-password-azure-portal.md). |
| AADSTS50056 | Invalid or null password: password does not exist in the directory for this user. The user should be asked to enter their password again. |
| AADSTS50057 | UserDisabled - The user account is disabled. The user object in Active Directory backing this account has been disabled. An admin can re-enable this account [through PowerShell](/powershell/module/activedirectory/enable-adaccount). |
| AADSTS50058 | UserInformationNotProvided - Session information is not sufficient for single-sign-on. This means that a user is not signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
For example, if you received the error code "AADSTS50058" then do a search in [h
## Next steps
-* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
+* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
The following samples illustrate web applications that sign in users. Some sampl
> | ASP.NET |[GitHub repo](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [MSAL.NET](https://aka.ms/msal-net) | |
> | ASP.NET |[GitHub repo](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) | [Admin Restricted Scopes <br/> &#8226; Sign in users <br/> &#8226; call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) | [MSAL.NET](https://aka.ms/msal-net) | |
> | ASP.NET |[GitHub repo](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp) | Microsoft Graph Training Sample | [MSAL.NET](https://aka.ms/msal-net) | |
-> | Java </p> Spring |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial) | Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java <br/> AAD Boot Starter | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
-> | Java </p> Servlets |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication) | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
-> | Java |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-webapp) | Sign in users, call Microsoft Graph | MSAL Java | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
-> | Java </p> Spring|[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-webapi) | Sign in users & call Microsoft Graph via OBO </p> &#8226; web API | MSAL Java | &#8226; [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) <br/> &#8226; [On-Behalf-Of (OBO) flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow) |
-> | Node.js </p> Express |[GitHub repo](https://github.com/Azure-Samples/ms-identity-node) | Express web app sample <br/> &#8226; Sign in users | MSAL Node | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
-> | Node.js </p> Express |[GitHub repo](https://github.com/Azure-Samples/ms-identity-node) | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) | MSAL Node | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
-> | Python </p> Flask |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | Flask Series <br/> &#8226; Sign in users <br/> &#8226; Sign in users (B2C) <br/> &#8226; Call Microsoft Graph <br/> &#8226; Deploy to Azure App Service | MSAL Python | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
-> | Python </p> Django |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-django-tutorial) | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| MSAL Python | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
-> | Python </p> Flask |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-webapp) | Flask standalone sample <br/> [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-webapp) | MSAL Python | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
+> | Java </p> Spring |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial) | Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java <br/> AAD Boot Starter | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
+> | Java </p> Servlets |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication) | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
+> | Java |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-webapp) | Sign in users, call Microsoft Graph | MSAL Java | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
+> | Java </p> Spring|[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-webapi) | Sign in users & call Microsoft Graph via OBO </p> &#8226; web API | MSAL Java | &#8226; [Auth code flow](./v2-oauth2-auth-code-flow.md) <br/> &#8226; [On-Behalf-Of (OBO) flow](./v2-oauth2-on-behalf-of-flow.md) |
+> | Node.js </p> Express |[GitHub repo](https://github.com/Azure-Samples/ms-identity-node) | Express web app sample <br/> &#8226; Sign in users | MSAL Node | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
+> | Node.js </p> Express |[GitHub repo](https://github.com/Azure-Samples/ms-identity-node) | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) | MSAL Node | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
+> | Python </p> Flask |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | Flask Series <br/> &#8226; Sign in users <br/> &#8226; Sign in users (B2C) <br/> &#8226; Call Microsoft Graph <br/> &#8226; Deploy to Azure App Service | MSAL Python | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
+> | Python </p> Django |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-django-tutorial) | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| MSAL Python | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
+> | Python </p> Flask |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-webapp) | Flask standalone sample <br/> [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-webapp) | MSAL Python | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
> | Ruby |[GitHub repo](https://github.com/microsoftgraph/msgraph-training-rubyrailsapp) | Graph Training <br/> &#8226; [Sign in and Microsoft Graph](https://github.com/microsoftgraph/msgraph-training-rubyrailsapp) | | | ## Desktop and mobile public client apps
To learn about [samples](https://github.com/microsoftgraph/msgraph-community-sam
## See also
-[Microsoft Graph API conceptual and reference](/graph/use-the-api?context=graph%2fapi%2fbeta&view=graph-rest-beta&preserve-view=true)
+[Microsoft Graph API conceptual and reference](/graph/use-the-api?context=graph%2fapi%2fbeta&view=graph-rest-beta&preserve-view=true)
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
Instead of `clientapp.AddInMemoryTokenCache()`, you can also use more advanced c
}); ```
-For details see [Token cache serialization for MSAL.NET](https://aka.ms/ms-id-web/token-cache-serialization-msal).
+For details see [Token cache serialization for MSAL.NET](./msal-net-token-cache-serialization.md).
# [Java](#tab/java)
def _build_msal_app(cache=None):
At this point, when the user signs in, a token is stored in the token cache. Let's see how it's then used in other parts of the web app.
-[Remove accounts from the cache on global sign-out](scenario-web-app-call-api-sign-in.md)
+[Remove accounts from the cache on global sign-out](scenario-web-app-call-api-sign-in.md)
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-howto-app-gallery-listing.md
# Publish your app to the Azure AD app gallery
-You can publish your app in the Azure Active Directory (Azure AD) app gallery. When your app is published, it will show up as an option for customers when they are [adding apps to their tenant](/en-us/azure/active-directory/manage-apps/add-application-portal).
+You can publish your app in the Azure Active Directory (Azure AD) app gallery. When your app is published, it will show up as an option for customers when they are [adding apps to their tenant](../manage-apps/add-application-portal.md).
The steps to publishing your app in the Azure AD app gallery are: 1. Prerequisites
Here's the flow of customer-requested applications.
## Next steps * [Build a SCIM endpoint and configure user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md)
-* [Authentication scenarios for Azure AD](authentication-flows-app-scenarios.md)
+* [Authentication scenarios for Azure AD](authentication-flows-app-scenarios.md)
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/assign-local-admin.md
Starting with Windows 10 version 2004, you can use Azure AD groups to manage adm
> [!NOTE] > Starting in the Windows 10 20H2 update, we recommend using [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) policy instead of the Restricted Groups policy. - Currently, there's no UI in Intune to manage these policies and they need to be configured using [Custom OMA-URI Settings](/mem/intune/configuration/custom-settings-windows-10). A few considerations for using either of these policies: - Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID is defined by the property `securityIdentifier` in the API response.
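As a minimal sketch of reading the `securityIdentifier` property out of the Graph response (the response body below is a hypothetical, truncated group object and the SID value is made up):

```python
import json

def extract_security_identifier(graph_response_body):
    """Return the securityIdentifier property from a Microsoft Graph
    group object; this is the SID the local-administrators policy needs."""
    group = json.loads(graph_response_body)
    return group["securityIdentifier"]

# Hypothetical, truncated response for GET https://graph.microsoft.com/v1.0/groups/{id}
sample = (
    '{"id": "a1b2c3d4", "displayName": "Device Admins", '
    '"securityIdentifier": "S-1-12-1-1337-1234567890-123456789-123456789"}'
)
print(extract_security_identifier(sample))
```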
Currently, there's no UI in Intune to manage these policies and they need to be
- Managing local administrators using Azure AD groups is not applicable to Hybrid Azure AD joined or Azure AD Registered devices. - While the Restricted Groups policy existed prior to Windows 10 version 2004, it did not support Azure AD groups as members of a device's local administrators group.
+- Azure AD groups deployed to a device with either of the two policies do not apply to remote desktop connections. To control remote desktop permissions for Azure AD joined devices, you need to add the individual user's SID to the appropriate group.
> [!IMPORTANT] > Windows sign-in with Azure AD supports evaluation of up to 20 groups for administrator rights. We recommend having no more than 20 Azure AD groups on each device to ensure that administrator rights are correctly assigned. This limitation also applies to nested groups.
active-directory Azuread Join Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/azuread-join-sso.md
If you have a hybrid environment, with both Azure AD and on-premises AD, it is l
>[!NOTE] > Windows Hello for Business requires additional configuration to enable on-premises SSO from an Azure AD joined device. For more information, see [Configure Azure AD joined devices for On-premises Single-Sign On using Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-base). >
-> FIDO2 security key based passwordless authentication with Windows 10 requires additional configuration to enable on-premises SSO from an Azure AD joined device. For more information, see [Enable passwordless security key sign-in to on-premises resources with Azure Active Directory](/azure/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises).
+> FIDO2 security key based passwordless authentication with Windows 10 requires additional configuration to enable on-premises SSO from an Azure AD joined device. For more information, see [Enable passwordless security key sign-in to on-premises resources with Azure Active Directory](../authentication/howto-authentication-passwordless-security-key-on-premises.md).
During an access attempt to a resource requesting Kerberos or NTLM in the user's on-premises environment, the device:
You can't share files with other users on an Azure AD-joined device.
## Next steps
-For more information, see [What is device management in Azure Active Directory?](overview.md)
+For more information, see [What is device management in Azure Active Directory?](overview.md)
active-directory Howto Device Identity Virtual Desktop Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-device-identity-virtual-desktop-infrastructure.md
When deploying non-persistent VDI, Microsoft recommends that IT administrators i
- For non-persistent VDI deployments on Windows current and down-level, you should delete devices that have **ApproximateLastLogonTimestamp** of older than 15 days. > [!NOTE]
-> When using non-persistent VDI, if you want to prevent a device join state ensure the following registry key is set:
+> When using non-persistent VDI, if you want to prevent adding a work or school account, ensure the following registry key is set:
> `HKLM\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin: "BlockAADWorkplaceJoin"=dword:00000001` > > Ensure you are running Windows 10, version 1803 or higher.
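As a sketch, the registry value above corresponds to a .reg file like the following, which could be imported into the image (validate against your image build process before applying):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin]
"BlockAADWorkplaceJoin"=dword:00000001
```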
When deploying non-persistent VDI, Microsoft recommends that IT administrators i
> * `%localappdata%\Microsoft\TokenBroker` > * `HKEY_CURRENT_USER\SOFTWARE\Microsoft\IdentityCRL` > * `HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\AAD`
+> * `HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WorkplaceJoin`
>
+> Roaming of the work account's device certificate is not supported. The certificate, issued by "MS-Organization-Access", is stored in the Personal (MY) certificate store of the current user.
### Persistent VDI
active-directory Troubleshoot Device Dsregcmd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-device-dsregcmd.md
This section lists the device join state parameters. The table below lists the c
## Device details
-Displayed only when the device is Azure AD joined or hybrid Azure AD joined (not Azure AD registered). This section lists device identifying details stored in the cloud.
+Displayed only when the device is Azure AD joined or hybrid Azure AD joined (not Azure AD registered). This section lists device identifying details stored in Azure AD.
- **DeviceId:** Unique ID of the device in the Azure AD tenant - **Thumbprint:** Thumbprint of the device certificate
Displayed only when the device is Azure AD joined or hybrid Azure AD joined (not
- **KeyProvider:** KeyProvider (Hardware/Software) used to store the device private key. - **TpmProtected:** "YES" if the device private key is stored in a Hardware TPM.
+> [!NOTE]
+> The **DeviceAuthStatus** field was added in **Windows 10 May 2021 Update (version 21H1)**.
+
+- **DeviceAuthStatus:** Performs a check to determine the device's health in Azure AD.
+"SUCCESS" if the device is present and enabled in Azure AD.
+"FAILED. Device is either disabled or deleted" if the device is disabled or deleted. For more information, see [this FAQ entry](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-devices).
+"FAILED. ERROR" if the test was unable to run. This test requires network connectivity to Azure AD.
+ ### Sample device details output ```
Displayed only when the device is Azure AD joined or hybrid Azure AD joined (not
KeyContainerId : 13e68a58-xxxx-xxxx-xxxx-a20a2411xxxx KeyProvider : Microsoft Software Key Storage Provider TpmProtected : NO
+ DeviceAuthStatus : SUCCESS
+-+ ```
This section lists the status of various attributes for the user currently logge
- **CanReset:** Denotes if the Windows Hello key can be reset by the user. - **Possible values:** DestructiveOnly, NonDestructiveOnly, DestructiveAndNonDestructive, or Unknown if error. - **WorkplaceJoined:** Set to "YES" if Azure AD registered accounts have been added to the device in the current NTUSER context.-- **WamDefaultSet:** Set to "YES" if a WAM default WebAccount is created for the logged in user. This field could display an error if dsreg /status is run from an elevated command prompt.
+- **WamDefaultSet:** Set to "YES" if a WAM default WebAccount is created for the logged-in user. This field could display an error if dsregcmd /status is run from an elevated command prompt.
- **WamDefaultAuthority:** Set to "organizations" for Azure AD. - **WamDefaultId:** Always "https://login.microsoft.com" for Azure AD. - **WamDefaultGUID:** The WAM provider's (Azure AD/Microsoft account) GUID for the default WAM WebAccount.
This section can be ignored for Azure AD registered devices.
- **EnterprisePrtExpiryTime:** Set to the time in UTC when the PRT is going to expire if it is not renewed. - **EnterprisePrtAuthority:** ADFS authority URL
+>[!NOTE]
+> The following PRT diagnostic fields were added in **Windows 10 May 2021 Update (version 21H1)**.
+
+>[!NOTE]
+> Diagnostic info displayed under the **AzureAdPrt** field is for Azure AD PRT acquisition/refresh, and diagnostic info displayed under the **EnterprisePrt** field is for Enterprise PRT acquisition/refresh.
+
+>[!NOTE]
+>Diagnostic info is displayed only if the acquisition/refresh failure happened after the last successful PRT update time (AzureAdPrtUpdateTime/EnterprisePrtUpdateTime).
+>On a shared device, this diagnostic info could be from a different user's logon attempt.
+
+- **AcquirePrtDiagnostics:** Set to "PRESENT" if acquire PRT diagnostic info is present in the logs.
+This field is skipped if no diagnostic info is available.
+- **Previous Prt Attempt:** Local time in UTC at which the failed PRT attempt occurred.
+- **Attempt Status:** Client error code returned (HRESULT).
+- **User Identity:** UPN of the user for whom the PRT attempt happened.
+- **Credential Type:** Credential used to acquire/refresh PRT. Common credential types are Password and NGC (Windows Hello).
+- **Correlation ID:** Correlation ID sent by the server for the failed PRT attempt.
+- **Endpoint URI:** Last endpoint accessed before the failure.
+- **HTTP Method:** HTTP method used to access the endpoint.
+- **HTTP Error:** WinHttp transport error code. WinHttp errors can be found [here](/windows/win32/winhttp/error-messages).
+- **HTTP Status:** HTTP status returned by the endpoint.
+- **Server Error Code:** Error code from server.
+- **Server Error Description:** Error message from server.
+- **RefreshPrtDiagnostics:** Set to "PRESENT" if refresh PRT diagnostic info is present in the logs.
+This field is skipped if no diagnostic info is available.
+The diagnostic info fields are the same as for **AcquirePrtDiagnostics**.
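The fields above appear as plain "Field : Value" lines in the dsregcmd output. A minimal sketch of reading them into a dictionary (the parsing approach is an assumption, not an official tool; splitting on the first " : " keeps colons inside URL values intact):

```python
def parse_prt_diagnostics(block):
    """Parse the 'Field : Value' lines of the SSO state section into a dict."""
    fields = {}
    for line in block.splitlines():
        if " : " in line:
            # Split on the first ' : ' so URL values keep their colons.
            key, _, value = line.partition(" : ")
            fields[key.strip()] = value.strip()
    return fields

# Abbreviated sample, based on the output shown in this article.
sample = """\
 AzureAdPrt : NO
 Attempt Status : 0xc000006d
 Endpoint URI : https://login.microsoftonline.com/tenant/oauth2/token/
"""
diag = parse_prt_diagnostics(sample)
print(diag["Attempt Status"])
```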
++ ### Sample SSO state output ```
This section can be ignored for Azure AD registered devices.
| SSO State | +-+
- AzureAdPrt : YES
- AzureAdPrtUpdateTime : 2019-01-24 19:15:26.000 UTC
- AzureAdPrtExpiryTime : 2019-02-07 19:15:26.000 UTC
+ AzureAdPrt : NO
AzureAdPrtAuthority : https://login.microsoftonline.com/96fa76d0-xxxx-xxxx-xxxx-eb60cc22xxxx
+ AcquirePrtDiagnostics : PRESENT
+ Previous Prt Attempt : 2020-07-18 20:10:33.789 UTC
+ Attempt Status : 0xc000006d
+ User Identity : john@contoso.com
+ Credential Type : Password
+ Correlation ID : 63648321-fc5c-46eb-996e-ed1f3ba7740f
+ Endpoint URI : https://login.microsoftonline.com/96fa76d0-xxxx-xxxx-xxxx-eb60cc22xxxx/oauth2/token/
+ HTTP Method : POST
+ HTTP Error : 0x0
+ HTTP status : 400
+ Server Error Code : invalid_grant
+ Server Error Description : AADSTS50126: Error validating credentials due to invalid username or password.
EnterprisePrt : YES EnterprisePrtUpdateTime : 2019-01-24 19:15:33.000 UTC EnterprisePrtExpiryTime : 2019-02-07 19:15:33.000 UTC
The following example shows diagnostics tests are passing but the registration a
This section displays the output of sanity checks performed on a device joined to the cloud. -- **AadRecoveryEnabled:** If "YES", the keys stored in the device are not usable and the device is marked for recovery. The next sign in will trigger the recovery flow and re-register the device.-- **KeySignTest:** If "PASSED" the device keys are in good health. If KeySignTest fails, the device will usually be marked for recovery. The next sign in will trigger the recovery flow and re-register the device. For hybrid Azure AD joined devices the recovery is silent. While Azure AD joined or Azure AD registered, devices will prompt for user authentication to recover and re-register the device if necessary. **The KeySignTest requires elevated privileges.**
+- **AadRecoveryEnabled:** If "YES", the keys stored in the device are not usable and the device is marked for recovery. The next sign-in will trigger the recovery flow and re-register the device.
+- **KeySignTest:** If "PASSED" the device keys are in good health. If KeySignTest fails, the device will usually be marked for recovery. The next sign-in will trigger the recovery flow and re-register the device. For hybrid Azure AD joined devices the recovery is silent. While Azure AD joined or Azure AD registered, devices will prompt for user authentication to recover and re-register the device if necessary. **The KeySignTest requires elevated privileges.**
#### Sample post-join diagnostics output
This section performs the prerequisite checks for the provisioning of Windows He
> You may not see NGC prerequisite check details in dsregcmd /status if the user already successfully configured WHFB. - **IsDeviceJoined:** Set to "YES" if the device is joined to Azure AD.-- **IsUserAzureAD:** Set to "YES" if the logged in user is present in Azure AD .
+- **IsUserAzureAD:** Set to "YES" if the logged in user is present in Azure AD.
- **PolicyEnabled:** Set to "YES" if the WHFB policy is enabled on the device. - **PostLogonEnabled:** Set to "YES" if WHFB enrollment is triggered natively by the platform. If it's set to "NO", it indicates that Windows Hello for Business enrollment is triggered by a custom mechanism - **DeviceEligible:** Set to "YES" if the device meets the hardware requirement for enrolling with WHFB.
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Use Event Viewer logs to locate the phase and errorcode for the join failures.
##### Other Errors - **DSREG_AUTOJOIN_ADCONFIG_READ_FAILED** (0x801c001d/-2145648611)
- - Reason: EventID 220 is present in User Device Registration event logs. Windows cannot access the computer object in Active Directory. A Windows error code may be included in the event. For error codes ERROR_NO_SUCH_LOGON_SESSION (1312) and ERROR_NO_SUCH_USER (1317), these are related to replication issues in on-premises AD.
+ - Reason: EventID 220 is present in User Device Registration event logs. Windows cannot access the computer object in Active Directory. A Windows error code may be included in the event. For error codes ERROR_NO_SUCH_LOGON_SESSION (1312) and ERROR_NO_SUCH_USER (1317), these error codes are related to replication issues in on-premises AD.
- Resolution: Troubleshoot replication issues in AD. Replication issues may be transient and may go away after a period of time. ##### Federated join server Errors
Use Event Viewer logs to locate the phase and errorcode for the join failures.
### Step 5: Collect logs and contact Microsoft Support
-Download the file Auth.zip from [https://github.com/CSS-Identity/DRS/tree/main/Auth](https://github.com/CSS-Identity/DRS/tree/main/Auth)
+Download the file Auth.zip from [https://cesdiagtools.blob.core.windows.net/windows/Auth.zip](https://cesdiagtools.blob.core.windows.net/windows/Auth.zip)
-1. Unzip the files and rename the included files **start-auth.txt** and **stop-auth.txt** to **start-auth.cmd** and **stop-auth.cmd**.
-1. From an elevated command prompt, run **start-auth.cmd**.
+1. Unzip the files to a folder such as c:\temp and change into the folder.
+1. From an elevated PowerShell session, run **.\start-auth.ps1 -v -accepteula**.
1. Use Switch Account to toggle to another session with the problem user. 1. Reproduce the issue. 1. Use Switch Account to toggle back to the admin session running the tracing.
-1. From an elevated command prompt, run **stop-auth.cmd**.
+1. From the elevated PowerShell session, run **.\stop-auth.ps1**.
1. Zip and send the folder **Authlogs** from the folder where the scripts were executed from.
+
+## Troubleshoot Post-Join Authentication issues
-## Troubleshoot Post-Join issues
+### Step 1: Retrieve PRT status using dsregcmd /status
-### Retrieve the join status
+**To retrieve the PRT status:**
-#### WamDefaultSet: YES and AzureADPrt: YES
+1. Open a command prompt.
+ > [!NOTE]
    > To get the PRT status, the command prompt should be run in the context of the logged-in user.
-These fields indicate whether the user has successfully authenticated to Azure AD when signing in to the device.
-If the values are **NO**, it could be due:
+2. Type `dsregcmd /status`.
-- Bad storage key in the TPM associated with the device upon registration (check the KeySignTest while running elevated).-- Alternate Login ID-- HTTP Proxy not found
+3. The "SSO state" section provides the current PRT status.
+
+4. If the AzureAdPrt field is set to "NO", there was an error acquiring the PRT from Azure AD.
+
+5. If the AzureAdPrtUpdateTime is more than 4 hours old, there is likely an issue refreshing the PRT. Lock and unlock the device to force a PRT refresh and check whether the time got updated.
+
+```
++-+
+| SSO State |
++-++
+ AzureAdPrt : YES
+ AzureAdPrtUpdateTime : 2020-07-12 22:57:53.000 UTC
+ AzureAdPrtExpiryTime : 2020-07-26 22:58:35.000 UTC
+ AzureAdPrtAuthority : https://login.microsoftonline.com/96fa76d0-xxxx-xxxx-xxxx-eb60cc22xxxx
+ EnterprisePrt : YES
+ EnterprisePrtUpdateTime : 2020-07-12 22:57:54.000 UTC
+ EnterprisePrtExpiryTime : 2020-07-26 22:57:54.000 UTC
+ EnterprisePrtAuthority : https://corp.hybridadfs.contoso.com:443/adfs
+++-+
+```
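The freshness check in step 5 can be sketched as follows (the timestamp format is taken from the sample output; the 4-hour threshold comes from the guidance above, and a fixed "now" is used only for illustration):

```python
from datetime import datetime, timedelta, timezone

def prt_is_stale(azure_ad_prt_update_time, now=None):
    """Return True if the PRT was last updated more than 4 hours ago."""
    updated = datetime.strptime(
        azure_ad_prt_update_time, "%Y-%m-%d %H:%M:%S.%f UTC"
    ).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return now - updated > timedelta(hours=4)

# Using the sample output's AzureAdPrtUpdateTime with a fixed "now":
fixed_now = datetime(2020, 7, 13, 6, 0, 0, tzinfo=timezone.utc)
print(prt_is_stale("2020-07-12 22:57:53.000 UTC", fixed_now))  # True
```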
+
+### Step 2: Find the error code
+
+### From dsregcmd output
+
+> [!NOTE]
+> Available from **Windows 10 May 2021 Update (version 21H1)**.
+
+The "Attempt Status" field under the AzureAdPrt field provides the status of the previous PRT attempt, along with other required debug information. For older Windows versions, this information needs to be extracted from the AAD analytic and operational logs.
+
+```
++-+
+| SSO State |
++-++
+ AzureAdPrt : NO
+ AzureAdPrtAuthority : https://login.microsoftonline.com/96fa76d0-xxxx-xxxx-xxxx-eb60cc22xxxx
+ AcquirePrtDiagnostics : PRESENT
+ Previous Prt Attempt : 2020-07-18 20:10:33.789 UTC
+ Attempt Status : 0xc000006d
+ User Identity : john@contoso.com
+ Credential Type : Password
+ Correlation ID : 63648321-fc5c-46eb-996e-ed1f3ba7740f
+ Endpoint URI : https://login.microsoftonline.com/96fa76d0-xxxx-xxxx-xxxx-eb60cc22xxxx/oauth2/token/
+ HTTP Method : POST
+ HTTP Error : 0x0
+ HTTP status : 400
+ Server Error Code : invalid_grant
+ Server Error Description : AADSTS50126: Error validating credentials due to invalid username or password.
+```
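The `Attempt Status` value is a 32-bit NTSTATUS code; the error list in step 3 below shows the same codes in both signed decimal and hex form. A quick conversion sketch (Python for illustration only):

```python
def ntstatus_forms(code: int) -> tuple:
    """Return the (signed decimal, hex string) forms of a 32-bit status code."""
    unsigned = code & 0xFFFFFFFF
    # NTSTATUS failure codes have the high bit set, so the signed
    # 32-bit interpretation is negative.
    signed = unsigned - 0x100000000 if unsigned >= 0x80000000 else unsigned
    return signed, f"0x{unsigned:08x}"

# STATUS_LOGON_FAILURE as it appears in dsregcmd output
print(ntstatus_forms(0xC000006D))  # prints (-1073741715, '0xc000006d')
```

This makes it easy to match a hex value from `dsregcmd` against the signed decimal values logged elsewhere.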
+
+#### From AAD analytic and operational logs
+
+Use Event Viewer to locate the log entries logged by the AAD CloudAP plugin during PRT acquisition.
+
+1. Open the AAD event logs in Event Viewer. They're located under Application and Services Logs > Microsoft > Windows > AAD.
+
+ > [!NOTE]
+ > CloudAP plugin logs error events into the Operational logs while the info events are logged to the Analytic logs. Both Analytic and Operational log events are required to troubleshoot issues.
+
+2. Event 1006 in the Analytic logs denotes the start of the PRT acquisition flow, and Event 1007 in the Analytic logs denotes the end of the flow. All events in the AAD logs (Analytic and Operational) logged between events 1006 and 1007 were logged as part of the PRT acquisition flow.
+
+3. Event 1007 logs the final error code.
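The 1006/1007 bracketing described above can be applied programmatically when sifting exported logs. A sketch over hypothetical `(event_id, message)` pairs (Python for illustration):

```python
def prt_acquisition_flows(events):
    """Group (event_id, message) pairs into PRT acquisition flows.

    A flow starts at event 1006 and ends at the next event 1007;
    event 1007 carries the flow's final error code.
    """
    flows, current = [], None
    for event_id, message in events:
        if event_id == 1006:
            current = []          # start of a new acquisition flow
        if current is not None:
            current.append((event_id, message))
        if event_id == 1007 and current is not None:
            flows.append(current)  # flow complete
            current = None
    return flows
```

Each returned flow holds every event logged between one 1006/1007 pair, which is the window to inspect for errors such as events 1081 and 1088.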
+
+### Step 3: Follow additional troubleshooting, based on the found error code, from the list below
+
+**STATUS_LOGON_FAILURE** (-1073741715/ 0xc000006d)
+
+**STATUS_WRONG_PASSWORD** (-1073741718/ 0xc000006a)
+
+Reason(s):
+- Device is unable to connect to the AAD authentication service
+- Received an error response (HTTP 400) from the AAD authentication service or WS-Trust endpoint.
+> [!NOTE]
+> WS-Trust is required for federated authentication
+
+Resolution:
+- If the on-premises environment requires an outbound proxy, the IT admin must ensure that the computer account of the device is able to discover and silently authenticate to the outbound proxy.
+- Events 1081 and 1088 (AAD operational logs) contain the server error code and error description for errors originating from the AAD authentication service and the WS-Trust endpoint, respectively. Common server error codes and their resolutions are listed in the next section. The first instance of Event 1022 (AAD analytic logs), preceding event 1081 or 1088, contains the URL being accessed.
+
+**STATUS_REQUEST_NOT_ACCEPTED** (-1073741616/ 0xc00000d0)
+
+Reason(s):
+- Received an error response (HTTP 400) from the AAD authentication service or WS-Trust endpoint.
+> [!NOTE]
+> WS-Trust is required for federated authentication
+
+Resolution:
+- Events 1081 and 1088 (AAD operational logs) contain the server error code and error description for errors originating from the AAD authentication service and the WS-Trust endpoint, respectively. Common server error codes and their resolutions are listed in the next section. The first instance of Event 1022 (AAD analytic logs), preceding event 1081 or 1088, contains the URL being accessed.
+
+**STATUS_NETWORK_UNREACHABLE** (-1073741252/ 0xc000023c)
+
+**STATUS_BAD_NETWORK_PATH** (-1073741634/ 0xc00000be)
+
+**STATUS_UNEXPECTED_NETWORK_ERROR** (-1073741628/ 0xc00000c4)
+
+Reason(s):
+- Received an error response (HTTP status > 400) from the AAD authentication service or WS-Trust endpoint.
+> [!NOTE]
+> WS-Trust is required for federated authentication
+- Network connectivity issue to a required endpoint
+
+Resolution:
+- For server errors, Events 1081 and 1088 (AAD operational logs) contain the error code and error description from the AAD authentication service and the WS-Trust endpoint, respectively. Common server error codes and their resolutions are listed in the next section.
+- For connectivity issues, Events 1022 (AAD analytic logs) and 1084 (AAD operational logs) contain the URL being accessed and the sub-error code from the network stack, respectively.
+
+**STATUS_NO_SUCH_LOGON_SESSION** (-1073741729/ 0xc000005f)
+
+Reason(s):
+- User realm discovery failed because the AAD authentication service was unable to find the user's domain.
+
+Resolution:
+- The domain of the user's UPN must be added as a custom domain in AAD. Event 1144 (AAD analytic logs) will contain the UPN provided.
+- If the on-premises domain name is non-routable (jdoe@contoso.local), configure an Alternate Login ID (AltID). References: [prerequisites](hybrid-azuread-join-plan.md), [configuring-alternate-login-id](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id)
+
+**AAD_CLOUDAP_E_OAUTH_USERNAME_IS_MALFORMED** (-1073445812/ 0xc004844c)
+
+Reason(s):
+- The user's UPN is not in the expected format.
+> [!NOTE]
+> - For AADJ devices the UPN is the text entered by the user in the LoginUI.
+> - For Hybrid Joined devices the UPN is returned from the domain controller during the login process.
+
+Resolution:
+- The user's UPN should be an Internet-style login name, based on the Internet standard [RFC 822](https://www.ietf.org/rfc/rfc0822.txt). Event 1144 (AAD analytic logs) will contain the UPN provided.
+- For hybrid-joined devices, ensure the domain controller is configured to return the UPN in the correct format. `whoami /upn` should display the configured UPN from the domain controller.
+- If the on-premises domain name is non-routable (jdoe@contoso.local), configure an Alternate Login ID (AltID). References: [prerequisites](hybrid-azuread-join-plan.md), [configuring-alternate-login-id](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id)
+
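The RFC 822 requirement above essentially means "one local part, one @, one routable dot-separated domain". A deliberately simplified sanity check (Python for illustration; this is not a full RFC 822 parser):

```python
import re

# Simplified check: local@domain with a dot-separated domain suffix.
# The real RFC 822 address grammar is far more permissive; this only
# catches common misconfigurations such as a missing @ sign or a
# DOMAIN\user-style name being passed where a UPN is expected.
UPN_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def upn_looks_valid(upn: str) -> bool:
    return bool(UPN_PATTERN.match(upn))
```

Note that a non-routable suffix such as `.local` still passes this shape check; routability has to be verified against the custom domains registered in the tenant.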
+**AAD_CLOUDAP_E_OAUTH_USER_SID_IS_EMPTY** (-1073445822/ 0xc0048442)
+
+Reason(s):
+- User SID missing in the ID token returned by the AAD authentication service.
+
+Resolution:
+- Ensure that a network proxy is not interfering with and modifying the server response.
+
+**AAD_CLOUDAP_E_WSTRUST_SAML_TOKENS_ARE_EMPTY** (-1073445695/ 0xc00484c1)
+
+Reason(s):
+- Received an error from WS-Trust endpoint.
+> [!NOTE]
+> WS-Trust is required for federated authentication
+
+Resolution:
+- Ensure that a network proxy is not interfering with and modifying the WS-Trust response.
+- Event 1088 (AAD operational logs) would contain the server error code and error description from the WS-Trust endpoint. Common server error codes and their resolutions are listed in the next section.
+
+**AAD_CLOUDAP_E_HTTP_PASSWORD_URI_IS_EMPTY** (-1073445749/ 0xc004848b)
+
+Reason:
+- MEX endpoint incorrectly configured. The MEX response does not contain any password URLs.
+
+Resolution:
+- Ensure that a network proxy is not interfering with and modifying the server response.
+- Fix the MEX configuration to return valid URLs in the response.
+
+**WC_E_DTDPROHIBITED** (-1072894385/ 0xc00cee4f)
+
+Reason:
+- The XML response from the WS-Trust endpoint included a DTD. A DTD is not expected in XML responses, and parsing the response will fail if one is included.
+> [!NOTE]
+> WS-Trust is required for federated authentication
+
+Resolution:
+- Fix the configuration in the identity provider to avoid sending a DTD in the XML response.
+- Event 1022 (AAD analytic logs) will contain the URL being accessed that is returning the XML response with a DTD.
+
+**Common Server Error codes:**
+
+**AADSTS50155: Device authentication failed**
+
+Reason:
+- AAD is unable to authenticate the device to issue a PRT.
+- Confirm the device has not been deleted or disabled in the Azure portal. [More Info](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-devices)
+
+Resolution:
+- Follow the steps listed [here](faq.yml#i-disabled-or-deleted-my-device-in-the-azure-portal-or-by-using-windows-powershell--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do) to re-register the device based on the device join type.
+
+**AADSTS50034: The user account <Account> does not exist in the <tenant id> directory**
+
+Reason:
+- AAD is unable to find the user account in the tenant.
+
+Resolution:
+- Ensure the user is typing the correct UPN.
+- Ensure the on-premises user account is being synced to AAD.
+- Event 1144 (AAD analytic logs) will contain the UPN provided.
+
+**AADSTS50126: Error validating credentials due to invalid username or password.**
+
+Reason:
+- The username and password entered by the user in the Windows LoginUI are incorrect.
+- If the tenant has Password Hash Sync enabled, the device is hybrid joined, and the user just changed the password, it is likely the new password hasn't synced to AAD.
+
+Resolution:
+- Wait for the AAD sync to complete to acquire a fresh PRT with the new credentials.
+
+**Common Network Error codes:**
+
+**ERROR_WINHTTP_TIMEOUT** (12002)
+
+**ERROR_WINHTTP_NAME_NOT_RESOLVED** (12007)
+
+**ERROR_WINHTTP_CANNOT_CONNECT** (12029)
+
+**ERROR_WINHTTP_CONNECTION_ERROR** (12030)
+
+Reason:
+- Common general network related issues.
+
+Resolution:
+- Events 1022 (AAD analytic logs) and 1084 (AAD operational logs) will contain the URL being accessed.
+- If the on-premises environment requires an outbound proxy, the IT admin must ensure that the computer account of the device is able to discover and silently authenticate to the outbound proxy.
+
+> [!NOTE]
+> Other network error codes are located [here](/windows/win32/winhttp/error-messages).
+
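For quick triage of a captured trace, the WinHTTP codes listed above (defined in the Windows SDK's winhttp.h) can be mapped to their symbolic names. An illustrative lookup table (Python, not part of any Microsoft tooling):

```python
# The four codes called out above, as defined in winhttp.h.
WINHTTP_ERRORS = {
    12002: "ERROR_WINHTTP_TIMEOUT",
    12007: "ERROR_WINHTTP_NAME_NOT_RESOLVED",
    12029: "ERROR_WINHTTP_CANNOT_CONNECT",
    12030: "ERROR_WINHTTP_CONNECTION_ERROR",
}

def winhttp_error_name(code: int) -> str:
    """Map a WinHTTP error code to its symbolic name, if known."""
    return WINHTTP_ERRORS.get(code, f"unknown WinHTTP error {code}")
```

For any code not in this short table, consult the full WinHTTP error message list linked in the note above.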
+### Step 4: Collect logs ###
+
+**Regular logs**
+
+1. Go to https://aka.ms/icesdptool, which automatically downloads a .cab file containing the diagnostic tool.
+2. Run the tool and repro your scenario. Once the repro is complete, finish the process.
+3. For Fiddler traces, accept the certificate requests that pop up.
+4. The wizard prompts you for a password to safeguard your trace files. Provide a password.
+5. Finally, open the folder where all the collected logs are stored. It's typically in a folder like
+   %LOCALAPPDATA%\ElevatedDiagnostics\<numbers>
+6. Contact support with the contents of latest.cab, which contains all the collected logs.
+
+**Network traces**
+
+> [!NOTE]
+> When collecting network traces, it is important NOT to use Fiddler during the repro.
+
+1. Run `netsh trace start scenario=InternetClient_dbg capture=yes persistent=yes`.
+2. Lock and unlock the device. For hybrid-joined devices, wait a minute or more to allow the PRT acquisition task to complete.
+3. Run `netsh trace stop`.
+4. Share nettrace.cab.
+
+## Known issues
+
+- Under Settings -> Accounts -> Access Work or School, Hybrid Azure AD joined devices may show two different accounts, one for Azure AD and one for on-premises AD, when connected to mobile hotspots or external WiFi networks. This is only a UI issue and does not have any impact on functionality.
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/directory-delete-howto.md
Previously updated : 12/02/2020 Last updated : 07/14/2021
When an Azure AD organization (tenant) is deleted, all resources that are contai
You can't delete a organization in Azure AD until it passes several checks. These checks reduce risk that deleting an Azure AD organization negatively impacts user access, such as the ability to sign in to Microsoft 365 or access resources in Azure. For example, if the organization associated with a subscription is unintentionally deleted, then users can't access the Azure resources for that subscription. The following conditions are checked:
-* There can be no users in the Azure AD organization (tenant) except one global administrator who is to delete the organization. Any other users must be deleted before the organization can be deleted. If users are synchronized from on-premises, then sync must first be turned off, and the users must be deleted in the cloud organization using the Azure portal or Azure PowerShell cmdlets.
+* There can be no users in the Azure AD tenant except one global administrator who is to delete the organization. Any other users must be deleted before the organization can be deleted. If users are synchronized from on-premises, then sync must first be turned off, and the users must be deleted in the cloud organization using the Azure portal or Azure PowerShell cmdlets.
* There can be no applications in the organization. Any applications must be removed before the organization can be deleted. * There can be no multi-factor authentication providers linked to the organization. * There can be no subscriptions for any Microsoft Online Services such as Microsoft Azure, Microsoft 365, or Azure AD Premium associated with the organization. For example, if a default Azure AD organization was created for you in Azure, you cannot delete this organization if your Azure subscription still relies on this organization for authentication. Similarly, you can't delete a organization if another user has associated a subscription with it.
You can put a subscription into the **Deprovisioned** state to be deleted in thr
![pass subscription check at deletion screen](./media/directory-delete-howto/delete-checks-passed.png)
+## Enterprise apps with no way to delete
+
+If you find that there are still enterprise applications that you can't delete in the portal, you can use the following PowerShell commands to remove them. For more information on this PowerShell command, see [Remove-AzureADServicePrincipal](/powershell/module/azuread/remove-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true).
+
+1. Open PowerShell as an administrator.
+1. Run `Connect-AzAccount -Tenant <TENANT_ID>`.
+1. Sign in with an account that has the Azure AD Global Administrator role.
+1. Run `Get-AzADServicePrincipal | ForEach-Object { Remove-AzADServicePrincipal -ObjectId $_.Id -Force }`
+ ## I have a trial subscription that blocks deletion There are [self-service sign-up products](/office365/admin/misc/self-service-sign-up) like Microsoft Power BI, Rights Management Services, Microsoft Power Apps, or Dynamics 365, individual users can sign up via Microsoft 365, which also creates a guest user for authentication in your Azure AD organization. These self-service products block directory deletions until the products are fully deleted from the organization, to avoid data loss. They can be deleted only by the Azure AD admin whether the user signed up individually or was assigned the product.
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/add-users-administrator.md
To add B2B collaboration users to the directory, follow these steps:
- **Email address (required)**. The email address of the guest user. - **Personal message (optional)** Include a personal welcome message to the guest user. - **Groups**: You can add the guest user to one or more existing groups, or you can do it later.
- - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role by selecting **User** next to **Roles**. [Learn more](/azure/role-based-access-control/role-assignments-external-users) about Azure roles for external guest users.
+ - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role by selecting **User** next to **Roles**. [Learn more](../../role-based-access-control/role-assignments-external-users.md) about Azure roles for external guest users.
> [!NOTE] > Group email addresses arenΓÇÖt supported; enter the email address for an individual. Also, some email providers allow users to add a plus symbol (+) and additional text to their email addresses to help with things like inbox filtering. However, Azure AD doesnΓÇÖt currently support plus symbols in email addresses. To avoid delivery issues, omit the plus symbol and any characters following it up to the @ symbol.
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
When you add a guest user to your directory by [using the Azure portal](./b2b-qu
## Redemption limitation with conflicting Contact object Sometimes the invited external guest user's email may conflict with an existing [Contact object](/graph/api/resources/contact?view=graph-rest-1.0&preserve-view=true), resulting in the guest user being created without a proxyAddress. This is a known limitation that prevents guest users from: -- Redeeming an invitation through a direct link using [SAML/WS-Fed IdP](/azure/active-directory/external-identities/direct-federation), [Microsoft Accounts](/azure/active-directory/external-identities/microsoft-account), [Google Federation](/azure/active-directory/external-identities/google-federation), or [Email One-Time Passcode](/azure/active-directory/external-identities/one-time-passcode) accounts. -- Redeeming an invitation through an invitation email redemption link using [SAML/WS-Fed IdP](/azure/active-directory/external-identities/direct-federation) and [Email One-Time Passcode](/azure/active-directory/external-identities/one-time-passcode) accounts.-- Signing back into an application after redemption using [SAML/WS-Fed IdP](/azure/active-directory/external-identities/direct-federation) and [Google Federation](/azure/active-directory/external-identities/google-federation) accounts.
+- Redeeming an invitation through a direct link using [SAML/WS-Fed IdP](./direct-federation.md), [Microsoft Accounts](./microsoft-account.md), [Google Federation](./google-federation.md), or [Email One-Time Passcode](./one-time-passcode.md) accounts.
+- Redeeming an invitation through an invitation email redemption link using [SAML/WS-Fed IdP](./direct-federation.md) and [Email One-Time Passcode](./one-time-passcode.md) accounts.
+- Signing back into an application after redemption using [SAML/WS-Fed IdP](./direct-federation.md) and [Google Federation](./google-federation.md) accounts.
To unblock users who can't redeem an invitation due to a conflicting [Contact object](/graph/api/resources/contact?view=graph-rest-1.0&preserve-view=true), follow these steps: 1. Delete the conflicting Contact object.
active-directory Azure Active Directory B2c Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/azure-active-directory-b2c-deployment-plans.md
This phase includes the following capabilities:
| Monitoring |[Monitor Azure AD B2C with Azure Monitor](https://docs.microsoft.com/azure/active-directory-b2c/azure-monitor). Watch [this video](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1)| | Auditing and Logging | [Access and review audit logs](https://docs.microsoft.com/azure/active-directory-b2c/view-audit-logs)
+## More information
+
+To accelerate Azure AD B2C deployments and monitor the service at scale, see these articles:
+
+- [Manage Azure AD B2C with Microsoft Graph](../../active-directory-b2c/microsoft-graph-get-started.md)
+
+- [Manage Azure AD B2C user accounts with Microsoft Graph](../../active-directory-b2c/microsoft-graph-operations.md)
+
+- [Deploy custom policies with Azure Pipelines](../../active-directory-b2c/deploy-custom-policies-devops.md)
+
+- [Manage Azure AD B2C custom policies with Azure PowerShell](../../active-directory-b2c/manage-custom-policies-powershell.md)
+
+- [Monitor Azure AD B2C with Azure Monitor](../../active-directory-b2c/azure-monitor.md)
+ ## Next steps - [Azure AD B2C best practices](https://docs.microsoft.com/azure/active-directory-b2c/best-practices)
active-directory Entitlement Management Delegate Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate-catalog.md
na
ms.devlang: na Previously updated : 06/18/2020 Last updated : 07/6/2021
To allow delegated roles, such as catalog creators and access package managers,
![Azure AD user settings - Administration portal](./media/entitlement-management-delegate-catalog/user-settings.png)
+## Manage role assignments programmatically (preview)
+
+You can also view and update catalog creators and entitlement management catalog-specific role assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the Graph API to [list the role definitions](/graph/api/rbacapplication-list-roledefinitions?view=graph-rest-beta&preserve-view=true) of entitlement management, and [list role assignments](/graph/api/rbacapplication-list-roleassignments?view=graph-rest-beta&preserve-view=true) to those role definitions.
+
+To retrieve a list of the users and groups assigned to the catalog creators role (the role with definition ID `ba92d953-d8e0-4e39-a797-0cbedb0a89e8`), use the Graph query:
+
+```http
+GET https://graph.microsoft.com/beta/roleManagement/entitlementManagement/roleAssignments?$filter=roleDefinitionId eq 'ba92d953-d8e0-4e39-a797-0cbedb0a89e8'&$expand=principal
+```
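The query above can also be constructed programmatically. A minimal sketch (Python for illustration; it only builds the request URL, and assumes you separately obtain an access token to call Microsoft Graph):

```python
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/beta/roleManagement/entitlementManagement"
# Catalog creators role definition ID, from the text above.
CATALOG_CREATOR_ROLE_ID = "ba92d953-d8e0-4e39-a797-0cbedb0a89e8"

def role_assignments_url(role_definition_id: str) -> str:
    """Build the roleAssignments query, expanding each assignment's principal."""
    filter_expr = quote(f"roleDefinitionId eq '{role_definition_id}'")
    return f"{GRAPH_BASE}/roleAssignments?$filter={filter_expr}&$expand=principal"

print(role_assignments_url(CATALOG_CREATOR_ROLE_ID))
```

Send the resulting URL as a GET request with a bearer token carrying the `EntitlementManagement.ReadWrite.All` delegated permission, as described above.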
++ ## Next steps - [Create and manage a catalog of resources](entitlement-management-catalog-create.md) - [Delegate access governance to access package managers](entitlement-management-delegate-managers.md)
+- [Delegate access governance to resource owners](entitlement-management-delegate.md)
active-directory Entitlement Management Delegate Managers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate-managers.md
This video provides an overview of how to delegate access governance from catalo
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE3Lq08]
+In addition to the catalog owner and access package manager roles, you can also add users to the catalog reader role, which provides view-only access to the catalog, or to the access package assignment manager role, which enables the users to change assignments but not access packages or policies.
+ ## As a catalog owner, delegate to an access package manager Follow these steps to assign a user to the access package manager role:
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate.md
na
ms.devlang: na Previously updated : 12/23/2020 Last updated : 7/6/2021
After delegation, the marketing department might have roles similar to the follo
Entitlement management has the following roles that are specific to entitlement management.
-| Entitlement management role | Description |
-| | |
-| Catalog creator | Create and manage catalogs. Typically an IT administrator who isn't a Global administrator, or a resource owner for a collection of resources. The person that creates a catalog automatically becomes the catalog's first catalog owner, and can add more catalog owners. A catalog creator can't manage or see catalogs that they don't own and can't add resources they don't own to a catalog. If the catalog creator needs to manage another catalog or add resources they don't own, they can request to be a co-owner of that catalog or resource. |
-| Catalog owner | Edit and manage existing catalogs. Typically an IT administrator or resource owners, or a user who the catalog owner has chosen. |
-| Access package manager | Edit and manage all existing access packages within a catalog. |
-| Access package assignment manager | Edit and manage all existing access packages' assignments. |
+| Entitlement management role | Role definition ID | Description |
+| | | -- |
+| Catalog creator | `ba92d953-d8e0-4e39-a797-0cbedb0a89e8` | Create and manage catalogs. Typically an IT administrator who isn't a Global administrator, or a resource owner for a collection of resources. The person that creates a catalog automatically becomes the catalog's first catalog owner, and can add more catalog owners. A catalog creator can't manage or see catalogs that they don't own and can't add resources they don't own to a catalog. If the catalog creator needs to manage another catalog or add resources they don't own, they can request to be a co-owner of that catalog or resource. |
+| Catalog owner | `ae79f266-94d4-4dab-b730-feca7e132178` | Edit and manage existing catalogs. Typically an IT administrator or resource owners, or a user who the catalog owner has chosen. |
+| Catalog reader | `44272f93-9762-48e8-af59-1b5351b1d6b3` | View existing access packages within a catalog. |
+| Access package manager | `7f480852-ebdc-47d4-87de-0d8498384a83` | Edit and manage all existing access packages within a catalog. |
+| Access package assignment manager | `e2182095-804a-4656-ae11-64734e9b7ae5` | Edit and manage all existing access packages' assignments. |
Also, the chosen approver and a requestor of an access package have rights, although they're not roles.
For a user who isn't a global administrator, to add groups, applications, or Sha
To determine the least privileged role for a task, you can also reference [Administrator roles by admin task in Azure Active Directory](../roles/delegate-by-task.md#entitlement-management).
+## Manage role assignments programmatically (preview)
+
+You can also view and update catalog creators and entitlement management catalog-specific role assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the Graph API to [list the role definitions](/graph/api/rbacapplication-list-roledefinitions?view=graph-rest-beta&preserve-view=true) of entitlement management, and [list role assignments](/graph/api/rbacapplication-list-roleassignments?view=graph-rest-beta&preserve-view=true) to those role definitions.
+
+For example, to view the entitlement management-specific roles which a particular user or group has been assigned, use the Graph query to list role assignments, and provide the user or group's ID as the value of the `principalId` query filter, as in
+
+```http
+GET https://graph.microsoft.com/beta/roleManagement/entitlementManagement/roleAssignments?$filter=principalId eq '10850a21-5283-41a6-9df3-3d90051dd111'&$expand=roleDefinition&$select=id,appScopeId,roleDefinition
+```
+
+For a role that is specific to a catalog, the `appScopeId` in the response indicates the catalog in which the user is assigned a role. Note that this response only retrieves explicit assignments of that principal to a role in entitlement management; it does not return results for a user who has access rights via a directory role, or through membership in a group assigned to a role.
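Given a parsed roleAssignments response of the kind this query returns, the catalog-specific assignments can be grouped by their `appScopeId`. A sketch over a hypothetical response body (Python for illustration; the scope values shown in the test data are illustrative):

```python
from collections import defaultdict

def assignments_by_scope(response: dict) -> dict:
    """Group role-assignment entries in a Graph response by appScopeId.

    Entries whose appScopeId is '/' (or missing) are tenant-wide rather
    than scoped to a single catalog.
    """
    grouped = defaultdict(list)
    for assignment in response.get("value", []):
        scope = assignment.get("appScopeId") or "/"
        grouped[scope].append(assignment)
    return dict(grouped)
```

Grouping this way makes it easy to see, per catalog, which principals hold explicit entitlement management roles.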
++ ## Next steps - [Delegate access governance to catalog creators](entitlement-management-delegate-catalog.md)
+- [Delegate access governance to access package managers](entitlement-management-delegate-managers.md)
- [Create and manage a catalog of resources](entitlement-management-catalog-create.md)
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
Check out the following related articles:
* [Using Azure AD Connect Health for Sync](how-to-connect-health-sync.md) * [Using Azure AD Connect Health with Azure AD DS](how-to-connect-health-adds.md) * [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
-* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
+* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
active-directory How To Connect Monitor Federation Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-monitor-federation-changes.md
To monitor the trust relationship, we recommend you set up alerts to be notified
Follow these steps to set up alerts to monitor the trust relationship: 1. [Configure Azure AD audit logs](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) to flow to an Azure Log Analytics Workspace.
-2. [Create an alert rule](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-log) that triggers based on Azure AD log query.
-3. [Add an action group](https://docs.microsoft.com/azure/azure-monitor/alerts/action-groups) to the alert rule that gets notified when the alert condition is met.
+2. [Create an alert rule](../../azure-monitor/alerts/alerts-log.md) that triggers based on Azure AD log query.
+3. [Add an action group](../../azure-monitor/alerts/action-groups.md) to the alert rule that gets notified when the alert condition is met.
After the environment is configured, the data flows as follows:
After the environment is configured, the data flows as follows:
## Next steps - [Integrate Azure AD logs with Azure Monitor logs](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)-- [Create, view, and manage log alerts using Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-log)
+- [Create, view, and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md)
- [Manage AD FS trust with Azure AD using Azure AD Connect](how-to-connect-azure-ad-trust.md)-- [Best practices for securing Active Directory Federation Services](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs)
+- [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs)
active-directory How To Connect Single Object Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-single-object-sync.md
The HTML report has the following:
In order to use the Single Object Sync tool, you will need to use the following: - 2021 March release ([1.6.4.0](reference-connect-version-history.md#1640)) of Azure AD Connect or later.
+ - [PowerShell 5.0](/powershell/scripting/windows-powershell/whats-new/what-s-new-in-windows-powershell-50?view=powershell-7.1)
### Run the Single Object Sync tool
active-directory How To Connect Sync Feature Preferreddatalocation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-preferreddatalocation.md
The purpose of this topic is to walk you through how to configure the attribute for preferred data location in Azure Active Directory (Azure AD) Connect sync. When someone uses Multi-Geo capabilities in Microsoft 365, you use this attribute to designate the geo-location of the userΓÇÖs Microsoft 365 data. (The terms *region* and *geo* are used interchangeably.) ## Supported Multi-Geo locations
-For a list of all geos supported by Azure AD Connect see [Microsoft 365 Multi-Geo availability](https://docs.microsoft.com/microsoft-365/enterprise/microsoft-365-multi-geo?view=o365-worldwide#microsoft-365-multi-geo-availability)
+For a list of all geos supported by Azure AD Connect see [Microsoft 365 Multi-Geo availability](/microsoft-365/enterprise/microsoft-365-multi-geo?view=o365-worldwide#microsoft-365-multi-geo-availability)
## Enable synchronization of preferred data location By default, Microsoft 365 resources for your users are located in the same geo as your Azure AD tenant. For example, if your tenant is located in North America, then the users' Exchange mailboxes are also located in North America. For a multinational organization, this might not be optimal.
By setting the attribute **preferredDataLocation**, you can define a user's geo.
> [!IMPORTANT] > Multi-Geo is currently available to customers with an active Enterprise Agreement and a minimum of 250 Microsoft 365 Services subscriptions. Please talk to your Microsoft representative for details. >
-> For a list of all geos supported by Azure AD Connect see [Microsoft 365 Multi-Geo availability](https://docs.microsoft.com/microsoft-365/enterprise/microsoft-365-multi-geo?view=o365-worldwide#microsoft-365-multi-geo-availability).
+> For a list of all geos supported by Azure AD Connect see [Microsoft 365 Multi-Geo availability](/microsoft-365/enterprise/microsoft-365-multi-geo?view=o365-worldwide#microsoft-365-multi-geo-availability).
Learn more about the configuration model in the sync engine:
Overview topics: * [Azure AD Connect sync: Understand and customize synchronization](how-to-connect-sync-whatis.md)
-* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
+* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
active-directory Reference Connect Adsync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-adsync.md
The following documentation provides reference information for the ADSync.psm1 P
``` #### CommonParameters
- This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+ This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
### INPUTS
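To illustrate how the common parameters apply to the ADSync cmdlets, the hedged sketch below uses `Get-ADSyncScheduler` (available on an Azure AD Connect server) with `-Verbose` and `-ErrorAction`; the same pattern works with any cmdlet in the module.

```powershell
# Run on the Azure AD Connect server, where the ADSync module is installed
Import-Module ADSync

# -Verbose streams diagnostic output; -ErrorAction Stop turns errors into
# terminating errors so they can be caught in a try/catch block
try {
    Get-ADSyncScheduler -Verbose -ErrorAction Stop
}
catch {
    Write-Warning "Could not read the sync scheduler: $($_.Exception.Message)"
}
```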
The following documentation provides reference information for the ADSync.psm1 P
## Next Steps - [What is hybrid identity?](./whatis-hybrid-identity.md)-- [What is Azure AD Connect and Connect Health?](whatis-azure-ad-connect.md)
+- [What is Azure AD Connect and Connect Health?](whatis-azure-ad-connect.md)
active-directory Reference Connect Device Disappearance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-device-disappearance.md
- Title: 'Understanding Azure AD Connect 1.4.xx.x and device disappearance | Microsoft Docs'
-description: This document describes an issue that arises with version 1.4.xx.x of Azure AD Connect
------ Previously updated : 06/30/2021----
-# Understanding Azure AD Connect 1.4.xx.x and device disappearance
-With version 1.4.xx.x of Azure AD Connect, some customers may see some or all of their Windows devices disappear from Azure AD. This is not a cause for concern, as these device identities are not used by Azure AD during Conditional Access authorization. This change won't delete any Windows devices that were correctly registered with Azure AD for Hybrid Azure AD Join.
-
-If the deletion of device objects in Azure AD exceeds the Export Deletion Threshold, we recommend that you allow the deletions to go through. For more information, see [How To: allow deletes to flow when they exceed the deletion threshold](how-to-connect-sync-feature-prevent-accidental-deletes.md).
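If you decide to let the deletions go through, one approach (run on the Azure AD Connect server; sketched here as an illustration, not a prescription) is to temporarily disable the export deletion threshold, let a sync cycle complete, and then re-enable it:

```powershell
# Run on the Azure AD Connect server; you are prompted for Azure AD credentials
Import-Module ADSync

# Temporarily lift the export deletion threshold
Disable-ADSyncExportDeletionThreshold

# Trigger a delta sync cycle so the pending deletions are exported
Start-ADSyncSyncCycle -PolicyType Delta

# Re-enable the threshold protection afterwards (500 is the default value)
Enable-ADSyncExportDeletionThreshold -DeletionThreshold 500
```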
-
-## Background
-Windows devices registered as Hybrid Azure AD Joined are represented in Azure AD as device objects. These device objects can be used for Conditional Access. Windows 10 devices are synced to the cloud via Azure AD Connect, down level Windows devices are registered directly using either AD FS or seamless single sign-on.
-
-## Windows 10 devices
-Only Windows 10 devices with a specific userCertificate attribute value configured by Hybrid Azure AD Join are supposed to be synced to the cloud by Azure AD Connect. In previous versions of Azure AD Connect this requirement was not rigorously enforced, resulting in unnecessary device objects in Azure AD. Such devices in Azure AD always stayed in the "pending" state because these devices were not intended to be registered with Azure AD.
-
-This version of Azure AD Connect will only sync Windows 10 devices that are correctly configured to be Hybrid Azure AD Joined. Windows 10 device objects without the Azure AD join specific userCertificate will be removed from Azure AD.
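To spot-check a single computer before upgrading, you can read the userCertificate attribute directly with the ActiveDirectory module; in this sketch, `PC01` is a placeholder computer name.

```powershell
# Requires the ActiveDirectory RSAT module; 'PC01' is a placeholder name
Import-Module ActiveDirectory

$computer = Get-ADComputer -Identity 'PC01' -Properties userCertificate

# A correctly Hybrid Azure AD Joined device has a certificate whose subject
# is CN={ObjectGUID}; an empty userCertificate means the device will be
# filtered out of the sync to Azure AD
$computer.userCertificate |
    ForEach-Object { [System.Security.Cryptography.X509Certificates.X509Certificate2] $_ } |
    Select-Object Subject, Thumbprint, NotAfter
```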
-
-## Down-Level Windows devices
-Azure AD Connect should never be syncing [down-level Windows devices](../devices/hybrid-azuread-join-plan.md#windows-down-level-devices). Any devices in Azure AD previously synced incorrectly will now be deleted from Azure AD. If Azure AD Connect is attempting to delete [down-level Windows devices](../devices/hybrid-azuread-join-plan.md#windows-down-level-devices), then the device is not the one that was created by the [Microsoft Workplace Join for non-Windows 10 computers MSI](https://www.microsoft.com/download/details.aspx?id=53554) and it is not able to be consumed by any other Azure AD feature.
-
-Some customers may need to revisit [How To: Plan your hybrid Azure Active Directory join implementation](../devices/hybrid-azuread-join-plan.md) to get their Windows devices registered correctly and ensure that such devices can fully participate in device-based Conditional Access.
-
-## How can I verify which devices are deleted with this update?
-
-To verify which devices are deleted, you can use the PowerShell script [below](#powershell-certificate-report-script).
--
-This script generates a report about certificates stored in Active Directory Computer objects, specifically, certificates issued by the Hybrid Azure AD join feature.
-It checks the certificates present in the UserCertificate property of a Computer object in AD and, for each non-expired certificate present, validates whether the certificate was issued for the Hybrid Azure AD join feature (that is, the Subject Name matches CN={ObjectGUID}).
-Previously, Azure AD Connect would synchronize to Azure AD any computer that contained at least one valid certificate, but starting with Azure AD Connect version 1.4, the synchronization engine can identify Hybrid Azure AD join certificates and will 'cloudfilter' the computer object from synchronizing to Azure AD unless there's a valid Hybrid Azure AD join certificate.
-Azure AD device objects that were already synchronized to Azure AD but do not have a valid Hybrid Azure AD join certificate will be deleted (CloudFiltered=TRUE) by the sync engine.
-
-## PowerShell certificate report script
--
- ```PowerShell
-<#
-
-Filename: Export-ADSyncToolsHybridAzureADjoinCertificateReport.ps1.
-
-DISCLAIMER:
-Copyright (c) Microsoft Corporation. All rights reserved. This script is made available to you without any express, implied or statutory warranty, not even the implied warranty of merchantability or fitness for a particular purpose, or the warranty of title or non-infringement. The entire risk of the use or the results from the use of this script remains with you.
-.Synopsis
-This script generates a report about certificates stored in Active Directory Computer objects, specifically,
-certificates issued by the Hybrid Azure AD join feature.
-.DESCRIPTION
-It checks the certificates present in the UserCertificate property of a Computer object in AD and, for each
-non-expired certificate present, validates if the certificate was issued for the Hybrid Azure AD join feature
-(i.e. Subject Name matches CN={ObjectGUID}).
-Before, Azure AD Connect would synchronize to Azure AD any Computer that contained at least one valid
-certificate but starting on Azure AD Connect version 1.4, the sync engine can identify Hybrid
-Azure AD join certificates and will 'cloudfilter' the computer object from synchronizing to Azure AD unless
-there's a valid Hybrid Azure AD join certificate.
-Azure AD Device objects that were already synchronized to AD but do not have a valid Hybrid Azure AD join
-certificate will be deleted (CloudFiltered=TRUE) by the sync engine.
-.EXAMPLE
-.\Export-ADSyncToolsHybridAzureADjoinCertificateReport.ps1 -DN 'CN=Computer1,OU=SYNC,DC=Fabrikam,DC=com'
-.EXAMPLE
-.\Export-ADSyncToolsHybridAzureADjoinCertificateReport.ps1 -OU 'OU=SYNC,DC=Fabrikam,DC=com' -Filename "MyHybridAzureADjoinReport.csv" -Verbose
-
-#>
- [CmdletBinding()]
- Param
- (
- # Computer DistinguishedName
- [Parameter(ParameterSetName='SingleObject',
- Mandatory=$true,
- ValueFromPipelineByPropertyName=$true,
- Position=0)]
- [String]
- $DN,
-
- # AD OrganizationalUnit
- [Parameter(ParameterSetName='MultipleObjects',
- Mandatory=$true,
- ValueFromPipelineByPropertyName=$true,
- Position=0)]
- [String]
- $OU,
-
- # Output CSV filename (optional)
- [Parameter(Mandatory=$false,
- ValueFromPipelineByPropertyName=$false,
- Position=1)]
- [String]
- $Filename
-
- )
-
- # Generate Output filename if not provided
- If ($Filename -eq "")
- {
- $Filename = [string] "$([string] $(Get-Date -Format yyyyMMddHHmmss))_ADSyncAADHybridJoinCertificateReport.csv"
- }
- Write-Verbose "Output filename: '$Filename'"
-
- # Read AD object(s)
- If ($PSCmdlet.ParameterSetName -eq 'SingleObject')
- {
- $directoryObjs = @(Get-ADObject $DN -Properties UserCertificate)
- Write-Verbose "Starting report for a single object '$DN'"
- }
- Else
- {
- $directoryObjs = Get-ADObject -Filter { ObjectClass -like 'computer' } -SearchBase $OU -Properties UserCertificate
- Write-Verbose "Starting report for $($directoryObjs.Count) computer objects in OU '$OU'"
- }
-
- Write-Host "Processing $($directoryObjs.Count) directory object(s). Please wait..."
- # Check Certificates on each AD Object
- $results = @()
- ForEach ($obj in $directoryObjs)
- {
- # Read UserCertificate multi-value property
- $objDN = [string] $obj.DistinguishedName
- $objectGuid = [string] ($obj.ObjectGUID).Guid
- $userCertificateList = @($obj.UserCertificate)
- $validEntries = @()
- $totalEntriesCount = $userCertificateList.Count
- Write-verbose "'$objDN' ObjectGUID: $objectGuid"
- Write-verbose "'$objDN' has $totalEntriesCount entries in UserCertificate property."
- If ($totalEntriesCount -eq 0)
- {
- Write-verbose "'$objDN' has no Certificates - Skipped."
- Continue
- }
-
- # Check each UserCertificate entry and build array of valid certs
- ForEach($entry in $userCertificateList)
- {
- Try
- {
- $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2] $entry
- }
- Catch
- {
- Write-verbose "'$objDN' has an invalid Certificate!"
- Continue
- }
- Write-verbose "'$objDN' has a Certificate with Subject: $($cert.Subject); Thumbprint:$($cert.Thumbprint)."
- $validEntries += $cert
-
- }
-
- $validEntriesCount = $validEntries.Count
- Write-verbose "'$objDN' has a total of $validEntriesCount certificates (shown above)."
-
- # Get non-expired Certs (Valid Certificates)
- $validCerts = @($validEntries | Where-Object {$_.NotAfter -ge (Get-Date)})
- $validCertsCount = $validCerts.Count
- Write-verbose "'$objDN' has $validCertsCount valid certificates (not-expired)."
-
- # Check for AAD Hybrid Join Certificates
- $hybridJoinCerts = @()
- $hybridJoinCertsThumbprints = [string] "|"
- ForEach ($cert in $validCerts)
- {
- $certSubjectName = $cert.Subject
- If ($certSubjectName.StartsWith($("CN=$objectGuid")) -or $certSubjectName.StartsWith($("CN={$objectGuid}")))
- {
- $hybridJoinCerts += $cert
- $hybridJoinCertsThumbprints += [string] $($cert.Thumbprint) + '|'
- }
- }
-
- $hybridJoinCertsCount = $hybridJoinCerts.Count
- if ($hybridJoinCertsCount -gt 0)
- {
- $cloudFiltered = 'FALSE'
- Write-verbose "'$objDN' has $hybridJoinCertsCount AAD Hybrid Join Certificates with Thumbprints: $hybridJoinCertsThumbprints (cloudFiltered=FALSE)"
- }
- Else
- {
- $cloudFiltered = 'TRUE'
- Write-verbose "'$objDN' has no AAD Hybrid Join Certificates (cloudFiltered=TRUE)."
- }
-
- # Save results
- $r = "" | Select ObjectDN, ObjectGUID, TotalEntriesCount, CertsCount, ValidCertsCount, HybridJoinCertsCount, CloudFiltered
- $r.ObjectDN = $objDN
- $r.ObjectGUID = $objectGuid
- $r.TotalEntriesCount = $totalEntriesCount
- $r.CertsCount = $validEntriesCount
- $r.ValidCertsCount = $validCertsCount
- $r.HybridJoinCertsCount = $hybridJoinCertsCount
- $r.CloudFiltered = $cloudFiltered
- $results += $r
- }
-
- # Export results to CSV
- Try
- {
- $results | Export-Csv $Filename -NoTypeInformation -Delimiter ';'
- Write-Host "Exported Hybrid Azure AD Domain Join Certificate Report to '$Filename'.`n"
- }
- Catch
- {
- Throw "There was an error saving the file '$Filename': $($_.Exception.Message)"
- }
-
- ```
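After running the report, a quick way to list the computer objects that the sync engine will filter out is to load the CSV and select the rows where CloudFiltered is TRUE. The filename below is a placeholder; note that the script writes the CSV with a semicolon delimiter.

```powershell
# The filename below is a placeholder; use the CSV the script produced
Import-Csv '.\20210716_ADSyncAADHybridJoinCertificateReport.csv' -Delimiter ';' |
    Where-Object { $_.CloudFiltered -eq 'TRUE' } |
    Select-Object ObjectDN, ValidCertsCount, HybridJoinCertsCount
```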
-
-## Next Steps
-- [Azure AD Connect Version history](reference-connect-version-history.md)
active-directory Configure Linked Sign On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-linked-sign-on.md
The **Linked** option doesn't provide sign-on functionality through Azure AD. Th
> > If the application was registered using **App registrations** then the single sign-on capability is set up to use OIDC OAuth by default. In this case, the **Single sign-on** option won't show in the navigation under **Enterprise applications**. When you use **App registrations** to add your custom app, you configure options in the manifest file. To learn more about the manifest file, see [Azure Active Directory app manifest](../develop/reference-app-manifest.md). To learn more about SSO standards, see [Authentication and authorization using Microsoft identity platform](../develop/authentication-vs-authorization.md#authentication-and-authorization-using-the-microsoft-identity-platform). >
-> Other scenarios where **Single sign-on** will be missing from the navigation include when an application is hosted in another tenant or if your account does not have the required permissions (Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal). Permissions can also cause a scenario where you can open **Single sign-on** but won't be able to save. To learn more about Azure AD administrative roles, see [Azure AD built-in roles](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles).
+> Other scenarios where **Single sign-on** will be missing from the navigation include when an application is hosted in another tenant or if your account does not have the required permissions (Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal). Permissions can also cause a scenario where you can open **Single sign-on** but won't be able to save. To learn more about Azure AD administrative roles, see [Azure AD built-in roles](../roles/permissions-reference.md).
### Configure link
After you configure an app, assign users and groups to it. When you assign users
## Next steps - [Assign users or groups to the application](./assign-user-or-group-access-portal.md)-- [Configure automatic user account provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+- [Configure automatic user account provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Managed identity type | All Generally Available<br>Global Azure Regions | Azure
Refer to the following list to configure managed identity for Azure Digital Twins (in regions where available): -- [Azure portal](../../digital-twins/how-to-enable-managed-identities-portal.md)
+- [Azure portal](../../digital-twins/how-to-route-with-managed-identity.md)
### Azure Event Grid
Managed identity type | All Generally Available<br>Global Azure Regions | Azure
> You can use Managed Identities to authenticate an [Azure Stream analytics job to Power BI](../../stream-analytics/powerbi-output-managed-identity.md).
-[check]: media/services-support-managed-identities/check.png "Available"
+[check]: media/services-support-managed-identities/check.png "Available"
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
Previously updated : 06/03/2021 Last updated : 07/14/2021
And, you can choose one of these **active** assignment duration options:
> [!NOTE] > All assignments that have a specified end date can be renewed by Global admins and Privileged role admins. Also, users can initiate self-service requests to [extend or renew role assignments](pim-resource-roles-renew-extend.md).
-## Require multi-factor authentication
+## Require multifactor authentication
-Privileged Identity Management provides optional enforcement of Azure AD Multi-Factor Authentication for two distinct scenarios.
+Privileged Identity Management provides enforcement of Azure AD Multi-Factor Authentication on activation and on active assignment.
-### Require Multi-Factor Authentication on active assignment
+### On activation
-In some cases, you might want to assign a user to a role for a short duration (one day, for example). In this case, the assigned users don't need to request activation. In this scenario, Privileged Identity Management can't enforce multi-factor authentication when the user uses their role assignment because they are already active in the role from the time that it is assigned.
+You can require users who are eligible for a role to prove who they are by using Azure AD Multi-Factor Authentication before they can activate. Multifactor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised.
-To ensure that the administrator fulfilling the assignment is who they say they are, you can enforce multi-factor authentication on active assignment by checking the **Require Multi-Factor Authentication on active assignment** box.
+To require multifactor authentication to activate the role assignment, select the **On activation, require Azure MFA** option in the Activation tab of **Edit role setting**.
-### Require Multi-Factor Authentication on activation
+### On active assignment
-You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised.
+In some cases, you might want to assign a user to a role for a short duration (one day, for example). In this case, the assigned users don't need to request activation. In this scenario, Privileged Identity Management can't enforce multifactor authentication when the user uses their role assignment because they are already active in the role from the time that it is assigned.
-To require multi-factor authentication before activation, check the **Require Multi-Factor Authentication on activation** box in the Assignment tab of **Edit role setting**.
+To require multifactor authentication when the assignment is active, select the **Require Azure Multi-Factor Authentication on active assignment** option in the Assignment tab of **Edit role setting**.
-For more information, see [Multi-factor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
+For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
## Activation maximum duration
If setting multiple approvers, approval completes as soon as one of them approve
![Select a user or group pane to select approvers](./media/pim-resource-roles-configure-role-settings/resources-role-settings-select-approvers.png)
-1. Select at least one user and then click **Select**. You must select at least one approver. There are no default approvers.
+1. Select at least one user and then click **Select**. You must select at least one approver; there are no default approvers.
Your selections will appear in the list of selected approvers.
active-directory Pim How To Start Security Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-start-security-review.md
Previously updated : 05/28/2021 Last updated : 7/14/2021
This article describes how to create one or more access reviews for privileged A
> [!Note] > Currently, you can scope an access review to service principals with access to Azure AD and Azure resource roles (Preview) with an Azure Active Directory Premium P2 edition active in your tenant. The licensing model for service principals will be finalized for general availability of this feature and additional licenses may be required.
-## Prerequisites
+## Prerequisite role
[Global Administrator](../roles/permissions-reference.md#global-administrator)
This article describes how to create one or more access reviews for privileged A
4. Select **Azure AD roles** again under **Manage**.
-5. Under Manage, select **Access reviews**, and then select **New**.
+5. Under Manage, select **Access reviews**, and then select **New** to create a new access review.
- ![Azure AD roles - Access reviews list showing the status of all reviews](./media/pim-how-to-start-security-review/access-reviews.png)
+ <kbd> ![Azure AD roles - Access reviews list showing the status of all reviews](./media/pim-how-to-start-security-review/access-reviews.png) </kbd>
-6. Click **New** to create a new access review.
+6. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers.
-7. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers.
+ <kbd> ![Create an access review - Review name and description](./media/pim-how-to-start-security-review/name-description.png) </kbd>
- ![Create an access review - Review name and description](./media/pim-how-to-start-security-review/name-description.png)
+7. Set the **Start date**. By default, an access review occurs once, starts the same time it's created, and it ends in one month. You can change the start and end dates to have an access review start in the future and last however many days you want.
-8. Set the **Start date**. By default, an access review occurs once, starts the same time it's created, and it ends in one month. You can change the start and end dates to have an access review start in the future and last however many days you want.
+ <kbd> ![Start date, frequency, duration, end, number of times, and end date](./media/pim-how-to-start-security-review/start-end-dates.png) </kbd>
- ![Start date, frequency, duration, end, number of times, and end date](./media/pim-how-to-start-security-review/start-end-dates.png)
+8. To make the access review recurring, change the **Frequency** setting from **One time** to **Weekly**, **Monthly**, **Quarterly**, **Annually**, or **Semi-annually**. Use the **Duration** slider or text box to define how many days each review of the recurring series will be open for input from reviewers. For example, the maximum duration that you can set for a monthly review is 27 days, to avoid overlapping reviews.
-9. To make the access review recurring, change the **Frequency** setting from **One time** to **Weekly**, **Monthly**, **Quarterly**, **Annually**, or **Semi-annually**. Use the **Duration** slider or text box to define how many days each review of the recurring series will be open for input from reviewers. For example, the maximum duration that you can set for a monthly review is 27 days, to avoid overlapping reviews.
+9. Use the **End** setting to specify how to end the recurring access review series. The series can end in three ways: it can run continuously to start reviews indefinitely, end by a specific date, or end after a defined number of occurrences has been completed. You, another User administrator, or another Global administrator can stop the series after creation by changing the date in **Settings**, so that it ends on that date.
-10. Use the **End** setting to specify how to end the recurring access review series. The series can end in three ways: it runs continuously to start reviews indefinitely, until a specific date, or after a defined number of occurrences has been completed. You, another User administrator, or another Global administrator can stop the series after creation by changing the date in **Settings**, so that it ends on that date.
-11. In the **Users Scope** section, select the scope of the review. To review users and groups with access to the Azure AD role, select **Users and Groups**, or select **(Preview) Service Principals** to review the machine accounts with access to the Azure AD role.
+10. In the **Users Scope** section, select the scope of the review. To review users and groups with access to the Azure AD role, select **Users and Groups**, or select **(Preview) Service Principals** to review the machine accounts with access to the Azure AD role.
- When **Users and Groups** is selected, membership of groups assigned to the role will be reviewed as part of the access review. When **Service Principals** is selected, only those with direct membership (not via nested groups) will be reviewed.
- ![Users scope to review role membership of](./media/pim-how-to-start-security-review/users.png)
+ <kbd> ![Users scope to review role membership of](./media/pim-how-to-start-security-review/users.png) </kbd>
-12. Under **Review role membership**, select the privileged Azure AD roles to review.
+
+11. Under **Review role membership**, select the privileged Azure AD roles to review.
> [!NOTE] > Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
-13. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **(Preview) eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **(Preview) active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
+12. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **(Preview) eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **(Preview) active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
+
+ <kbd> ![Reviewers list of assignment types](./media/pim-how-to-start-security-review/assignment-type-select.png) </kbd>
- ![Reviewers list of assignment types](./media/pim-how-to-start-security-review/assignment-type-select.png)
14. In the **Reviewers** section, select one or more people to review all the users. Or you can select to have the members review their own access.
This article describes how to create one or more access reviews for privileged A
1. To specify what happens after a review completes, expand the **Upon completion settings** section.
- ![Upon completion settings to auto apply and should review not respond](./media/pim-how-to-start-security-review/upon-completion-settings.png)
+ <kbd> ![Upon completion settings to auto apply and should review not respond](./media/pim-how-to-start-security-review/upon-completion-settings.png) </kbd>
-1. If you want to automatically remove access for users that were denied, set **Auto apply results to resource** to **Enable**. If you want to manually apply the results when the review completes, set the switch to **Disable**.
+2. If you want to automatically remove access for users who were denied, set **Auto apply results to resource** to **Enable**. If you want to manually apply the results when the review completes, set the switch to **Disable**.
-1. Use the **Should reviewer not respond** list to specify what happens for users that are not reviewed by the reviewer within the review period. This setting does not impact users who have been reviewed by the reviewers manually. If the final reviewer's decision is Deny, then the user's access will be removed.
+3. Use the **If reviewers don't respond** list to specify what happens for users that are not reviewed by the reviewer within the review period. This setting does not impact users who have been reviewed by the reviewers manually. If the final reviewer's decision is Deny, then the user's access will be removed.
- **No change** - Leave user's access unchanged - **Remove access** - Remove user's access - **Approve access** - Approve user's access - **Take recommendations** - Take the system's recommendation on denying or approving the user's continued access
+
+4. Use the **Action to apply on denied guest users** list to specify what happens for guest users that are denied:
+
+ <kbd> ![Upon completion settings - Action to apply on denied guest users](./media/pim-how-to-start-security-review/action-to-apply-on-denied-guest-users.png) </kbd>
+
-1. You can send notifications to additional users or groups (Preview) to receive review completion updates. This feature allows for stakeholders other than the review creator to be updated on the progress of the review. To use this feature, select **Select User(s) or Group(s)** and add an additional user or group upon you want to receive the status of completion.
+5. You can send notifications to additional users or groups (Preview) to receive review completion updates. This feature allows stakeholders other than the review creator to be updated on the progress of the review. To use this feature, select **Select User(s) or Group(s)** and add the additional users or groups that you want to receive the status of completion.
- ![Upon completion settings - Add additional users to receive notifications](./media/pim-how-to-start-security-review/upon-completion-settings-additional-receivers.png)
+ <kbd> ![Upon completion settings - Add additional users to receive notifications](./media/pim-how-to-start-security-review/upon-completion-settings-additional-receivers.png) </kbd>
### Advanced settings 1. To specify additional settings, expand the **Advanced settings** section.
- ![Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders](./media/pim-how-to-start-security-review/advanced-settings.png)
+ <kbd> ![Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders](./media/pim-how-to-start-security-review/advanced-settings.png) </kbd>
1. Set **Show recommendations** to **Enable** to show the reviewers the system recommendations based on the user's access information.
This article describes how to create one or more access reviews for privileged A
1. Set **Mail notifications** to **Enable** to have Azure AD send email notifications to reviewers when an access review starts, and to administrators when a review completes. 1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who have not completed their review.
-1. The content of the email sent to reviewers is autogenerated based on the review details, such as review name, resource name, due date, etc. If you need a way to communicate additional information such as additional instructions or contact information, you can specify these details in the **Additional content for reviewer email** which will be included in the invitation and reminder emails sent to assigned reviewers. The highlighted section below is where this information will be displayed.
+1. The content of the email sent to reviewers is auto-generated based on the review details, such as the review name, resource name, and due date. If you need to communicate additional information, such as additional instructions or contact information, you can specify these details in the **Additional content for reviewer email** setting, which will be included in the invitation and reminder emails sent to assigned reviewers. The highlighted section below is where this information will be displayed.
- ![Content of the email sent to reviewers with highlights](./media/pim-how-to-start-security-review/email-info.png)
+ ![Content of the email sent to reviewers with highlights](./media/pim-how-to-start-security-review/email-info.png)
## Start the access review Once you have specified the settings for an access review, select **Start**. The access review will appear in your list with an indicator of its status.
-![Access reviews list showing the status of started reviews](./media/pim-how-to-start-security-review/access-reviews-list.png)
+<kbd> ![Access reviews list showing the status of started reviews](./media/pim-how-to-start-security-review/access-reviews-list.png) </kbd>
By default, Azure AD sends an email to reviewers shortly after the review starts. If you choose not to have Azure AD send the email, be sure to inform the reviewers that an access review is waiting for them to complete. You can show them the instructions for how to [review access to Azure AD roles](pim-how-to-perform-security-review.md).
By default, Azure AD sends an email to reviewers shortly after the review starts
You can track the progress as the reviewers complete their reviews on the **Overview** page of the access review. No access rights are changed in the directory until the [review is completed](pim-how-to-complete-review.md).
-![Access reviews overview page showing the details of the review](./media/pim-how-to-start-security-review/access-review-overview.png)
+<kbd> ![Access reviews overview page showing the details of the review](./media/pim-how-to-start-security-review/access-review-overview.png) </kbd>
If this is a one-time review, then after the access review period is over or the administrator stops the access review, follow the steps in [Complete an access review of Azure AD roles](pim-how-to-complete-review.md) to see and apply the results.
active-directory Howto Integrate Activity Logs With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md
You can route audit activity logs and sign-in activity logs to Azure Monitor log
* **Risky users logs (public preview)**: With the [risky users logs](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users), you can monitor changes in user risk level and remediation activity. * **Risk detections logs (public preview)**: With the [risk detections logs](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor user's risk detections and analyze trends in risk activity detected in your organization.
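Once these log categories are routed to a Log Analytics workspace, they can be queried with Kusto. A minimal sketch, assuming the standard `SigninLogs` table created by the Azure AD diagnostic settings (column names as documented for that table):

```kusto
// Count sign-ins over the last day, grouped by result code and application
SigninLogs
| where TimeGenerated > ago(1d)
| summarize SignInCount = count() by ResultType, AppDisplayName
| order by SignInCount desc
```

A `ResultType` of `0` indicates a successful sign-in; nonzero values map to sign-in error codes.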
-> [!NOTE]
-> Azure AD B2C audit and sign-in activity logs are currently unsupported.
## Prerequisites
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-assign-role.md
$roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Helpdesk
### Create a role assignment ```powershell
-$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope '/' -RoleDefinitionId $roleDefinition.Id -PrincipalId $group.Id
+$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId '/' -RoleDefinitionId $roleDefinition.Id -PrincipalId $group.Id
``` ## Microsoft Graph API
POST https://graph.microsoft.com/beta/groups
{ "description": "This group is assigned to Helpdesk Administrator built-in role of Azure AD.", "displayName": "Contoso_Helpdesk_Administrators",
-"groupTypes": [
-"Unified"
-],
-"mailEnabled": true,
-"securityEnabled": true
+"groupTypes": [],
+"mailEnabled": false,
+"securityEnabled": true,
"mailNickname": "contosohelpdeskadministrators",
-"isAssignableToRole": true,
+"isAssignableToRole": true
} ``` ### Get the role definition ```
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions?$filter = displayName eq ‘Helpdesk Administrator’
+GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions?$filter=displayName eq 'Helpdesk Administrator'
``` ### Create the role assignment
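A minimal sketch of the request this section covers, assuming the role definition and group IDs retrieved in the previous steps (the ID values shown are placeholders, and the `@odata.type` is the documented beta type for this resource):

```http
POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
Content-Type: application/json

{
  "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
  "principalId": "<group-object-id>",
  "roleDefinitionId": "<role-definition-id>",
  "directoryScopeId": "/"
}
```

`directoryScopeId` of `/` assigns the role tenant-wide; a scoped assignment would use an administrative unit or object ID instead.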
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
Role-assignable groups are designed to help prevent potential breaches by having
If you do not want members of the group to have standing access to a role, you can use [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) to make a group eligible for a role assignment. Each member of the group is then eligible to activate the role assignment for a fixed time duration. > [!Note]
-> You must be using an updated version of PIM to be able to assign a Azure AD role to a group. You could be using an older version of PIM because your Azure AD organization leverages the PIM API. Send email to pim_preview@microsoft.com to move your organization and update your API. For more information, see [Azure AD roles and features in PIM](../privileged-identity-management/azure-ad-roles-features.md).
+> You must be using an updated version of PIM to be able to assign an Azure AD role to a group. You could be using an older version of PIM because your Azure AD organization leverages the PIM API. Send email to pim_preview@microsoft.com to move your organization and update your API. For more information, see [Azure AD roles and features in PIM](../privileged-identity-management/pim-configure.md).
## Scenarios not supported
Using this feature requires an Azure AD Premium P1 license. To also use Privileg
## Next steps - [Create a role-assignable group](groups-create-eligible.md)-- [Assign Azure AD roles to groups](groups-assign-role.md)
+- [Assign Azure AD roles to groups](groups-assign-role.md)
active-directory Groups Pim Eligible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-pim-eligible.md
This article describes how you can assign an Azure Active Directory (Azure AD) role to a group using Azure AD Privileged Identity Management (PIM). > [!NOTE]
-> You must be using the updated version of Privileged Identity Management to be able to assign a group to an Azure AD role using PIM. You might be on older version of PIM if your Azure AD organization leverages the Privileged Identity Management API. If so, please reach out to the alias pim_preview@microsoft.com to move your organization and update your API. Learn more at [Azure AD roles and features in PIM](../privileged-identity-management/azure-ad-roles-features.md).
+> You must be using the updated version of Privileged Identity Management to be able to assign a group to an Azure AD role using PIM. You might be on an older version of PIM if your Azure AD organization leverages the Privileged Identity Management API. If so, please reach out to the alias pim_preview@microsoft.com to move your organization and update your API. Learn more at [Azure AD roles and features in PIM](../privileged-identity-management/pim-configure.md).
## Prerequisites
https://graph.microsoft.com/beta/privilegedAccess/aadroles/roleAssignmentRequest
- [Use Azure AD groups to manage role assignments](groups-concept.md) - [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.yml) - [Configure Azure AD admin role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md)-- [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)
+- [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)
active-directory Askspoke Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/askspoke-provisioning-tutorial.md
# Tutorial: Configure askSpoke for automatic user provisioning
-This tutorial describes the steps you need to perform in both askSpoke and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [askSpoke](https://www.askspoke.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both askSpoke and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [askSpoke](https://www.askspoke.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both askSpoke and Azure
> - Remove users in askSpoke when they do not require access anymore > - Keep user attributes synchronized between Azure AD and askSpoke > - Provision groups and group memberships in askSpoke
-> - [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/askspoke-tutorial) to askSpoke (recommended)
+> - [Single sign-on](./askspoke-tutorial.md) to askSpoke (recommended)
## Prerequisites The scenario outlined in this tutorial assumes that you already have the following prerequisites: -- [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)-- A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+- [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+- A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
- A user account in askSpoke with admin permissions. ## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and askSpoke](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and askSpoke](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure askSpoke to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add askSpoke from the Azure AD application gallery
-Add askSpoke from the Azure AD application gallery to start managing provisioning to askSpoke. If you have previously setup askSpoke for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add askSpoke from the Azure AD application gallery to start managing provisioning to askSpoke. If you have previously set up askSpoke for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-- When assigning users and groups to askSpoke, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+- When assigning users and groups to askSpoke, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-- Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+- Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
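As a rough illustration of how an attribute-based scoping filter narrows the set of users that get provisioned, the sketch below evaluates a single rule against a list of directory users. The rule, attribute names, and user records are invented for this example; real scoping filters are configured in the portal as described in the linked tutorial.

```python
# Hypothetical sketch of attribute-based scoping: only users that satisfy
# the rule remain in scope for provisioning. The rule below (department
# EQUALS "Engineering") is an invented example, not a real portal export.
def in_scope(user: dict, attribute: str, operator: str, value: str) -> bool:
    """Return True if the user satisfies the scoping rule."""
    actual = user.get(attribute)
    if operator == "EQUALS":
        return actual == value
    if operator == "NOT EQUALS":
        return actual != value
    raise ValueError(f"unsupported operator: {operator}")

users = [
    {"userPrincipalName": "alice@contoso.com", "department": "Engineering"},
    {"userPrincipalName": "bob@contoso.com", "department": "Sales"},
]
scoped = [u for u in users if in_scope(u, "department", "EQUALS", "Engineering")]
# Only alice@contoso.com remains in scope for provisioning.
```

The portal supports a richer operator set than this two-operator sketch, but the shape of the evaluation is the same: each rule compares one user attribute against a configured value.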
## Step 5. Configure automatic user provisioning to askSpoke
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to askSpoke**.
-9. Review the user attributes that are synchronized from Azure AD to askSpoke in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in askSpoke for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the askSpoke API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to askSpoke in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in askSpoke for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the askSpoke API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
| Attribute | Type | Supported For Filtering |
|---|---|---|
| displayName | String | &check; |
| members | Reference | |
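When the matching target attribute changes, the target API must support SCIM filtering on the new attribute because the provisioning service looks users up with a filter query. As a hedged sketch (the base URL below is a placeholder, not askSpoke's real SCIM endpoint), the lookup request would be shaped like this:

```python
# Sketch of the SCIM filter query a provisioning service issues when
# matching users on a target attribute. The endpoint is a placeholder.
from urllib.parse import urlencode

base_url = "https://scim.example.com/scim/v2/Users"  # placeholder endpoint
matching_attribute = "userName"
matching_value = "alice@contoso.com"

query = urlencode({"filter": f'{matching_attribute} eq "{matching_value}"'})
request_url = f"{base_url}?{query}"
# The target API must be able to answer this filtered GET for matching to work.
```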
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for askSpoke, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-- [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+- [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-- [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+- [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Checkproof Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/checkproof-provisioning-tutorial.md
# Tutorial: Configure CheckProof for automatic user provisioning
-This tutorial describes the steps you need to perform in both CheckProof and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [CheckProof](https://checkproof.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both CheckProof and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [CheckProof](https://checkproof.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both CheckProof and Azu
> * Remove users in CheckProof when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and CheckProof
> * Provision groups and group memberships in CheckProof
-> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/checkproof-tutorial) to CheckProof (recommended)
+> * [Single sign-on](./checkproof-tutorial.md) to CheckProof (recommended)
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A CheckProof account with **SCIM Provisioning** function enabled.
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and CheckProof](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and CheckProof](../app-provisioning/customize-application-attributes.md).
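Deciding what data to map comes down to a transformation from an Azure AD user object into the SCIM attributes the target app expects. The sketch below is illustrative only (the exact mapping table is what you configure later in the Attribute-Mapping section; attribute names follow the SCIM core schema, and the sample values are invented):

```python
# Illustrative attribute mapping: Azure AD user fields transformed into
# SCIM core-schema attributes. The real mapping is configured in the
# portal; this only shows the shape of the transformation.
def map_user(ad_user: dict) -> dict:
    return {
        "userName": ad_user["userPrincipalName"],
        "displayName": ad_user["displayName"],
        "name": {
            "givenName": ad_user.get("givenName", ""),
            "familyName": ad_user.get("surname", ""),
        },
        "active": ad_user.get("accountEnabled", True),
    }

scim_user = map_user({
    "userPrincipalName": "alice@contoso.com",
    "displayName": "Alice Example",
    "givenName": "Alice",
    "surname": "Example",
    "accountEnabled": True,
})
```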
## Step 2. Configure CheckProof to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add CheckProof from the Azure AD application gallery
-Add CheckProof from the Azure AD application gallery to start managing provisioning to CheckProof. If you have previously setup CheckProof for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add CheckProof from the Azure AD application gallery to start managing provisioning to CheckProof. If you have previously set up CheckProof for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to CheckProof, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to CheckProof, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to CheckProof
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to CheckProof**.
-9. Review the user attributes that are synchronized from Azure AD to CheckProof in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in CheckProof for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the CheckProof API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to CheckProof in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in CheckProof for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the CheckProof API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported For Filtering|
|---|---|---|
|externalId|String| |
|members|Reference| |
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for CheckProof, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Cloud Academy Sso Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cloud-academy-sso-provisioning-tutorial.md
# Tutorial: Configure Cloud Academy - SSO for automatic user provisioning
-This tutorial describes the steps you need to perform in both Cloud Academy - SSO and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Cloud Academy - SSO](https://cloudacademy.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Cloud Academy - SSO and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Cloud Academy - SSO](https://cloudacademy.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both Cloud Academy - SS
> * Create users in Cloud Academy - SSO
> * Remove users in Cloud Academy - SSO when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and Cloud Academy - SSO
-> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/cloud-academy-sso-tutorial) to Cloud Academy - SSO (recommended)
+> * [Single sign-on](./cloud-academy-sso-tutorial.md) to Cloud Academy - SSO (recommended)
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A user account in Cloud Academy with an Administrator role in your company to activate the AD Integration and generate the API Key.
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and Cloud Academy - SSO](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Cloud Academy - SSO](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Cloud Academy - SSO to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add Cloud Academy - SSO from the Azure AD application gallery
-Add Cloud Academy - SSO from the Azure AD application gallery to start managing provisioning to Cloud Academy - SSO. If you have previously setup Cloud Academy - SSO for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add Cloud Academy - SSO from the Azure AD application gallery to start managing provisioning to Cloud Academy - SSO. If you have previously set up Cloud Academy - SSO for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Cloud Academy - SSO, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to Cloud Academy - SSO, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to Cloud Academy - SSO
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Cloud Academy - SSO**.
-9. Review the user attributes that are synchronized from Azure AD to Cloud Academy - SSO in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Cloud Academy - SSO for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Cloud Academy - SSO API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Cloud Academy - SSO in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Cloud Academy - SSO for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Cloud Academy - SSO API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported For Filtering|
|---|---|---|
|name.givenName|String| |
|name.familyName|String| |
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
11. To enable the Azure AD provisioning service for Cloud Academy - SSO, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Golinks Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/golinks-provisioning-tutorial.md
# Tutorial: Configure GoLinks for automatic user provisioning
-This tutorial describes the steps you need to perform in both GoLinks and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [GoLinks](https://www.golinks.io) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both GoLinks and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [GoLinks](https://www.golinks.io) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both GoLinks and Azure
> * Create users in GoLinks
> * Remove users in GoLinks when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and GoLinks
-> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/golinks-tutorial) to GoLinks (recommended)
+> * [Single sign-on](./golinks-tutorial.md) to GoLinks (recommended)
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
* A GoLinks tenant on the [Enterprise plan](https://www.golinks.io/pricing.php).
* A user account in [GoLinks](https://www.golinks.io) with admin access.
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and GoLinks](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and GoLinks](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure GoLinks to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
## Step 3. Add GoLinks from the Azure AD application gallery
-Add GoLinks from the Azure AD application gallery to start managing provisioning to GoLinks. If you have previously setup GoLinks for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add GoLinks from the Azure AD application gallery to start managing provisioning to GoLinks. If you have previously set up GoLinks for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to GoLinks, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add other roles.
+* When assigning users and groups to GoLinks, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
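The scoping behavior described above can be pictured as a per-user predicate: every clause of an attribute-based scoping filter must match before the provisioning service will create the user in the target app. The sketch below is illustrative only; the function and operator names are assumptions modeled on the portal's EQUALS-style clauses, not the service's actual implementation.

```python
# Hypothetical sketch of how an attribute-based scoping filter narrows
# which users are provisioned. Azure AD evaluates such clauses inside the
# provisioning service; these names and operators are illustrative.
def in_scope(user: dict, clauses: list) -> bool:
    """All clauses must match (logical AND) for the user to be in scope."""
    for attr, op, value in clauses:
        actual = user.get(attr, "")
        if op == "EQUALS" and actual != value:
            return False
        if op == "NOT EQUALS" and actual == value:
            return False
    return True

# Example: provision only users whose department attribute equals "Sales".
scoping_filter = [("department", "EQUALS", "Sales")]
users = [
    {"userName": "alice@contoso.com", "department": "Sales"},
    {"userName": "bob@contoso.com", "department": "Finance"},
]
provisioned = [u["userName"] for u in users if in_scope(u, scoping_filter)]
print(provisioned)  # only the Sales user is in scope
```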
## Step 5. Configure automatic user provisioning to GoLinks
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to GoLinks**.
-9. Review the user attributes that are synchronized from Azure AD to GoLinks in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in GoLinks for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the GoLinks API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to GoLinks in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in GoLinks for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the GoLinks API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported For Filtering|
||||
This section guides you through the steps to configure the Azure AD provisioning
|name.familyName|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
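The attribute mapping above translates into a SCIM 2.0 payload sent to the target app's `/Users` endpoint. The following is an illustrative sketch, not GoLinks' actual schema: the values are hypothetical, and only the `name.familyName` and enterprise-extension `department` attributes are taken from the table.

```python
import json

ENTERPRISE_EXT = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

# Sketch of a SCIM 2.0 user payload of the kind the provisioning service
# sends. Values and the exact attribute set are hypothetical; the
# enterprise-extension department attribute mirrors the mapping table.
user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        ENTERPRISE_EXT,
    ],
    "userName": "alice@contoso.com",  # a typical matching attribute
    "active": True,
    "name": {"givenName": "Alice", "familyName": "Smith"},
    ENTERPRISE_EXT: {"department": "Engineering"},
}

payload = json.dumps(user, indent=2)
print(payload)
```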
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
11. To enable the Azure AD provisioning service for GoLinks, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
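The provisioning logs above can also be read programmatically through the Microsoft Graph `auditLogs/provisioning` resource. The sketch below only builds the request URL; authentication is omitted, the user id is a placeholder, and the `$filter` clause is an assumption about which property to filter on, so treat it as illustrative.

```python
from urllib.parse import urlencode

# Documented Microsoft Graph endpoint for provisioning events.
GRAPH = "https://graph.microsoft.com/v1.0/auditLogs/provisioning"

def provisioning_log_query(user_id: str, top: int = 20) -> str:
    """Build a Graph query for recent provisioning events for one user.

    The sourceIdentity/id filter property is an assumption for this
    sketch; token acquisition and the HTTP call itself are omitted.
    """
    params = {
        "$filter": f"sourceIdentity/id eq '{user_id}'",
        "$top": str(top),
    }
    return f"{GRAPH}?{urlencode(params)}"

url = provisioning_log_query("00aa00aa-bb11-cc22-dd33-44ee44ee44ee")
print(url)
```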
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory H5mag Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/h5mag-provisioning-tutorial.md
# Tutorial: Configure H5mag for automatic user provisioning
-This tutorial describes the steps you need to perform in both H5mag and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [H5mag](https://www.h5mag.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both H5mag and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [H5mag](https://www.h5mag.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both H5mag and Azure Active Directory (Azure AD) to configure automatic user provisioning.
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
* A user account in [H5mag](https://account.h5mag.com) with an Enterprise license. If your account needs an upgrade to an Enterprise license, reach out to `support@h5mag.com`.
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and H5mag](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and H5mag](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure H5mag to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
## Step 3. Add H5mag from the Azure AD application gallery
-Add H5mag from the Azure AD application gallery to start managing provisioning to H5mag. If you have previously setup H5mag for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add H5mag from the Azure AD application gallery to start managing provisioning to H5mag. If you have previously set up H5mag for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to H5mag, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to H5mag, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to H5mag
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to H5mag**.
-9. Review the user attributes that are synchronized from Azure AD to H5mag in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in H5mag for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the H5mag API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to H5mag in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in H5mag for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the H5mag API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported For Filtering|
||||
This section guides you through the steps to configure the Azure AD provisioning
|timezone|String|
|userType|String|
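As step 9 above notes, the matching target attribute only works if the target API supports filtering on it: before an update, the provisioning service looks the user up with a SCIM filter expression. The sketch below builds such a lookup URL; the base URL is a placeholder, while the filter grammar follows standard SCIM 2.0.

```python
from urllib.parse import urlencode

# Sketch of the SCIM filter lookup a provisioning service performs to
# find an existing user by the matching attribute before an update.
# The base URL is a placeholder for illustration.
def matching_query(base_url: str, attribute: str, value: str) -> str:
    # SCIM string literals are double-quoted inside the filter expression.
    filter_expr = f'{attribute} eq "{value}"'
    return f"{base_url}/Users?{urlencode({'filter': filter_expr})}"

url = matching_query("https://scim.example.com/v2", "userName", "alice@contoso.com")
print(url)
```

If the chosen matching attribute cannot appear in such a filter on the target side, update operations cannot locate the existing account.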
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
11. To enable the Azure AD provisioning service for H5mag, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Logmein Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/logmein-provisioning-tutorial.md
# Tutorial: Configure LogMeIn for automatic user provisioning
-This tutorial describes the steps you need to perform in both LogMeIn and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [LogMeIn](https://www.logmein.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both LogMeIn and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [LogMeIn](https://www.logmein.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both LogMeIn and Azure Active Directory (Azure AD) to configure automatic user provisioning.
> * Remove users in LogMeIn when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and LogMeIn
> * Provision groups and group memberships in LogMeIn
-> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/logmein-tutorial) to LogMeIn (recommended)
+> * [Single sign-on](./logmein-tutorial.md) to LogMeIn (recommended)
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
* An organization created in the LogMeIn Organization Center with at least one verified domain.
* A user account in the LogMeIn Organization Center with [permission](https://support.goto.com/meeting/help/manage-organization-users-g2m710102) to configure provisioning (for example, organization administrator role with Read & Write permissions) as shown in Step 2.
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and LogMeIn](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and LogMeIn](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure LogMeIn to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
## Step 3. Add LogMeIn from the Azure AD application gallery
-Add LogMeIn from the Azure AD application gallery to start managing provisioning to LogMeIn. If you have previously setup LogMeIn for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add LogMeIn from the Azure AD application gallery to start managing provisioning to LogMeIn. If you have previously set up LogMeIn for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to LogMeIn, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to LogMeIn, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to LogMeIn
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to LogMeIn**.
-9. Review the user attributes that are synchronized from Azure AD to LogMeIn in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in LogMeIn for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the LogMeIn API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to LogMeIn in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in LogMeIn for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the LogMeIn API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|
|||
This section guides you through the steps to configure the Azure AD provisioning
|externalId|String|
|members|Reference|
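Because `members` is a reference attribute in the group mapping above, membership changes are pushed as SCIM 2.0 PatchOp requests rather than full group rewrites. The body below is an illustrative sketch of such a request; the member id is a hypothetical value, not an actual LogMeIn identifier.

```python
import json

# Illustrative SCIM 2.0 PATCH body for syncing group membership, matching
# the group mapping above where "members" is a reference attribute.
# The member id is a hypothetical placeholder.
patch = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {
            "op": "Add",
            "path": "members",
            "value": [{"value": "2819c223-7f76-453a-919d-413861904646"}],
        }
    ],
}
body = json.dumps(patch)
print(body)
```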
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for LogMeIn, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)

## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Looop Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/looop-provisioning-tutorial.md
Before configuring and enabling automatic user provisioning, you should decide w
### Important tips for assigning users to Looop
-* It is recommended that a single Azure AD user is assigned to Looop to test the automatic user provisioning configuration. Additional users and/or groups may be assigned later.
+* It is recommended that a single Azure AD user is assigned to Looop to test the automatic user provisioning configuration. More users and/or groups may be assigned later.
* When assigning a user to Looop, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning.
Before configuring and enabling automatic user provisioning, you should decide w
Before configuring Looop for automatic user provisioning with Azure AD, you will need to retrieve some provisioning information from Looop.
-1. Sign in to your [Looop Admin Console](https://app.looop.co/#/login) and select **Account**. Under **Account Settings** select **Authentication**.
+1. Sign in to your [Looop Admin Console](https://app.looop.co/#/login) and select **Account**. Under **Account Settings**, select **Authentication**.
- :::image type="content" source="media/looop-provisioning-tutorial/admin.png" alt-text="Screenshot of the Looop admin console. The Account tab is highlighted and open. Under Account settings, Authentication is highlighted." border="false":::
+ ![Looop Admin](media/looop-provisioning-tutorial/admin.png)
2. Generate a new token by clicking **Reset Token** under **SCIM Integration**.
- :::image type="content" source="media/looop-provisioning-tutorial/resettoken.png" alt-text="Screenshot of the S C I M integration section of a page in the Looop admin console. The Reset token button is highlighted." border="false":::
+ ![Looop Token](media/looop-provisioning-tutorial/resettoken.png)
3. Copy the **SCIM Endpoint** and the **Token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Looop application in the Azure portal.
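A quick sanity check of the two copied values is to build an authenticated request against the SCIM endpoint. The sketch below only constructs the request (it is not sent); the tenant URL and token are placeholders standing in for the values copied from the Looop admin console, and the `Bearer` scheme matches how Azure AD presents the Secret Token:

```python
import urllib.request

tenant_url = "https://scim.example.com/scim/v2"  # placeholder for the "SCIM Endpoint" value
secret_token = "PASTE-TOKEN-HERE"                # placeholder for the "Token" value

# Azure AD sends the Secret Token as a Bearer credential on every SCIM call.
req = urllib.request.Request(
    f"{tenant_url}/Users?count=1",
    headers={
        "Authorization": f"Bearer {secret_token}",
        "Accept": "application/scim+json",
    },
)
```

Sending this request with a real endpoint and token should return an HTTP 200 with a SCIM `ListResponse`; a 401 usually means the token was reset or mis-pasted.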
This section guides you through the steps to configure the Azure AD provisioning
|name.givenName|String|
|name.familyName|String|
|externalId|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|
|urn:ietf:params:scim:schemas:extension:Looop:2.0:User:area|String|
|urn:ietf:params:scim:schemas:extension:Looop:2.0:User:custom_1|String|
|urn:ietf:params:scim:schemas:extension:Looop:2.0:User:custom_2|String|
|urn:ietf:params:scim:schemas:extension:Looop:2.0:User:custom_3|String|
- |urn:ietf:params:scim:schemas:extension:Looop:2.0:User:department|String|
- |urn:ietf:params:scim:schemas:extension:Looop:2.0:User:employee_id|String|
|urn:ietf:params:scim:schemas:extension:Looop:2.0:User:location|String|
|urn:ietf:params:scim:schemas:extension:Looop:2.0:User:position|String|
|urn:ietf:params:scim:schemas:extension:Looop:2.0:User:startAt|String|
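To make the enterprise extension mapping concrete, here is an illustrative sketch of a SCIM user as Azure AD might send it to Looop with the extension attributes from the table above; all field values are invented for the example:

```python
import json

ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

# Example-only values: userName, names, externalId, and manager ID are made up.
user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE],
    "userName": "alice@contoso.com",
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "externalId": "alice-123",
    ENTERPRISE: {
        "department": "Engineering",
        "employeeNumber": "E-1001",
        "manager": {"value": "bob-456"},  # reference to the manager's SCIM id
    },
}
payload = json.dumps(user)
```

Note that the extension attributes are nested under the full URN key, not flattened onto the core user object.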
This operation starts the initial synchronization of all users and/or groups def
For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
-## Additional resources
+## Change log
+
+* 07/15/2021 - Enterprise extension user attributes **urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department**, **urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber** and **urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager** have been added.
+* 07/15/2021 - Custom extension user attributes **urn:ietf:params:scim:schemas:extension:Looop:2.0:User:department** and **urn:ietf:params:scim:schemas:extension:Looop:2.0:User:employee_id** have been removed.
+
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Secure Deliver Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/secure-deliver-provisioning-tutorial.md
# Tutorial: Configure SECURE DELIVER for automatic user provisioning
-This tutorial describes the steps you need to perform in both SECURE DELIVER and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [SECURE DELIVER](https://www.Contoso.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both SECURE DELIVER and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [SECURE DELIVER](https://www.Contoso.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both SECURE DELIVER and
> * Create users in SECURE DELIVER
> * Remove users in SECURE DELIVER when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and SECURE DELIVER
-> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/securedeliver-tutorial) to SECURE DELIVER (recommended)
+> * [Single sign-on](./securedeliver-tutorial.md) to SECURE DELIVER (recommended)
## Prerequisites

The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and SECURE DELIVER](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and SECURE DELIVER](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure SECURE DELIVER to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add SECURE DELIVER from the Azure AD application gallery
-Add SECURE DELIVER from the Azure AD application gallery to start managing provisioning to SECURE DELIVER. If you have previously setup SECURE DELIVER for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add SECURE DELIVER from the Azure AD application gallery to start managing provisioning to SECURE DELIVER. If you have previously set up SECURE DELIVER for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to SECURE DELIVER, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to SECURE DELIVER, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
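The attribute-based scoping described above can be sketched as a simple predicate over user attributes. This is a simplified stand-in for what the portal's scoping-filter rules express, not the service's real implementation; the rule, users, and attribute values are invented:

```python
def in_scope(user: dict, attribute: str, operator: str, value: str) -> bool:
    """Evaluate one scoping-filter clause against a user record."""
    actual = user.get(attribute)
    if operator == "EQUALS":
        return actual == value
    if operator == "NOT EQUALS":
        return actual != value
    raise ValueError(f"unsupported operator: {operator}")

users = [
    {"userName": "alice", "department": "Sales"},
    {"userName": "bob", "department": "Finance"},
]
# Only users matching the rule are handed to the provisioning connector.
scoped = [u["userName"] for u in users if in_scope(u, "department", "EQUALS", "Sales")]
```

With the rule `department EQUALS Sales`, only `alice` is provisioned; `bob` is skipped.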
## Step 5. Configure automatic user provisioning to SECURE DELIVER
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to SECURE DELIVER**.
-9. Review the user attributes that are synchronized from Azure AD to SECURE DELIVER in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SECURE DELIVER for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the SECURE DELIVER API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to SECURE DELIVER in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SECURE DELIVER for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the SECURE DELIVER API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported For Filtering|
||||
This section guides you through the steps to configure the Azure AD provisioning
|emails[type eq "work"].value|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
11. To enable the Azure AD provisioning service for SECURE DELIVER, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)

## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Sigma Computing Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sigma-computing-provisioning-tutorial.md
# Tutorial: Configure Sigma Computing for automatic user provisioning
-This tutorial describes the steps you need to perform in both Sigma Computing and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Sigma Computing](https://www.sigmacomputing.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Sigma Computing and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Sigma Computing](https://www.sigmacomputing.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both Sigma Computing an
> * Remove users in Sigma Computing when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and Sigma Computing
> * Provision groups and group memberships in Sigma Computing
-> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/sigma-computing-tutorial) to Sigma Computing (recommended)
+> * [Single sign-on](./sigma-computing-tutorial.md) to Sigma Computing (recommended)
## Prerequisites

The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* An admin account in your Sigma organization.
-* An existing [SSO](https://docs.microsoft.com/azure/active-directory/saas-apps/sigma-computing-tutorial) integration with Sigma Computing.
+* An existing [SSO](./sigma-computing-tutorial.md) integration with Sigma Computing.
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and Sigma Computing](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Sigma Computing](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Sigma Computing to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add Sigma Computing from the Azure AD application gallery
-Add Sigma Computing from the Azure AD application gallery to start managing provisioning to Sigma Computing. If you have previously setup Sigma Computing for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add Sigma Computing from the Azure AD application gallery to start managing provisioning to Sigma Computing. If you have previously set up Sigma Computing for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Sigma Computing, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to Sigma Computing, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to Sigma Computing
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Sigma Computing**.
-9. Review the user attributes that are synchronized from Azure AD to Sigma Computing in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Sigma Computing for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Sigma Computing API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Sigma Computing in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Sigma Computing for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Sigma Computing API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported For Filtering|
||||
This section guides you through the steps to configure the Azure AD provisioning
|displayName|String|&check;|
|members|Reference|
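Since Sigma Computing provisioning also covers groups and memberships, here is a hedged sketch of a SCIM group as Azure AD could push it: `displayName` is the matching attribute from the table and `members` carries references to already-provisioned users. The group name and IDs are invented placeholders:

```python
import json

group = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
    "displayName": "Analysts",          # matching attribute (filterable)
    "members": [
        {"value": "user-id-1"},         # placeholder SCIM id of a provisioned user
        {"value": "user-id-2"},
    ],
}
payload = json.dumps(group)
```

Membership changes arrive as updates to the `members` list, which is why those users must be in provisioning scope themselves.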
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for Sigma Computing, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)

## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Smallstep Ssh Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/smallstep-ssh-provisioning-tutorial.md
# Tutorial: Configure Smallstep SSH for automatic user provisioning
-This tutorial describes the steps you need to perform in both Smallstep SSH and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Smallstep SSH](https://smallstep.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Smallstep SSH and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Smallstep SSH](https://smallstep.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both Smallstep SSH and
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A [Smallstep SSH](https://smallstep.com/sso-ssh/) account.
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and Smallstep SSH](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Smallstep SSH](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Smallstep SSH to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
## Step 3. Add Smallstep SSH from the Azure AD application gallery
-Add Smallstep SSH from the Azure AD application gallery to start managing provisioning to Smallstep SSH. If you have previously setup Smallstep SSH for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add Smallstep SSH from the Azure AD application gallery to start managing provisioning to Smallstep SSH. If you have previously set up Smallstep SSH for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Smallstep SSH, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add other roles.
+* When assigning users and groups to Smallstep SSH, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to Smallstep SSH
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Smallstep SSH**.
-9. Review the user attributes that are synchronized from Azure AD to Smallstep SSH in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Smallstep SSH for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Smallstep SSH API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Smallstep SSH in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Smallstep SSH for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Smallstep SSH API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported For Filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
|displayName|String|&check;|
|members|Reference|
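The **Supported For Filtering** column above reflects which attributes the target's SCIM endpoint can filter on, which is what constrains the choice of matching attribute in step 9. As a rough sketch of the lookup the provisioning service performs (the base URL and attribute names here are illustrative, not the documented Smallstep SSH API):

```python
from urllib.parse import urlencode

def scim_filter_url(base_url: str, attribute: str, value: str) -> str:
    """Build the SCIM /Users query used to look up an existing
    account by its matching attribute (RFC 7644, section 3.4.2.2)."""
    escaped = value.replace('"', '\\"')  # quotes inside the value must be escaped
    query = urlencode({"filter": f'{attribute} eq "{escaped}"'})
    return f"{base_url}/Users?{query}"

# Hypothetical endpoint, matching on userName:
print(scim_filter_url("https://example.com/scim", "userName", "alice@contoso.com"))
```

If the chosen matching attribute cannot appear in such a filter on the target side, update operations cannot locate existing accounts.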
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for Smallstep SSH, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Twingate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/twingate-provisioning-tutorial.md
# Tutorial: Configure Twingate for automatic user provisioning
-This tutorial describes the steps you need to perform in both Twingate and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Twingate](https://www.twingate.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Twingate and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Twingate](https://www.twingate.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both Twingate and Azure Active Directory (Azure AD) to configure automatic user provisioning.
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A Twingate tenant in a product tier that supports identity provider integration. See [Twingate pricing](https://www.twingate.com/pricing/) for details on different product tiers.
* A user account in Twingate with Admin permissions.
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and Twingate](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Twingate](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Twingate to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
## Step 3. Add Twingate from the Azure AD application gallery
-Add Twingate from the Azure AD application gallery to start managing provisioning to Twingate. If you have previously setup Twingate for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add Twingate from the Azure AD application gallery to start managing provisioning to Twingate. If you have previously set up Twingate for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Twingate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to Twingate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to Twingate
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Twingate**.
-9. Review the user attributes that are synchronized from Azure AD to Twingate in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Twingate for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Twingate API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Twingate in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Twingate for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Twingate API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported For Filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
|displayName|String| |
|members|Reference|
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for Twingate, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Enable Your Tenant Verifiable Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/enable-your-tenant-verifiable-credentials.md
Before you create the credential, you need to first give the signed-in user the
![Screenshot that shows the Add role assignment page in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
>[!IMPORTANT]
- >By default, container creators get the Owner role assigned. The Owner role isn't enough on its own. Your account needs the Storage Blob Data Reader role. For more information, see [Use the Azure portal to assign an Azure role for access to blob and queue data](../../storage/common/storage-auth-aad-rbac-portal.md).
+ >By default, container creators get the Owner role assigned. The Owner role isn't enough on its own. Your account needs the Storage Blob Data Reader role. For more information, see [Use the Azure portal to assign an Azure role for access to blob and queue data](../../storage/blobs/assign-azure-role-data-access.md).
## Set up Verifiable Credentials Preview
Now that you're issued the verifiable credential from our tenant, verify it by u
Now that you have the sample code that issues a verifiable credential from your issuer, continue to the next section. You'll use your own identity provider to authenticate users who are trying to get verifiable credentials. You'll also use your DID to sign presentation requests.
> [!div class="nextstepaction"]
-> [Tutorial - Issue and verify verifiable credentials by using your tenant](issue-verify-verifiable-credentials-your-tenant.md)
--
+> [Tutorial - Issue and verify verifiable credentials by using your tenant](issue-verify-verifiable-credentials-your-tenant.md)
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md
In this article we will summarize migration details for:
Azure Migrate offers a unified platform to assess and migrate to Azure on-premises servers, infrastructure, applications, and data. For AKS, you can use Azure Migrate for the following tasks:
* [Containerize ASP.NET applications and migrate to AKS](../migrate/tutorial-app-containerization-aspnet-kubernetes.md)
-* [Containerize Java web applications and migrate to AKS](../migrate/tutorial-containerize-java-kubernetes.md)
+* [Containerize Java web applications and migrate to AKS](/azure/aks/tutorial-app-containerization-java-kubernetes)
## AKS with Standard Load Balancer and Virtual Machine Scale Sets
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/api-server-authorized-ip-ranges.md
For more information, see [Security concepts for applications and clusters in AK
<!-- LINKS - external -->
[cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
-[dev-spaces-ranges]: ../dev-spaces/index.yml#aks-cluster-network-requirements
+[dev-spaces-ranges]: /previous-versions/azure/dev-spaces/#aks-cluster-network-requirements
[kubenet]: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet
<!-- LINKS - internal -->
For more information, see [Security concepts for applications and clusters in AK
[install-azure-cli]: /cli/azure/install-azure-cli
[operator-best-practices-cluster-security]: operator-best-practices-cluster-security.md
[route-tables]: ../virtual-network/manage-route-table.md
-[standard-sku-lb]: load-balancer-standard.md
+[standard-sku-lb]: load-balancer-standard.md
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/certificate-rotation.md
Title: Rotate certificates in Azure Kubernetes Service (AKS)
description: Learn how to rotate your certificates in an Azure Kubernetes Service (AKS) cluster.
Previously updated : 7/1/2021
Last updated : 7/13/2021
# Rotate certificates in Azure Kubernetes Service (AKS)
az vm run-command invoke -g MC_rg_myAKSCluster_region -n vm-name --command-id Ru
* Check expiration date of certificate on one VMSS agent node
```console
-az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-id 0 --command-id RunShellScript --query 'value[0].message' -otsv --scripts "openssl x509 -in /etc/kubernetes/certs/client.crt -noout -enddate"
+az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-id 0 --command-id RunShellScript --query 'value[0].message' -otsv --scripts "openssl x509 -in /etc/kubernetes/certs/apiserver.crt -noout -enddate"
```
## Rotate your cluster certificates
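The `openssl x509 -noout -enddate` check above prints a line such as `notAfter=May  1 12:00:00 2022 GMT`. A small helper for turning that output into days remaining (a sketch, not part of the AKS or Azure CLI tooling):

```python
from datetime import datetime, timezone

def parse_not_after(line: str) -> datetime:
    """Parse openssl's 'notAfter=<date> GMT' output into an aware datetime."""
    date_str = line.strip().split("=", 1)[1]
    # openssl prints e.g. 'May  1 12:00:00 2022 GMT'; %Z accepts 'GMT'
    parsed = datetime.strptime(date_str, "%b %d %H:%M:%S %Y %Z")
    return parsed.replace(tzinfo=timezone.utc)

def days_until_expiry(line: str) -> int:
    """Whole days from now until the certificate expires (negative if expired)."""
    return (parse_not_after(line) - datetime.now(timezone.utc)).days
```

Running the check across nodes and alerting when `days_until_expiry` drops below a threshold is a reasonable way to decide when rotation is due.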
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
[aks-portal]: ./kubernetes-walkthrough-portal.md
[aks-scale]: ./tutorial-kubernetes-scale.md
[aks-upgrade]: ./upgrade-cluster.md
-[azure-dev-spaces]: ../dev-spaces/index.yml
+[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
[azure-devops]: ../devops-project/overview.md
[azure-disk]: ./azure-disks-dynamic-pv.md
[azure-files]: ./azure-files-dynamic-pv.md
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
[kubernetes-rbac]: concepts-identity.md#kubernetes-rbac
[concepts-identity]: concepts-identity.md
[concepts-storage]: concepts-storage.md
-[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
+[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
aks Kubernetes Walkthrough Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-powershell.md
To learn more about AKS, and walk through a complete code to deployment example,
<!-- LINKS - external -->
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[azure-dev-spaces]: ../dev-spaces/index.yml
+[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
To learn more about AKS, and walk through a complete code to deployment example,
[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
[sp-delete]: kubernetes-service-principal.md#additional-considerations
[kubernetes-dashboard]: kubernetes-dashboard.md
-[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
+[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
aks Kubernetes Walkthrough Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-rm-template.md
To learn more about AKS, and walk through a complete code to deployment example,
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[azure-dev-spaces]: ../dev-spaces/index.yml
+[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
[aks-quickstart-templates]: https://azure.microsoft.com/resources/templates/?term=Azure+Kubernetes+Service <!-- LINKS - internal -->
To learn more about AKS, and walk through a complete code to deployment example,
[kubernetes-service]: concepts-network.md#services
[kubernetes-dashboard]: kubernetes-dashboard.md
[ssh-keys]: ../virtual-machines/linux/create-ssh-keys-detailed.md
-[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
+[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/load-balancer-standard.md
Frequently the root cause of SNAT exhaustion is an anti-pattern for how outbound
### Steps
1. Check if your connections remain idle for a long time and rely on the default idle timeout for releasing that port. If so, the default timeout of 30 minutes might need to be reduced for your scenario.
2. Investigate how your application is creating outbound connectivity (for example, code review or packet capture).
-3. Determine if this activity is expected behavior or whether the application is misbehaving. Use [metrics](../load-balancer/load-balancer-standard-diagnostics.md) and [logs](../load-balancer/load-balancer-monitor-log.md) in Azure Monitor to substantiate your findings. Use "Failed" category for SNAT Connections metric for example.
+3. Determine if this activity is expected behavior or whether the application is misbehaving. Use [metrics](../load-balancer/load-balancer-standard-diagnostics.md) and [logs](../load-balancer/monitor-load-balancer.md) in Azure Monitor to substantiate your findings. For example, use the "Failed" category for the SNAT Connections metric.
4. Evaluate if appropriate [patterns](#design-patterns) are followed.
5. Evaluate if SNAT port exhaustion should be mitigated with [additional Outbound IP addresses + additional Allocated Outbound Ports](#configure-the-allocated-outbound-ports).
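As background for the mitigation in step 5: a Standard Load Balancer provides 64,000 SNAT ports per outbound frontend IP, so the ports allocated per node multiplied by the node count must fit within that budget. A back-of-the-envelope sketch (illustrative arithmetic only, not an Azure API):

```python
SNAT_PORTS_PER_IP = 64_000  # SNAT ports available per outbound frontend IP

def max_nodes(outbound_ips: int, ports_per_node: int) -> int:
    """Largest backend node count a given allocated-outbound-ports
    setting supports before the SNAT port budget is exhausted."""
    return (SNAT_PORTS_PER_IP * outbound_ips) // ports_per_node

# For example, 1,024 allocated ports per node:
print(max_nodes(1, 1024))  # 62 nodes with a single outbound IP
print(max_nodes(2, 1024))  # 125 nodes with two outbound IPs
```

This is why adding outbound IPs, lowering allocated ports per node, or both, relieves SNAT exhaustion as clusters scale.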
Learn more about using Internal Load Balancer for Inbound traffic at the [AKS In
[requirements]: #requirements-for-customizing-allocated-outbound-ports-and-idle-timeout
[use-multiple-node-pools]: use-multiple-node-pools.md
[troubleshoot-snat]: #troubleshooting-snat
-[service-tags]: ../virtual-network/network-security-groups-overview.md#service-tags
+[service-tags]: ../virtual-network/network-security-groups-overview.md#service-tags
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-identity.md
There are two levels of access needed to fully operate an AKS cluster:
To access other Azure services, like Cosmos DB, Key Vault, or Blob Storage, the pod needs access credentials. You could define access credentials with the container image or inject them as a Kubernetes secret. Either way, you would need to manually create and assign them. Usually, these credentials are reused across pods and aren't regularly rotated.
-With pod-managed identities for Azure resources, you automatically request access to services through Azure AD. Pod-managed identities is now currently in preview for AKS. Please refer to the [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) documentation to get started.
+With pod-managed identities for Azure resources, you automatically request access to services through Azure AD. Pod-managed identities are currently in preview for AKS. Please refer to the [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)](./use-azure-ad-pod-identity.md) documentation to get started.
Instead of manually defining credentials for pods, pod-managed identities request an access token in real time, using it to access only their assigned services. In AKS, there are two components that handle the operations to allow pods to use managed identities:
For more information about cluster operations in AKS, see the following best pra
[aks-best-practices-scheduler]: operator-best-practices-scheduler.md [aks-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md [aks-best-practices-cluster-isolation]: operator-best-practices-cluster-isolation.md
-[azure-ad-rbac]: azure-ad-rbac.md
+[azure-ad-rbac]: azure-ad-rbac.md
app-service Configure Authentication User Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-user-identities.md
For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/ap
For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` is not populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. See [working with client identities in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities) for more information.
-For .NET Core, [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) supports populating the current user with App Service authentication. To learn more, you can read about it on the [Microsoft.Identity.Web wiki](https://github.com/AzureAD/microsoft-identity-web/wiki/1.2.0#integration-with-azure-app-services-authentication-of-web-apps-running-with-microsoftidentityweb), or see it demonstrated in [this tutorial for a web app accessing Microsoft Graph](/azure/app-service/scenario-secure-app-access-microsoft-graph-as-user?tabs=command-line#install-client-library-packages).
+For .NET Core, [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) supports populating the current user with App Service authentication. To learn more, you can read about it on the [Microsoft.Identity.Web wiki](https://github.com/AzureAD/microsoft-identity-web/wiki/1.2.0#integration-with-azure-app-services-authentication-of-web-apps-running-with-microsoftidentityweb), or see it demonstrated in [this tutorial for a web app accessing Microsoft Graph](./scenario-secure-app-access-microsoft-graph-as-user.md?tabs=command-line#install-client-library-packages).
## Access user claims using the API
If the [token store](overview-authentication-authorization.md#token-store) is en
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
+> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
Product support for the [Azure-supported Azul Zulu JDK](https://www.azul.com/dow
Visit the [Azure for Java Developers](/java/azure/) center to find Azure quickstarts, tutorials, and Java reference documentation.
-General questions about using App Service for Linux that aren't specific to the Java development are answered in the [App Service Linux FAQ](faq-app-service-linux.yml).
+General questions about using App Service for Linux that aren't specific to the Java development are answered in the [App Service Linux FAQ](faq-app-service-linux.yml).
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-zip.md
enables version control, package restore, MSBuild, and more.
## More resources * [Kudu: Deploying from a zip file](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file)
-* [Azure App Service Deployment Credentials](deploy-ftp.md)
+* [Azure App Service Deployment Credentials](deploy-ftp.md)
app-service Monitor App Service Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-app-service-reference.md
This section lists all the automatically collected platform metrics collected fo
|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics | |-|--|
-| App Service Plans | [Microsoft.Web/serverfarms](/azure/azure-monitor/essentials/metrics-supported#microsoftwebserverfarms)
-| Web apps | [Microsoft.Web/sites](/azure/azure-monitor/essentials/metrics-supported#microsoftwebsites) |
-| Staging slots | [Microsoft.Web/sites/slots](/azure/azure-monitor/essentials/metrics-supported#microsoftwebsitesslots)
-| App Service Environment | [Microsoft.Web/hostingEnvironments](/azure/azure-monitor/essentials/metrics-supported#microsoftwebhostingenvironments)
-| App Service Environment Front-end | [Microsoft.Web/hostingEnvironments/multiRolePools](/azure/azure-monitor/essentials/metrics-supported#microsoftwebhostingenvironmentsmultirolepools)
+| App Service Plans | [Microsoft.Web/serverfarms](../azure-monitor/essentials/metrics-supported.md#microsoftwebserverfarms)
+| Web apps | [Microsoft.Web/sites](../azure-monitor/essentials/metrics-supported.md#microsoftwebsites) |
+| Staging slots | [Microsoft.Web/sites/slots](../azure-monitor/essentials/metrics-supported.md#microsoftwebsitesslots)
+| App Service Environment | [Microsoft.Web/hostingEnvironments](../azure-monitor/essentials/metrics-supported.md#microsoftwebhostingenvironments)
+| App Service Environment Front-end | [Microsoft.Web/hostingEnvironments/multiRolePools](../azure-monitor/essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools)
-For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/platform/metrics-supported.md).
+For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
## Metric Dimensions
-For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
App Service doesn't have any metrics that contain dimensions.
This section lists the types of resource logs you can collect for App Service.
<sup>1</sup> For Java SE apps, add "$WEBSITE_AZMON_PREVIEW_ENABLED" to the app settings and set it to 1 or to true.
-For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
## Azure Monitor Logs tables
The following table lists common operations related to App Service that may be c
|Get Zipped Container Logs for Web App| Get container logs | |Restore Web App From Backup Blob| App restored from backup|
-For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
## See Also - See [Monitoring Azure App Service](monitor-app-service.md) for a description of monitoring Azure App Service.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
app-service Monitor App Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-app-service.md
Last updated 04/16/2021
# Monitoring App Service
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by App Service and shipped to [Azure Monitor](/azure/azure-monitor/overview). You can also use [built-in diagnostics to monitor resources](troubleshoot-diagnostic-logs.md) to assist with debugging an App Service app. If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by App Service and shipped to [Azure Monitor](../azure-monitor/overview.md). You can also use [built-in diagnostics to monitor resources](troubleshoot-diagnostic-logs.md) to assist with debugging an App Service app. If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
> [!NOTE] > Azure Monitor integration with App Service is in [preview](https://aka.ms/appsvcblog-azmon).
When you have critical applications and business processes relying on Azure reso
## Monitoring data
-App Service collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data-from-Azure-resources).
+App Service collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
See [Monitoring *App Service* data reference](monitor-app-service-reference.md) for detailed information on the metrics and logs metrics created by App Service.
Platform metrics and the Activity log are collected and stored automatically, bu
Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *App Service* are listed in [App Service monitoring data reference](monitor-app-service-reference.md#resource-logs).
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *App Service* are listed in [App Service monitoring data reference](monitor-app-service-reference.md#resource-logs).
The metrics and logs you can collect are discussed in the following sections. ## Analyzing metrics
-You can analyze metrics for *App Service* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/platform/metrics-getting-started) for details on using this tool.
+You can analyze metrics for *App Service* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
For a list of platform metrics collected for App Service, see [Monitoring App Service data reference metrics](monitor-app-service-reference.md#metrics)
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
## Analyzing logs
Data in Azure Monitor Logs is stored in tables where each table has its own set
All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor-preview).
-The [Activity log](/azure/azure-monitor/platform/activity-log) is a type of platform log that provides insight into subscription-level events. You can view it independently or route to Azure Monitor Logs. Routing to Azure Monitor Logs gives the benefit of using Log Analytics to run complex queries.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log that provides insight into subscription-level events. You can view it independently or route to Azure Monitor Logs. Routing to Azure Monitor Logs gives the benefit of using Log Analytics to run complex queries.
For a list of types of resource logs collected for App Service, see [Monitoring App Service data reference](monitor-app-service-reference.md#resource-logs)
See [Azure Monitor queries for App Service](https://github.com/microsoft/AzureMo
## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/platform/alerts-metric-overview), [logs](/azure/azure-monitor/platform/alerts-unified-log), and the [activity log](/azure/azure-monitor/platform/activity-log-alerts).
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md).
-If you're running an application on App Service [Azure Monitor Application Insights](/azure/azure-monitor/overview#application-insights) may offer additional types of alerts.
+If you're running an application on App Service [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts.
The following table lists common and recommended alert rules for App Service.
The following table lists common and recommended alert rules for App Service.
- See [Monitoring App Service data reference](monitor-app-service-reference.md) for a reference of metrics, logs, and other important values created by App Service. -- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-instances-health-check.md
Last updated 12/03/2020
+ # Monitor App Service instances using Health check
-![Health check failure][2]
+This article uses Health check in the Azure portal to monitor App Service instances. Health check increases your application's availability by re-routing requests away from unhealthy instances, and replacing instances if they remain unhealthy. Your [App Service plan](./overview-hosting-plans.md) should be scaled to two or more instances to fully utilize Health check. The Health check path should check critical components of your application. For example, if your application depends on a database and a messaging system, the Health check endpoint should connect to those components. If the application cannot connect to a critical component, then the path should return a 500-level response code to indicate the app is unhealthy.
-This article uses Health check in the Azure portal to monitor App Service instances. Health check increases your application's availability by removing unhealthy instances. Your [App Service plan](./overview-hosting-plans.md) should be scaled to two or more instances to use Health check. The Health check path should check critical components of your application. For example, if your application depends on a database and a messaging system, the Health check endpoint should connect to those components. If the application cannot connect to a critical component, then the path should return a 500-level response code to indicate the app is unhealthy.
+![Health check failure][1]
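The guidance above — return a 500-level status code when a critical dependency is unreachable — can be sketched as a minimal, framework-agnostic handler. This is an illustration only; the function name and the dependency probes (`database`, `queue`) are hypothetical placeholders for your own checks:

```python
def check_health(dependency_checks):
    """Return (status_code, body) for a Health check request.

    dependency_checks: mapping of dependency name -> zero-argument callable
    that returns True when the dependency is reachable.
    """
    failed = [name for name, probe in dependency_checks.items() if not probe()]
    if failed:
        # A 500-level response tells App Service this instance is unhealthy.
        return 500, "unhealthy: " + ", ".join(failed)
    return 200, "healthy"

# Example with hypothetical probes (a real check would open a connection):
status, body = check_health({"database": lambda: True, "queue": lambda: False})
```

Wire the equivalent of `check_health` into whatever route your app exposes as the Health check path.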
## What App Service does with Health checks - When given a path on your app, Health check pings this path on all instances of your App Service app at 1-minute intervals.-- If an instance doesn't respond with a status code between 200-299 (inclusive) after two or more requests, or fails to respond to the ping, it will be deemed unhealthy and requests will not be routed to that instance.-- Health check will continue to ping the unhealthy instance after it has been removed from the load balancer. If the instance continues to respond unseccessfully for one hour, it will be replaced with new VM.
+- If an instance doesn't respond with a status code between 200-299 (inclusive) after two or more requests, or fails to respond to the ping, the system determines it's unhealthy and removes it.
+- After removal, Health check continues to ping the unhealthy instance. If it continues to respond unsuccessfully, App Service restarts the underlying VM in an effort to return the instance to a healthy state.
+- If an instance remains unhealthy for one hour, it will be replaced with a new instance.
- Furthermore, when scaling up or out, App Service pings the Health check path to ensure new instances are ready. > [!NOTE] > Health check doesn't follow 302 redirects. At most one instance will be replaced per hour, with a maximum of three instances per day per App Service Plan.
->
-> On App Service Environments, if an instance continues to fail for one hour it will not be automatically replaced with a new instance due to the limited number of extra virtual machines on the stamp.
->
+>
## Enable Health Check
In addition to configuring the Health check options, you can also configure the
| App setting name | Allowed values | Description | |-|-|-|
-|`WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | 2 - 10 | The maximum number of ping failures. For example, when set to `2`, your instances will be removed after `2` failed pings. Furthermore, when you are scaling up or out, App Service pings the Health check path to ensure new instances are ready. |
-|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 0 - 100 | To avoid overwhelming healthy instances, no more than half of the instances will be excluded. For example, if an App Service Plan is scaled to four instances and three are unhealthy, at most two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. To override this behavior, set app setting to a value between `0` and `100`. A higher value means more unhealthy instances will be removed (default is 50). |
+|`WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | 2 - 10 | The required number of failed requests for an instance to be deemed unhealthy and removed from the load balancer. For example, when set to `2`, your instances will be removed after `2` failed pings. (Default value is `10`) |
+|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 0 - 100 | By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. <br /> To override this behavior, set app setting to a value between `0` and `100`. A higher value means more unhealthy instances will be removed (default value is `50`). |
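As a sketch of the exclusion rule in the table above (assuming the documented default of 50 percent, and the documented worst-case behavior where no instances are excluded when all are unhealthy):

```python
def max_excluded(total_instances, unhealthy_instances, max_unhealthy_percent=50):
    """How many unhealthy instances the load balancer may exclude."""
    if unhealthy_instances >= total_instances:
        # Worst case: every instance is unhealthy, so none are excluded.
        return 0
    cap = total_instances * max_unhealthy_percent // 100
    return min(unhealthy_instances, cap)

max_excluded(4, 3)  # 2 — the documented example: four instances, three unhealthy
```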
#### Authentication and security
-Health check integrates with App Service's authentication and authorization features. No additional settings are required if these security features are enabled. However, if you're using your own authentication system, the Health check path must allow anonymous access. If the site is HTTP**S**-Only enabled, the Health check request will be sent via HTTP**S**.
+Health check integrates with App Service's [authentication and authorization features](overview-authentication-authorization.md). No additional settings are required if these security features are enabled.
-Large enterprise development teams often need to adhere to security requirements for exposed APIs. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. You can secure the Health check endpoint by requiring the `User-Agent` of the incoming request matches `HealthCheck/1.0`. The User-Agent can't be spoofed since the request would already secured by prior security features.
+If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. You can further secure the Health check endpoint by requiring that the `User-Agent` of the incoming request matches `HealthCheck/1.0`. The User-Agent can't be spoofed since the request would already be secured by the prior security features.
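A minimal sketch of the `User-Agent` check described above (the function name is hypothetical; adapt it to your framework's request object):

```python
def is_health_check_request(headers):
    """Allow the request only when it carries the Health check User-Agent."""
    return headers.get("User-Agent") == "HealthCheck/1.0"

is_health_check_request({"User-Agent": "HealthCheck/1.0"})  # True
```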
## Monitoring
After providing your application's Health check path, you can monitor the health
## Limitations
-Health check should not be enabled on Premium Functions sites. Due to the rapid scaling of Premium Functions, the health check requests can cause unnecessary fluctuations in HTTP traffic. Premium Functions have their own internal health probes that are used to inform scaling decisions.
+- Health check should not be enabled on Premium Functions sites. Due to the rapid scaling of Premium Functions, the Health check requests can cause unnecessary fluctuations in HTTP traffic. Premium Functions have their own internal health probes that are used to inform scaling decisions.
+- Health check can be enabled for **Free** and **Shared** App Service Plans so you can have metrics on the site's health and set up alerts, but because **Free** and **Shared** sites cannot scale out, any unhealthy instances will not be replaced. You should scale up to the **Basic** tier or higher so you can scale out to 2 or more instances and utilize the full benefit of Health check. This is recommended for production-facing applications as it will increase your app's availability and performance.
+
+## Frequently Asked Questions
+
+### What happens if my app is running on a single instance?
+
+If your app is only scaled to one instance and becomes unhealthy, it will not be removed from the load balancer because that would take your application down entirely. Scale out to two or more instances to get the re-routing benefit of Health check. If your app is running on a single instance, you can still use Health check's [monitoring](#monitoring) feature to keep track of your application's health.
+
+### Why are the Health check requests not showing in my frontend logs?
+
+The Health check requests are sent to your site internally, so they will not show in [the frontend logs](troubleshoot-diagnostic-logs.md#enable-web-server-logging). This also means the requests will have an origin of `127.0.0.1`, since they are sent internally. You can add log statements in your Health check code to keep logs of when your Health check path is pinged.
+
+### Are the Health check requests sent over HTTP or HTTPS?
+
+The Health check requests will be sent via HTTPS when [HTTPS Only](configure-ssl-bindings.md#enforce-https) is enabled on the site. Otherwise, they are sent over HTTP.
+
+### What if I have multiple apps on the same App Service Plan?
+
+Unhealthy instances will always be removed from the load balancer rotation regardless of other apps on the App Service Plan (up to the percentage specified in [`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT`](#configuration)). When an app on an instance remains unhealthy for over one hour, the instance will only be replaced if all other apps with Health check enabled are also unhealthy. Apps that do not have Health check enabled will not be taken into account.
+
+#### Example
+
+Imagine you have two applications (or one app with a slot) with Health check enabled, called App A and App B. They are on the same App Service Plan, which is scaled out to four instances. If App A becomes unhealthy on two instances, the load balancer will stop sending requests to App A on those two instances. Requests will still be routed to App B on those instances assuming App B is healthy. If App A remains unhealthy for over an hour on those two instances, those instances will only be replaced if App B is **also** unhealthy on those instances. If App B is healthy, the instances will not be replaced.
+
+![Visual diagram explaining the example scenario above.][2]
+
+> [!NOTE]
+> If there were another site or slot on the Plan (Site C) without Health check enabled, it would not be taken into consideration for the instance replacement.
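The replacement rule in the example can be sketched as follows (the function and the tuple layout are illustrative: each tuple is one app on the instance, as `(healthy, health_check_enabled)`):

```python
def should_replace_instance(apps):
    """apps: list of (healthy, health_check_enabled) tuples per app.

    The instance is replaced only if every app with Health check enabled
    is unhealthy; apps without Health check enabled are ignored.
    """
    monitored = [healthy for healthy, enabled in apps if enabled]
    return bool(monitored) and not any(monitored)

# App A unhealthy, App B healthy, Site C (no Health check) unhealthy:
should_replace_instance([(False, True), (True, True), (False, False)])  # False
```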
+
+### What if all my instances are unhealthy?
+
+In the scenario where all instances of your application are unhealthy, App Service removes instances from the load balancer only up to the percentage specified in `WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT`, because taking all unhealthy instances out of the load balancer rotation would effectively cause an outage for your application.
## Next steps - [Create an Activity Log Alert to monitor all Autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) - [Create an Activity Log Alert to monitor all failed Autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
-[1]: ./media/app-service-monitor-instances-health-check/health-check-success-diagram.png
-[2]: ./media/app-service-monitor-instances-health-check/health-check-failure-diagram.png
+[1]: ./media/app-service-monitor-instances-health-check/health-check-diagram.png
+[2]: ./media/app-service-monitor-instances-health-check/health-check-multi-app-diagram.png
[3]: ./media/app-service-monitor-instances-health-check/azure-portal-navigation-health-check.png
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/reference-app-settings.md
The following environment variables are related to app deployment. For variables
| Setting name| Description | |-|-|
+| `DEPLOYMENT_BRANCH`| For [local Git](deploy-local-git.md) or [cloud Git](deploy-continuous-deployment.md) deployment (such as GitHub), set to the branch in Azure you want to deploy to. By default, it is `master`. |
| `WEBSITE_RUN_FROM_PACKAGE`| Set to `1` to run the app from a local ZIP package, or set to the URL of an external URL to run the app from a remote ZIP package. For more information, see [Run your app in Azure App Service directly from a ZIP package](deploy-run-package.md). | | `WEBSITE_USE_ZIP` | Deprecated. Use `WEBSITE_RUN_FROM_PACKAGE`. | | `WEBSITE_RUN_FROM_ZIP` | Deprecated. Use `WEBSITE_RUN_FROM_PACKAGE`. |
Kudu build configuration applies to native Windows apps and is used to control t
| `SCM_BUILD_ARGS` | Add things at the end of the msbuild command line, such that it overrides any previous parts of the default command line. | To do a clean build: `-t:Clean;Compile`| | `SCM_SCRIPT_GENERATOR_ARGS` | Kudu uses the `azure site deploymentscript` command described [here](http://blog.amitapple.com/post/38418009331/azurewebsitecustomdeploymentpart2) to generate a deployment script. It automatically detects the language framework type and determines the parameters to pass to the command. This setting overrides the automatically generated parameters. | To treat your repository as plain content files: `--basic -p <folder-to-deploy>` | | `SCM_TRACE_LEVEL` | Build trace level. The default is `1`. Set to higher values, up to 4, for more tracing. | `4` |
-| `SCM_COMMAND_IDLE_TIMEOUT` | Time out in seconds for each command that the build process launches to wait before without producing any output. After that, the command is considered idle and killed. The default is `60` (one minute). In Azure, there's also a general idle request timeout that disconnects clients after 230 seconds. However, the command will still continue running server-side after that. | |
-| `SCM_LOGSTREAM_TIMEOUT` | Time out of inactivity in seconds before stopping log streaming. The default is `1800` (30 minutes).| |
+| `SCM_COMMAND_IDLE_TIMEOUT` | Time-out in seconds that each command launched by the build process is allowed to wait without producing any output. After that, the command is considered idle and killed. The default is `60` (one minute). In Azure, there's also a general idle request timeout that disconnects clients after 230 seconds. However, the command will still continue running server-side after that. | |
+| `SCM_LOGSTREAM_TIMEOUT` | Time-out of inactivity in seconds before stopping log streaming. The default is `1800` (30 minutes).| |
| `SCM_SITEEXTENSIONS_FEED_URL` | URL of the site extensions gallery. The default is `https://www.nuget.org/api/v2/`. The URL of the old feed is `http://www.siteextensions.net/api/v2/`. | | | `SCM_USE_LIBGIT2SHARP_REPOSITORY` | Set to `0` to use git.exe instead of libgit2sharp for git operations. | | | `WEBSITE_LOAD_USER_PROFILE` | In case of the error `The specified user does not have a valid profile.` during ASP.NET build automation (such as during Git deployment), set this variable to `1` to load a full user profile in the build environment. This setting is only applicable when `WEBSITE_COMPUTE_MODE` is `Dedicated`. | |
-| `WEBSITE_SCM_IDLE_TIMEOUT_IN_MINUTES` | Time out in minutes for the SCM (Kudu) site. The default is `20`. | |
+| `WEBSITE_SCM_IDLE_TIMEOUT_IN_MINUTES` | Time-out in minutes for the SCM (Kudu) site. The default is `20`. | |
| `SCM_DO_BUILD_DURING_DEPLOYMENT` | With [ZIP deploy](deploy-zip.md), the deployment engine assumes that a ZIP file is ready to run as-is and doesn't run any build automation. To enable the same build automation as in [Git deploy](deploy-local-git.md), set to `true`. | <!--
WEBSITE_DISABLE_PRELOAD_HANG_MITIGATION
| `DIAGNOSTICS_TEXTTRACEMAXLOGFILESIZEBYTES` | Maximum size of the log file in bytes. The default is `131072` (128 KB). || | `DIAGNOSTICS_TEXTTRACEMAXLOGFOLDERSIZEBYTES` | Maximum size of the log folder in bytes. The default is `1048576` (1 MB). || | `DIAGNOSTICS_TEXTTRACEMAXNUMLOGFILES` | Maximum number of log files to keep. The default is `20`. | |
-| `DIAGNOSTICS_TEXTTRACETURNOFFPERIOD` | Time out in milliseconds to keep application logging enabled. The default is `43200000` (12 hours). ||
+| `DIAGNOSTICS_TEXTTRACETURNOFFPERIOD` | Time-out in milliseconds to keep application logging enabled. The default is `43200000` (12 hours). ||
| `WEBSITE_LOG_BUFFERING` | By default, log buffering is enabled. Set to `0` to disable it. || | `WEBSITE_ENABLE_PERF_MODE` | For native Windows apps, set to `TRUE` to turn off IIS log entries for successful requests returned within 10 seconds. This is a quick way to do performance benchmarking by removing extended logging. ||
WEBSITE_SOCKET_STATISTICS_ENABLED
| `WEBSITE_NETWORK_HEALTH_DNS_ENDPOINTS` | | | `WEBSITE_NETWORK_HEALTH_URI_ENDPOINTS` | | | `WEBSITE_NETWORK_HEALTH_INTEVALSECS` | Interval of the network health check in seconds. The minimum value is 30 seconds. | |
-| `WEBSITE_NETWORK_HEALTH_TIMEOUT_INTEVALSECS` | Time out of the network health check in seconds. | |
+| `WEBSITE_NETWORK_HEALTH_TIMEOUT_INTEVALSECS` | Time-out of the network health check in seconds. | |
--> <!-- | CONTAINER_WINRM_USERNAME |
The following environment variables are related to [App Service authentication](
|-|-| | `WEBSITE_AUTH_DISABLE_IDENTITY_FLOW` | When set to `true`, disables assigning the thread principal identity in ASP.NET-based web applications (including v1 Function Apps). This is designed to allow developers to protect access to their site with auth, but still have it use a separate login mechanism within their app logic. The default is `false`. | | `WEBSITE_AUTH_HIDE_DEPRECATED_SID` | `true` or `false`. The default value is `false`. This is a setting for the legacy Azure Mobile Apps integration for Azure App Service. Setting this to `true` resolves an issue where the SID (security ID) generated for authenticated users might change if the user changes their profile information. Changing this value may result in existing Azure Mobile Apps user IDs changing. Most apps do not need to use this setting. |
-| `WEBSITE_AUTH_NONCE_DURATION`| A _timespan_ value in the form `_hours_:_minutes_:_seconds_`. The default value is `00:05:00`, or 5 minutes. This setting controls the lifetime of the [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) generated for all browser-driven logins. If a login fails to complete in the specified time, the login flow will be retried automatically. This application setting is intend for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.nonceExpirationInterval` configuration value. |
+| `WEBSITE_AUTH_NONCE_DURATION`| A _timespan_ value in the form `_hours_:_minutes_:_seconds_`. The default value is `00:05:00`, or 5 minutes. This setting controls the lifetime of the [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) generated for all browser-driven logins. If a login fails to complete in the specified time, the login flow will be retried automatically. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.nonceExpirationInterval` configuration value. |
| `WEBSITE_AUTH_PRESERVE_URL_FRAGMENT` | When set to `true` and users click on app links that contain URL fragments, the login process will ensure that the URL fragment part of your URL does not get lost in the login redirect process. For more information, see [Customize sign-in and sign-out in Azure App Service authentication](configure-authentication-customize-sign-in-out.md#preserve-url-fragments). | | `WEBSITE_AUTH_USE_LEGACY_CLAIMS` | To maintain backward compatibility across upgrades, the authentication module uses the legacy claims mapping of short to long names in the `/.auth/me` API, so certain mappings are excluded (e.g. "roles"). To get the more modern version of the claims mappings, set this variable to `False`. In the "roles" example, it would be mapped to the long claim name "http://schemas.microsoft.com/ws/2008/06/identity/claims/role". |
-| `WEBSITE_AUTH_DISABLE_WWWAUTHENTICATE` | `true` or `false`. The default value is `false`. When set to `true`, removes the [`WWW-Authenticate`](https://developer.mozilla.org/docs/Web/HTTP/Headers/WWW-Authenticate) HTTP response header from module-generated HTTP 401 responses. This application setting is intend for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `identityProviders.azureActiveDirectory.login.disableWwwAuthenticate` configuration value. |
-| `WEBSITE_AUTH_STATE_DIRECTORY` | A local file system directory path where tokens are stored when the file-based token store is enabled. The default value is `%HOME%\Data\.auth`. This application setting is intend for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.tokenStore.fileSystem.directory` configuration value. |
+| `WEBSITE_AUTH_DISABLE_WWWAUTHENTICATE` | `true` or `false`. The default value is `false`. When set to `true`, removes the [`WWW-Authenticate`](https://developer.mozilla.org/docs/Web/HTTP/Headers/WWW-Authenticate) HTTP response header from module-generated HTTP 401 responses. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `identityProviders.azureActiveDirectory.login.disableWwwAuthenticate` configuration value. |
+| `WEBSITE_AUTH_STATE_DIRECTORY` | A local file system directory path where tokens are stored when the file-based token store is enabled. The default value is `%HOME%\Data\.auth`. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.tokenStore.fileSystem.directory` configuration value. |
| `WEBSITE_AUTH_TOKEN_CONTAINER_SASURL` | A fully qualified blob container URL. Instructs the auth module to store and load all encrypted tokens to the specified blob storage container instead of using the default local file system. |
-| `WEBSITE_AUTH_TOKEN_REFRESH_HOURS` | Any positive decimal number. The default value is `72` (hours). This setting controls the amount of time after a session token expires that the `/.auth/refresh` API can be used to refresh it. It is intended primarily for use with Azure Mobile Apps, which rely on session tokens. Refresh attempts after this period will fail and end-users will be required to sign-in again. This application setting is intend for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.tokenStore.tokenRefreshExtensionHours` configuration value. |
+| `WEBSITE_AUTH_TOKEN_REFRESH_HOURS` | Any positive decimal number. The default value is `72` (hours). This setting controls the amount of time after a session token expires that the `/.auth/refresh` API can be used to refresh it. It's intended primarily for use with Azure Mobile Apps, which rely on session tokens. Refresh attempts after this period will fail and end users will be required to sign in again. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.tokenStore.tokenRefreshExtensionHours` configuration value. |
| `WEBSITE_AUTH_TRACE_LEVEL`| Controls the verbosity of authentication traces written to [Application Logging](troubleshoot-diagnostic-logs.md#enable-application-logging-windows). Valid values are `Off`, `Error`, `Warning`, `Information`, and `Verbose`. The default value is `Verbose`. |
-| `WEBSITE_AUTH_VALIDATE_NONCE`| `true` or `false`. The default value is `true`. This value should never be set to `false` except when temporarily debugging [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) validation failures that occur during interactive logins. This application setting is intend for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.validateNonce` configuration value. |
+| `WEBSITE_AUTH_VALIDATE_NONCE`| `true` or `false`. The default value is `true`. This value should never be set to `false` except when temporarily debugging [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) validation failures that occur during interactive logins. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.validateNonce` configuration value. |
| `WEBSITE_AUTH_V2_CONFIG_JSON` | This environment variable is populated automatically by the Azure App Service platform and is used to configure the integrated authentication module. The value of this environment variable corresponds to the V2 (non-classic) authentication configuration for the current app in Azure Resource Manager. It's not intended to be configured explicitly. | | `WEBSITE_AUTH_ENABLED` | Read-only. Injected into a Windows or Linux app to indicate whether App Service authentication is enabled. |
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scenario-secure-app-access-storage.md
You need to grant your web app access to the storage account before you can crea
In the [Azure portal](https://portal.azure.com), go into your storage account to grant your web app access. Select **Access control (IAM)** in the left pane, and then select **Role assignments**. You'll see a list of who has access to the storage account. Now you want to add a role assignment to a robot, the app service that needs access to the storage account. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-Assign the **Storage Blob Data Contributor** role to the **App Service** at subscription scope. For detailed steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+Assign the **Storage Blob Data Contributor** role to the **App Service** at subscription scope. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
Your web app now has access to your storage account.
In this tutorial, you learned how to:
> * Access storage from a web app by using managed identities. > [!div class="nextstepaction"]
-> [App Service accesses Microsoft Graph on behalf of the user](scenario-secure-app-access-microsoft-graph-as-user.md)
+> [App Service accesses Microsoft Graph on behalf of the user](scenario-secure-app-access-microsoft-graph-as-user.md)
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-python-postgresql-app.md
You can also use the [Azure portal version of this tutorial](/azure/developer/py
::: zone pivot="postgres-flexible-server"
-This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an [Azure Database for PostgreSQL Flexible Server (Preview)](/azure/postgresql/flexible-server/) database. If you cannot use PostgreSQL Flexible Server (Preview), then select the Single Server option above.
+This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an [Azure Database for PostgreSQL Flexible Server (Preview)](../postgresql/flexible-server/index.yml) database. If you cannot use PostgreSQL Flexible Server (Preview), then select the Single Server option above.
In this tutorial, you use the Azure CLI to complete the following tasks:
Learn how to map a custom DNS name to your app:
Learn how App Service runs a Python app: > [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](configure-language-python.md)
app-service Webjobs Sdk Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-sdk-get-started.md
The `QueueTrigger` attribute tells the runtime to call this function when a new
When a message is added to a queue named `queue`, the function executes and the `message` string is written to the logs. The queue being monitored is in the default Azure Storage account, which you create next.
-The `message` parameter doesn't have to be a string. You can also bind to a JSON object, a byte array, or a [CloudQueueMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage) object. [See Queue trigger usage](/azure/azure-functions/functions-bindings-storage-queue-trigger?tabs=csharp#usage). Each binding type (such as queues, blobs, or tables) has a different set of parameter types that you can bind to.
+The `message` parameter doesn't have to be a string. You can also bind to a JSON object, a byte array, or a [CloudQueueMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage) object. [See Queue trigger usage](../azure-functions/functions-bindings-storage-queue-trigger.md?tabs=csharp#usage). Each binding type (such as queues, blobs, or tables) has a different set of parameter types that you can bind to.
### Create an Azure storage account
Output bindings simplify code that writes data. This example modifies the previo
This tutorial showed you how to create, run, and deploy a WebJobs SDK 3.x project. > [!div class="nextstepaction"]
-> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md)
+> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md)
application-gateway Monitor Application Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/monitor-application-gateway-reference.md
See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for d
<!-- REQUIRED if you support Metrics. If you don't, keep the section but call that out. Some services are only onboarded to logs. <!-- Please keep headings in this order -->
-<!-- OPTION 2 - Link to the metrics as above, but work in extra information not found in the automated metric-supported reference article. NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the metrics-supported link. For highly customized example, see [CosmosDB](https://docs.microsoft.com/azure/cosmos-db/monitor-cosmos-db-reference#metrics). They even regroup the metrics into usage type vs. resource provider and type.
+<!-- OPTION 2 - Link to the metrics as above, but work in extra information not found in the automated metric-supported reference article. NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the metrics-supported link. For highly customized example, see [CosmosDB](../cosmos-db/monitor-cosmos-db-reference.md#metrics). They even regroup the metrics into usage type vs. resource provider and type.
--> <!-- Example format. Mimic the setup of metrics supported, but add extra information --> ### Application Gateway v2 metrics
-Resource Provider and Type: [Microsoft.Compute/applicationGateways](/azure/azure-monitor/platform/metrics-supported#microsoftnetworkapplicationgateways)
+Resource Provider and Type: [Microsoft.Compute/applicationGateways](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkapplicationgateways)
#### Timing metrics Application Gateway provides several built-in timing metrics related to the request and response, which are all measured in milliseconds.
For more information, see a list of [all platform metrics supported in Azure Mon
<!-- REQUIRED. Please keep headings in this order --> <!-- If you have metrics with dimensions, outline it here. If you have no dimensions, say so. Questions email azmondocs@microsoft.com -->
-For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
<!-- See https://docs.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
This section lists the types of resource logs you can collect for Azure Applicat
<!-- List all the resource log types you can have and what they are for -->
-For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
> [!NOTE] > The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](#metrics) for performance data.
For more information, see [Back-end health and diagnostic logs for Application G
### Application Gateway
-Resource Provider and Type: [Microsoft.Network/applicationGateways](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkapplicationgateways)
+Resource Provider and Type: [Microsoft.Network/applicationGateways](../azure-monitor/essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways)
| Category | Display Name | Information| |:|:-||
sslEnabled_s | Does the client request have SSL enabled|
<!-- replace below with the proper link to your main monitoring service article --> - See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
application-gateway Monitor Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/monitor-application-gateway.md
Keep the headings in this order.
When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure Application Gateway. Azure Application Gateway uses [Azure Monitor](/azure/azure-monitor/overview). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+This article describes the monitoring data generated by Azure Application Gateway. Azure Application Gateway uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
<!-- Optional diagram showing monitoring for your service. If you need help creating one, contact robb@microsoft.com -->
Azure Monitor Network Insights provides a comprehensive view of health and metri
## Monitoring data <!-- REQUIRED. Please keep headings in this order -->
-Azure Application Gateway collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data-from-Azure-resources).
+Azure Application Gateway collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
See [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md) for detailed information on the metrics and logs metrics created by Azure Application Gateway.
Resource Logs are not collected and stored until you create a diagnostic setting
<!-- Include any additional information on collecting logs. The number of things that diagnostics settings control is expanding -->
-See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Application Gateway are listed in [Azure Application Gateway monitoring data reference](monitor-application-gateway-reference.md#resource-logs).
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Application Gateway are listed in [Azure Application Gateway monitoring data reference](monitor-application-gateway-reference.md#resource-logs).
<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://docs.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. -->
The metrics and logs you can collect are discussed in the following sections.
<!-- REQUIRED. Please keep headings in this order If you don't support metrics, say so. Some services may be only onboarded to logs -->
-You can analyze metrics for Azure Application Gateway with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/platform/metrics-getting-started) for details on using this tool.
+You can analyze metrics for Azure Application Gateway with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
<!-- Point to the list of metrics available in your monitor-service-reference article. --> For a list of the platform metrics collected for Azure Application Gateway, see [Monitoring Application Gateway data reference metrics](monitor-application-gateway-reference.md#metrics).
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about how to use metrics explorer specifically for your service. Remember that the UI is subject to change quite often so you will need to maintain these screenshots yourself if you add them in. -->
Data in Azure Monitor Logs is stored in tables where each table has its own set
All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Common and service-specific schema for Azure Resource Logs](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema).
-The [Activity log](/azure/azure-monitor/platform/activity-log) is a platform login Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
For a list of the types of resource logs collected for Azure Application Gateway, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md#resource-logs).
AzureDiagnostics
This information is the BIGGEST request we get in Azure Monitor so do not avoid it long term. People don't know what to monitor for best results. Be prescriptive -->
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/platform/alerts-metric-overview), [logs](/azure/azure-monitor/platform/alerts-unified-log), and the [activity log](/azure/azure-monitor/platform/activity-log-alerts). Different types of alerts have benefits and drawbacks
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
<!-- only include next line if applications run on your service and work with App Insights. -->
-If you are creating or running an application which use Application Gateway [Azure Monitor Application Insights](/azure/azure-monitor/overview#application-insights) may offer additional types of alerts.
+If you are creating or running an application that uses Application Gateway, [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts.
<!-- end --> The following table lists common and recommended alert rules for Application Gateway.
The following tables lists common and recommended alert rules for Application Ga
- See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway. -- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
applied-ai-services Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/cost-management.md
+
+ Title: Cost management with Azure Metrics Advisor
+
+description: Learn about cost management and pricing for Azure Metrics Advisor
+++++ Last updated : 07/06/2021+++
+# Azure Metrics Advisor cost management
+
+Azure Metrics Advisor monitors the performance of your organization's growth engines, including sales revenue and manufacturing operations. Quickly identify and fix problems through a powerful combination of monitoring in near-real time, adapting models to your scenario, and offering granular analysis with diagnostics and alerting. You will only be charged for the time series that are analyzed by the service. There's no up-front commitment or minimum fee.
+
+> [!NOTE]
+> This article discusses how pricing is calculated to assist you with planning and cost management when using Azure Metrics Advisor. The prices in this article do not reflect actual prices and are for example purposes only. For the latest pricing information, please refer to the [official pricing page for Metrics Advisor](https://azure.microsoft.com/pricing/details/metrics-advisor/).
+
+## Key points about cost management and pricing
+
+- You will be charged for the number of **distinct time series** analyzed during a month. Even if only a single data point in a time series is analyzed, that time series still counts toward the total.
+- The number of distinct time series is **irrespective** of its granularity. An hourly time series and a daily time series will be charged at the same price.
+- You will be charged based on the tiered pricing structure listed below. The first day of the next month starts a new statistics window.
+- The more time series you onboard to the service for analysis, the lower the price you pay for each time series.
+
+**Again, keep in mind that the prices below are for example purposes only**. For the latest pricing information, consult the [official pricing page for Metrics Advisor](https://azure.microsoft.com/pricing/details/metrics-advisor/).
+
+| Analyzed time series /month| $ per time series |
+|--|--|
+| Free: first 25 time series | $- |
+| 26 time series - 1k time series | $0.75 |
+| 1k time series - 5k time series | $0.50 |
+| 5k time series - 20k time series | $0.25|
+| 20k time series - 50k time series| $0.10|
+| >50k time series | $0.05 |
++
+To help you get a basic understanding of Metrics Advisor and start exploring the service, the first 25 time series you analyze are included for free.
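The tiered structure composes as a running total across tiers. The following sketch captures the example prices above in a small, hypothetical helper (`monthly_charge` and the `TIERS` constant are illustrative, not part of any Metrics Advisor API, and the prices are the example values, not actual prices):

```python
# Example tier sizes and per-series prices from the table above (illustrative only).
TIERS = [
    (25, 0.00),      # free: first 25 series
    (975, 0.75),     # 26 - 1,000
    (4_000, 0.50),   # 1,001 - 5,000
    (15_000, 0.25),  # 5,001 - 20,000
    (30_000, 0.10),  # 20,001 - 50,000
]
OVERFLOW_PRICE = 0.05  # every series beyond 50,000

def monthly_charge(series_count: int) -> float:
    """Charge for the distinct time series analyzed in one month."""
    total, remaining = 0.0, series_count
    for tier_size, price in TIERS:
        in_tier = min(remaining, tier_size)
        total += in_tier * price
        remaining -= in_tier
    total += remaining * OVERFLOW_PRICE  # anything left falls in the top tier
    return round(total, 2)
```

With the example prices, `monthly_charge(55)` reproduces the $22.50 of Example 1 and `monthly_charge(66_000)` the $10,281.25 of Example 3 below.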
+
+## Pricing examples
+
+### Example 1
+<!-- introduce statistic window-->
+
+In month 1, a customer onboards a data feed with 25 time series during the first week. In the second week, they onboard another data feed with 30 time series. In the third week, they delete the 30 time series that were onboarded during the second week. That means **55** distinct time series were analyzed in month 1. The customer is charged for **30** of them (the 25 time series in the free tier are excluded), which falls under tier 1. The monthly cost is: 30 * $0.75 = **$22.50**.
+
+| Volume tier | $ per time series | $ per month |
+| | -- | -- |
+| First 30 (55-25) time series | $0.75 | $22.50 |
+| **Total = 30 time series** | | **$22.50 per month** |
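The month-1 accounting above can be sketched as set arithmetic: the bill counts *distinct* series analyzed at any point in the month, so the week-3 deletion doesn't reduce the count. The series identifiers here are made up purely for illustration:

```python
# Hypothetical series identifiers; only the counts matter.
week1_feed = {f"feed1/series{i}" for i in range(25)}
week2_feed = {f"feed2/series{i}" for i in range(30)}

# Distinct series analyzed during the month. Deleting week2_feed in week 3
# does not remove them from the bill, because they were already analyzed.
distinct = week1_feed | week2_feed
billable = max(0, len(distinct) - 25)   # first 25 are free

assert len(distinct) == 55
assert billable == 30
assert round(billable * 0.75, 2) == 22.5
```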
+
+In month 2, the customer doesn't onboard or delete any time series, so 25 distinct time series are analyzed. Because they all fall within the free tier, no cost is incurred.
+
+### Example 2
+<!-- introduce how time series is calculated-->
+
+A business planner needs to track the company's revenue as an indicator of business health. Because revenue usually follows a week-by-week pattern, the customer onboards the metric into Metrics Advisor to analyze it for anomalies. Metrics Advisor learns the pattern from historical data and performs detection on subsequent data points. A sudden drop might be detected as an anomaly, which may indicate an underlying issue, like a service outage or a promotional offer not working as expected. An unexpected spike might also be detected as an anomaly, which may indicate a highly successful marketing campaign or a significant customer win.
+
+The metric is analyzed on **100 product categories** and **10 regions**, then the number of distinct time series being analyzed is calculated as:
+
+```
+1(Revenue) * 100 product categories * 10 regions = 1,000 analyzed time series
+```
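The count above generalizes to any number of dimensions as a product of cardinalities. A small illustrative helper (not part of the service):

```python
from math import prod

def analyzed_series(metric_count: int, *dimension_sizes: int) -> int:
    """Distinct time series = number of metrics x cardinality of each dimension."""
    return metric_count * prod(dimension_sizes)

# Example 2: 1 metric (Revenue), 100 product categories, 10 regions
series = analyzed_series(1, 100, 10)  # 1,000 analyzed time series
```

Example 3 below follows the same formula with an extra channel dimension: `analyzed_series(3, 110, 10, 20)` gives 66,000.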
+
+Based on the tiered pricing model described above, 1,000 analyzed time series per month is charged at (1,000 - 25) * $0.75 = **$731.25**.
+
+| Volume tier | $ per time series | $ per month |
+| | -- | -- |
+| First 975 (1,000-25) time series | $0.75 | $731.25 |
+| **Total = 975 time series** | | **$731.25 per month** |
+
+### Example 3
+<!-- introduce cost for multiple metrics and -->
+
+After validating detection results on the revenue metric, the customer would like to onboard two more metrics to be analyzed: one is cost, and the other is the DAU (daily active users) of their website. They would also like to add a new dimension with **20 channels**. Within the month, 10 out of the 100 product categories are discontinued after the first week, and are not analyzed further. In addition, 10 new product categories are introduced in the third week of the month, and the corresponding time series are analyzed for half of the month. The number of distinct time series being analyzed is then calculated as:
+
+```
+3(Revenue, cost and DAU) * 110 product categories * 10 regions * 20 channels = 66,000 analyzed time series
+```
+
+Based on the tiered pricing model described above, 66,000 analyzed time series per month fall into tier 5 and will be charged at **$10,281.25**.
+
+| Volume tier | $ per time series | $ per month |
+| --- | --- | --- |
+| First 975 (1,000-25) time series | $0.75 | $731.25 |
+| Next 4,000 time series | $0.50 | $2,000 |
+| Next 15,000 time series | $0.25 | $3,750 |
+| Next 30,000 time series | $0.10 | $3,000 |
+| Next 16,000 time series | $0.05 | $800 |
+| **Total = 65,975 time series** | | **$10,281.25 per month** |
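+
+The tier arithmetic in both examples can be reproduced with a short script. This is an illustrative sketch only (the tier sizes and rates are taken from the tables above, with the first 25 time series free; it is not official pricing logic, and tiers beyond those shown are not covered):
+
```python
def monthly_charge(series_count):
    """Estimate the monthly charge for a number of analyzed time series,
    using the tier boundaries shown in the tables above (first 25 are free)."""
    tiers = [          # (tier size, $ per time series)
        (25, 0.00),    # free allowance
        (975, 0.75),
        (4000, 0.50),
        (15000, 0.25),
        (30000, 0.10),
        (16000, 0.05),
    ]
    remaining, total = series_count, 0.0
    for size, rate in tiers:
        billed = min(remaining, size)  # how many series fall into this tier
        total += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return round(total, 2)

print(monthly_charge(1000))   # Example 2 → 731.25
print(monthly_charge(66000))  # Example 3 → 10281.25
```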
+
+## Next steps
+
+- [Manage your data feeds](how-tos/manage-data-feeds.md)
+- [Configurations for different data sources](data-feeds-from-different-sources.md)
+- [Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md)
applied-ai-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/data-feeds-from-different-sources.md
+
+Title: Connect different data sources to Metrics Advisor
+
+Description: Add different data feeds to Metrics Advisor
+
+Last updated: 05/26/2021
+# How-to: Connect different data sources
+
+Use this article to find the settings and requirements for connecting different types of data sources to Metrics Advisor. Make sure to read how to [Onboard your data](how-tos/onboard-your-data.md) to learn about the key concepts for using your data with Metrics Advisor.
+
+## Supported authentication types
+
+| Authentication types | Description |
+| --- | --- |
+|**Basic** | You need to provide basic parameters for accessing data sources. For example, a connection string or a password. Data feed admins can view these credentials. |
+| **Azure Managed Identity** | [Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for Azure resources is a feature of Azure Active Directory. It provides Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication.|
+| **Azure SQL Connection String**| Store your Azure SQL connection string as a **credential entity** in Metrics Advisor, and use it directly each time you onboard metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know the credential details. |
+| **Data Lake Gen2 Shared Key**| Store your data lake account key as a **credential entity** in Metrics Advisor, and use it directly each time you onboard metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know the credential details.|
+| **Service principal**| Store your [Service Principal](../../active-directory/develop/app-objects-and-service-principals.md) as a **credential entity** in Metrics Advisor, and use it directly each time you onboard metrics data. Only admins of the credential entity can view the credentials, but authorized viewers can create data feeds without needing to know the credential details.|
+| **Service principal from key vault**| Store your [Service Principal in a Key Vault](/azure-stack/user/azure-stack-key-vault-store-credentials) as a **credential entity** in Metrics Advisor, and use it directly each time you onboard metrics data. Only admins of the credential entity can view the credentials, but authorized viewers can create data feeds without needing to know the credential details. |
+## Data sources supported and corresponding authentication types
+
+| Data sources | Authentication Types |
+| --- | --- |
+|[**Azure Application Insights**](#appinsights) | Basic |
+|[**Azure Blob Storage (JSON)**](#blob) | Basic<br>ManagedIdentity |
+|[**Azure Cosmos DB (SQL)**](#cosmosdb) | Basic |
+|[**Azure Data Explorer (Kusto)**](#kusto) | Basic<br>Managed Identity<br>Service principal<br>Service principal from key vault |
+|[**Azure Data Lake Storage Gen2**](#adl) | Basic<br>Data Lake Gen2 Shared Key<br>Service principal<br>Service principal from key vault |
+|[**Azure Event Hubs**](#eventhubs) | Basic |
+|[**Azure Log Analytics**](#log) | Basic<br>Service principal<br>Service principal from key vault |
+|[**Azure SQL Database / SQL Server**](#sql) | Basic<br>Managed Identity<br>Service principal<br>Service principal from key vault<br>Azure SQL Connection String |
+|[**Azure Table Storage**](#table) | Basic |
+|[**InfluxDB (InfluxQL)**](#influxdb) | Basic |
+|[**MongoDB**](#mongodb) | Basic |
+|[**MySQL**](#mysql) | Basic |
+|[**PostgreSQL**](#pgsql) | Basic|
+|[**Local files(CSV)**](#csv) | Basic|
+
+The following sections specify the parameters required for all authentication types within different data source scenarios.
+
+## <span id="appinsights">Azure Application Insights</span>
+
+* **Application ID**: This is used to identify this application when using the Application Insights API. To get the Application ID, take the following steps:
+
+    1. From your Application Insights resource, click **API Access**.
+
+ ![Get application ID from your Application Insights resource](media/portal-app-insights-app-id.png)
+
+ 2. Copy the Application ID generated into **Application ID** field in Metrics Advisor.
+
+* **API Key**: API keys are used by applications outside the browser to access this resource. To get the API key, take the following steps:
+
+ 1. From the Application Insights resource, click **API Access**.
+
+ 2. Click **Create API Key**.
+
+ 3. Enter a short description, check the **Read telemetry** option, and click the **Generate key** button.
+
+ ![Get API key in Azure portal](media/portal-app-insights-app-id-api-key.png)
+
+ > [!WARNING]
+ > Copy this **API key** and save it because this key will never be shown to you again. If you lose this key, you have to create a new one.
+
+ 4. Copy the API key to the **API key** field in Metrics Advisor.
+
+* **Query**: Azure Application Insights logs are built on Azure Data Explorer, and Azure Monitor log queries use a version of the same Kusto query language. The [Kusto query language documentation](/azure/data-explorer/kusto/query) has all of the details for the language and should be your primary resource for writing a query against Application Insights.
+
+ Sample query:
+
+ ``` Kusto
+ [TableName] | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd);
+ ```
+ You can also refer to the [Tutorial: Write a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
+## <span id="blob">Azure Blob Storage (JSON)</span>
+
+* **Connection String**: There are two authentication types for Azure Blob Storage (JSON): **Basic** and **Managed Identity**.
+
+    * **Basic**: See [Configure Azure Storage connection strings](../../storage/common/storage-configure-connection-string.md#configure-a-connection-string-for-an-azure-storage-account) for information on retrieving this string. You can also visit the Azure portal for your Azure Blob Storage resource, and find the connection string directly in the **Settings > Access keys** section.
+
+ * **Managed Identity**: Managed identities for Azure resources can authorize access to blob and queue data using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services.
+
+ You can create a managed identity in Azure portal for your Azure Blob Storage resource, and choose **role assignments** in **Access Control(IAM)** section, then click **add** to create. A suggested role type is: Storage Blob Data Reader. For more details, refer to [Use managed identity to access Azure Storage](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access-1).
+
+ ![MI blob](media/managed-identity-blob.png)
+
+
+* **Container**: Metrics Advisor expects time series data stored as Blob files (one Blob per timestamp) under a single container. This is the container name field.
+
+* **Blob Template**: Metrics Advisor uses a path template to find the JSON files in your Blob storage. This is an example of a Blob file template: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`. `%Y/%m` is the path; if you have `%d` in your path, you can add it after `%m`. If your JSON file is named by date, you could also use `%Y-%m-%d-%h-%M.json`.
+
+ The following parameters are supported:
+
+ * `%Y` is the year formatted as `yyyy`
+ * `%m` is the month formatted as `MM`
+ * `%d` is the day formatted as `dd`
+ * `%h` is the hour formatted as `HH`
+ * `%M` is the minute formatted as `mm`
+
+ For example, in the following dataset, the blob template should be "%Y/%m/%d/00/JsonFormatV2.json".
+
+ ![blob template](media/blob-template.png)
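+
+    The parameter substitution above can be sketched in a few lines of Python. `expand_template` is an illustrative helper (not part of Metrics Advisor) that maps each template parameter to its `strftime` equivalent:
+
```python
from datetime import datetime, timezone

# Mapping of Metrics Advisor template parameters to strftime directives
# (only %h differs from its strftime counterpart %H).
PARAM_MAP = {"%Y": "%Y", "%m": "%m", "%d": "%d", "%h": "%H", "%M": "%M"}

def expand_template(template, ts):
    """Expand a blob path template for a given UTC timestamp."""
    for param, directive in PARAM_MAP.items():
        template = template.replace(param, ts.strftime(directive))
    return template

ts = datetime(2018, 1, 1, 0, 0, tzinfo=timezone.utc)
print(expand_template("%Y/%m/FileName_%Y-%m-%d-%h-%M.json", ts))
# → 2018/01/FileName_2018-01-01-00-00.json
```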
+
+
+* **JSON format version**: Defines the data schema in the JSON files. Metrics Advisor currently supports two versions; choose one to fill in the field:
+
+ * **v1** (Default value)
+
+ Only the metrics *Name* and *Value* are accepted. For example:
+
+ ``` JSON
+ {"count":11, "revenue":1.23}
+ ```
+
+ * **v2**
+
+ The metrics *Dimensions* and *timestamp* are also accepted. For example:
+
+ ``` JSON
+ [
+ {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23},
+ {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
+ ]
+ ```
+
+ Only one timestamp is allowed per JSON file.
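+
+    Because only one timestamp is allowed per JSON file, a common preprocessing step is to group records by timestamp and serialize one JSON payload per group. A minimal sketch, using made-up records in the **v2** schema shown above:
+
```python
import json
from collections import defaultdict

# Illustrative records in the v2 schema (values are made up).
records = [
    {"date": "2018-01-01T00:00:00Z", "market": "en-us", "count": 11, "revenue": 1.23},
    {"date": "2018-01-01T00:00:00Z", "market": "zh-cn", "count": 22, "revenue": 4.56},
    {"date": "2018-01-02T00:00:00Z", "market": "en-us", "count": 15, "revenue": 2.34},
]

# Group records by timestamp: each group becomes the body of one blob file.
by_timestamp = defaultdict(list)
for record in records:
    by_timestamp[record["date"]].append(record)

for ts, group in sorted(by_timestamp.items()):
    print(ts, json.dumps(group))
```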
+
+## <span id="cosmosdb">Azure Cosmos DB (SQL)</span>
+
+* **Connection String**: The connection string to access your Azure Cosmos DB. This can be found in the Cosmos DB resource in Azure portal, in **Keys**. Also, you can find more information in [Secure access to data in Azure Cosmos DB](../../cosmos-db/secure-access-to-data.md).
+* **Database**: The database to query against. This can be found in the **Browse** page under **Containers** section in the Azure portal.
+* **Collection ID**: The collection ID to query against. This can be found in the **Browse** page under **Containers** section in the Azure portal.
+* **SQL Query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted: `yyyy-MM-ddTHH:mm:ssZ`.
+
+ Sample query:
+
+ ```SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
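+
+    The `yyyy-MM-ddTHH:mm:ssZ` format that `@IntervalStart` and `@IntervalEnd` use corresponds to the `strftime` pattern below. `interval_bounds` is an illustrative helper (Metrics Advisor substitutes these values for you at ingestion time; this only shows the expected formatting):
+
```python
from datetime import datetime, timedelta, timezone

def interval_bounds(start, granularity_minutes):
    """Format an ingestion interval the way @IntervalStart and
    @IntervalEnd are substituted: yyyy-MM-ddTHH:mm:ssZ."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    end = start + timedelta(minutes=granularity_minutes)
    return start.strftime(fmt), end.strftime(fmt)

start = datetime(2018, 1, 1, 0, 0, 0, tzinfo=timezone.utc)
print(interval_bounds(start, 60))
# → ('2018-01-01T00:00:00Z', '2018-01-01T01:00:00Z')
```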
+
+## <span id="kusto">Azure Data Explorer (Kusto)</span>
+
+* **Connection String**: There are four authentication types for Azure Data Explorer (Kusto): **Basic**, **Service Principal**, **Service Principal From KeyVault**, and **Managed Identity**. The data source in the connection string should be in URI format (starting with 'https'); you can find the URI in the Azure portal.
+
+    * **Basic**: Metrics Advisor supports accessing Azure Data Explorer (Kusto) by using Azure AD application authentication. You need to create and register an Azure AD application, and then authorize it to access an Azure Data Explorer database. For details, see the [Create an AAD app registration in Azure Data Explorer](/azure/data-explorer/provision-azure-ad-app) documentation.
+ Here's an example of connection string:
+
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>;AAD Federated Security=True;Application Client ID=<Application Client ID>;Application Key=<Application Key>;Authority ID=<Tenant ID>
+ ```
+
+    * **Service Principal**: A service principal is a concrete instance created from the application object and inherits certain properties from that application object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access. There are three steps to use a service principal in Metrics Advisor.
+
+ **1. Create Azure AD application registration.** See first part in [Create an AAD app registration in Azure Data Explorer](/azure/data-explorer/provision-azure-ad-app).
+
+ **2. Manage Azure Data Explorer database permissions.** See [Manage Azure Data Explorer database permissions](/azure/data-explorer/manage-database-permissions) to know about Service Principal and manage permissions.
+
+ **3. Create a credential entity in Metrics Advisor.** See how to [create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
+
+ Here's an example of connection string:
+
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>
+ ```
+
+ * **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) to follow detailed procedure to set service principal from key vault.
+ Here's an example of connection string:
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>
+ ```
+
+ * **Managed Identity**: Managed identity for Azure resources can authorize access to blob and queue data using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services. By using managed identity for Azure resources together with Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. Learn how to [authorize with a managed identity](../../storage/common/storage-auth-aad-msi.md#enable-managed-identities-on-a-vm).
+
+ You can create a managed identity in Azure portal for your Azure Data Explorer (Kusto), choose **Permissions** section, and click **add** to create. The suggested role type is: admin / viewer.
+
+ ![MI kusto](media/managed-identity-kusto.png)
+
+ Here's an example of connection string:
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>
+ ```
+
+ <!-- For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples. -->
+
+* **Query**: See [Kusto Query Language](/azure/data-explorer/kusto/query) to get and formulate data into multi-dimensional time series data. You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted: `yyyy-MM-ddTHH:mm:ssZ`.
+
+ Sample query:
+
+ ``` Kusto
+ [TableName] | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd);
+ ```
+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
+## <span id="adl">Azure Data Lake Storage Gen2</span>
+
+* **Account Name**: There are four authentication types for Azure Data Lake Storage Gen2: **Basic**, **Azure Data Lake Storage Gen2 Shared Key**, **Service Principal**, and **Service Principal From KeyVault**.
+
+ * **Basic**: The **Account Name** of your Azure Data Lake Storage Gen2. This can be found in your Azure Storage Account (Azure Data Lake Storage Gen2) resource in **Access keys**.
+
+ * **Azure Data Lake Storage Gen2 Shared Key**: First, you should specify the account key to access your Azure Data Lake Storage Gen2 (the same as Account Key in *Basic* authentication type). This could be found in Azure Storage Account (Azure Data Lake Storage Gen2) resource in **Access keys** setting. Then you should [create a credential entity](how-tos/credential-entity.md) for *Azure Data Lake Storage Gen2 Shared Key* type and fill in the account key.
+
+ The account name is the same as *Basic* authentication type.
+
+ * **Service Principal**: A service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+
+ The account name is the same as **Basic** authentication type.
+
+ **Step 1:** Create and register an Azure AD application and then authorize it to access database, see detail in [Create an AAD app registration](/azure/data-explorer/provision-azure-ad-app) documentation.
+
+ **Step 2:** Assign roles.
+
+ 1. In the Azure portal, go to the **Storage accounts** service.
+
+ 2. Select the ADLS Gen2 account to use with this application registration.
+
+ 3. Click **Access Control (IAM)**.
+
+ 4. Click **+ Add** and select **Add role assignment** from the dropdown menu.
+
+ 5. Set the **Select** field to the Azure AD application name and set role to **Storage Blob Data Contributor**. Click **Save**.
+
+ ![lake-service-principals](media/datafeeds/adls-gen-2-app-reg-assign-roles.png)
+
+ **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
+
+ * **Service Principal From Key Vault** authentication type: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) to follow detailed procedure to set service principal from key vault. The account name is the same as *Basic* authentication type.
+
+* **Account Key** (only required for *Basic*): Specify the account key to access your Azure Data Lake Storage Gen2. This can be found in the Azure Storage Account (Azure Data Lake Storage Gen2) resource in the **Access keys** setting.
+
+* **File System Name (Container)**: Metrics Advisor expects your time series data stored as Blob files (one Blob per timestamp) under a single container. This is the container name field. To find it, go to your Azure storage account (Azure Data Lake Storage Gen2) instance and click **Containers** in the **Data Lake Storage** section.
+
+* **Directory Template**: This is the directory template of the Blob file. The following parameters are supported:
+
+ * `%Y` is the year formatted as `yyyy`
+ * `%m` is the month formatted as `MM`
+ * `%d` is the day formatted as `dd`
+ * `%h` is the hour formatted as `HH`
+ * `%M` is the minute formatted as `mm`
+
+ Query sample for a daily metric: `%Y/%m/%d`.
+
+ Query sample for an hourly metric: `%Y/%m/%d/%h`.
+
+* **File Template**:
+    Metrics Advisor uses a path template to find the JSON files in your Blob storage. This is an example of a Blob file template: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`. `%Y/%m` is the path; if you have `%d` in your path, you can add it after `%m`.
+
+ The following parameters are supported:
+
+ * `%Y` is the year formatted as `yyyy`
+ * `%m` is the month formatted as `MM`
+ * `%d` is the day formatted as `dd`
+ * `%h` is the hour formatted as `HH`
+ * `%M` is the minute formatted as `mm`
+
+ Currently Metrics Advisor supports the data schema in the JSON files as follows. For example:
+
+ ``` JSON
+ [
+ {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23},
+ {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
+ ]
+ ```
+
+## <span id="eventhubs">Azure Event Hubs</span>
+
+* **Limitations**: There are some limitations with Metrics Advisor Event Hub integration.
+
+ * Metrics Advisor Event Hubs integration doesn't currently support more than 3 active data feeds in one Metrics Advisor instance in public preview.
+ * Metrics Advisor will always start consuming messages from the latest offset, including when re-activating a paused data feed.
+
+ * Messages during the data feed pause period will be lost.
+    * The data feed 'ingestion start time' is set to the current UTC timestamp automatically when the data feed is created, and is for reference purposes only.
+
+    * Only one data feed can be used per consumer group. To reuse a consumer group from another deleted data feed, you need to wait at least 10 minutes after deletion.
+ * The connection string and consumer group cannot be modified after the data feed is created.
+ * About messages in Event Hubs: Only JSON is supported, and the JSON values cannot be a nested JSON object. The top-level element can be a JSON object or a JSON array.
+
+    Valid messages are as follows:
+
+    Single JSON object:
+
+    ``` JSON
+ {
+ "metric_1": 234,
+ "metric_2": 344,
+ "dimension_1": "name_1",
+ "dimension_2": "name_2"
+ }
+ ```
+
+    JSON array:
+
+    ``` JSON
+ [
+ {
+ "timestamp": "2020-12-12T12:00:00", "temperature": 12.4,
+ "location": "outdoor"
+ },
+ {
+ "timestamp": "2020-12-12T12:00:00", "temperature": 24.8,
+ "location": "indoor"
+ }
+ ]
+ ```
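+
+    The message rules above (JSON only, top-level object or array of objects, no nested JSON values) can be checked before sending. This validator is an illustrative sketch, not part of any Event Hubs SDK:
+
```python
import json

def is_valid_message(raw):
    """Check an Event Hubs message against the rules above: JSON only;
    top-level object or array of objects; values must be flat scalars
    (no nested objects or arrays)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    items = data if isinstance(data, list) else [data]
    return all(
        isinstance(item, dict)
        and not any(isinstance(v, (dict, list)) for v in item.values())
        for item in items
    )

print(is_valid_message('{"metric_1": 234, "dimension_1": "name_1"}'))  # → True
print(is_valid_message('{"metric_1": {"nested": 1}}'))                 # → False
```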
+* **Connection String**: Navigate to the **Event Hubs Instance** first. Then add a new policy or choose an existing Shared access policy. Copy the connection string in the pop-up panel.
+ ![eventhubs](media/datafeeds/entities-eventhubs.jpg)
+
+ ![shared access policies](media/datafeeds/shared-access-policies.jpg)
+
+ Here's an example of a connection string:
+ ```
+    Endpoint=<Server>;SharedAccessKeyName=<SharedAccessKeyName>;SharedAccessKey=<SharedAccessKey>;EntityPath=<EntityPath>
+ ```
+
+* **Consumer Group**: A [consumer group](../../event-hubs/event-hubs-features.md#consumer-groups) is a view (state, position, or offset) of an entire event hub.
+This can be found on the "Consumer Groups" menu of an Azure Event Hubs instance. A consumer group can only serve one data feed; otherwise, onboarding and ingestion will fail. It is recommended that you create a new consumer group for each data feed.
+* **Timestamp** (optional): Metrics Advisor uses the Event Hubs timestamp as the event timestamp if the user data source doesn't contain a timestamp field. The timestamp field is optional. If no timestamp column is chosen, the enqueued time is used as the timestamp.
+
+ The timestamp field must match one of these two formats:
+
+ * "YYYY-MM-DDTHH:MM:SSZ" format;
+ * Number of seconds or milliseconds from the epoch of 1970-01-01T00:00:00Z.
+    Whichever format is used, the timestamp will be left-aligned to the granularity. For example, if the timestamp is "2019-01-01T00:03:00Z" and the granularity is 5 minutes, Metrics Advisor aligns the timestamp to "2019-01-01T00:00:00Z". If the event timestamp is "2019-01-01T00:10:00Z", Metrics Advisor uses the timestamp directly without any alignment.
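+
+    The left-alignment behavior described above can be sketched as follows (`align_timestamp` is an illustrative helper, not Metrics Advisor's actual implementation):
+
```python
from datetime import datetime, timezone

def align_timestamp(ts, granularity_minutes):
    """Left-align a UTC timestamp to the granularity: round down to the
    nearest multiple of the granularity since the epoch."""
    seconds = granularity_minutes * 60
    epoch = ts.timestamp()
    aligned = epoch - (epoch % seconds)
    return datetime.fromtimestamp(aligned, tz=timezone.utc)

# "2019-01-01T00:03:00Z" with 5-minute granularity aligns to "00:00:00Z".
ts = datetime(2019, 1, 1, 0, 3, 0, tzinfo=timezone.utc)
print(align_timestamp(ts, 5).isoformat())  # → 2019-01-01T00:00:00+00:00
```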
+## <span id="log">Azure Log Analytics</span>
+
+There are three authentication types for Azure Log Analytics: **Basic**, **Service Principal**, and **Service Principal From KeyVault**.
+* **Basic**: You need to fill in **Tenant ID**, **Client ID**, **Client Secret**, **Workspace ID**.
+ To get **Tenant ID**, **Client ID**, **Client Secret**, see [Register app or web API](../../active-directory/develop/quickstart-register-app.md).
+
+ * **Tenant ID**: Specify the tenant ID to access your Log Analytics.
+ * **Client ID**: Specify the client ID to access your Log Analytics.
+ * **Client Secret**: Specify the client secret to access your Log Analytics.
+ * **Workspace ID**: Specify the workspace ID of Log Analytics. For **Workspace ID**, you can find it in Azure portal.
+
+ ![workspace id](media/workspace-id.png)
+
+* **Service Principal**: A service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+
+ **Step 1:** Create and register an Azure AD application and then authorize it to access a database, see first part in [Create an AAD app registration](/azure/data-explorer/provision-azure-ad-app).
+
+ **Step 2:** Assign roles.
+ 1. In the Azure portal, go to the **Storage accounts** service.
+ 2. Click **Access Control (IAM)**.
+ 3. Click **+ Add** and select **Add role assignment** from the dropdown menu.
+ 4. Set the **Select** field to the Azure AD application name and set role to **Storage Blob Data Contributor**. Click **Save**.
+
+ ![lake-service-principals](media/datafeeds/adls-gen-2-app-reg-assign-roles.png)
+
+
+ **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
+
+* **Service Principal From Key Vault** authentication type: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) to follow detailed procedure to set service principal from key vault.
+
+* **Query**: Specify the query of Log Analytics. For more information, see [Log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md)
+
+ Sample query:
+
+ ``` Kusto
+ [TableName]
+ | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd)
+ | summarize [count_per_dimension]=count() by [Dimension]
+ ```
+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
+## <span id="sql">Azure SQL Database | SQL Server</span>
+
+* **Connection String**: There are five authentication types for Azure SQL Database | SQL Server: **Basic**, **Managed Identity**, **Azure SQL Connection String**, **Service Principal**, and **Service Principal From KeyVault**.
+
+    * **Basic**: Metrics Advisor accepts an [ADO.NET style connection string](/dotnet/framework/data/adonet/connection-string-syntax) for a SQL Server data source.
+ Here's an example of connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<db-name>;User ID=<user-name>;Password=<password>
+ ```
+
+ * <span id='jump'>**Managed Identity**</span>: Managed identity for Azure resources can authorize access to blob and queue data using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services. By using managed identity for Azure resources together with Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. To [enable your managed entity](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md), you can refer to following steps:
+ 1. **Enabling a system-assigned managed identity is a one-click experience.** In Azure portal for your Metrics Advisor workspace, set the status as `on` in **Settings > Identity > System assigned**.
+
+ ![set status as on](media/datafeeds/set-identity-status.png)
+
+ 1. **Enable Azure AD authentication.** In the Azure portal for your data source, click **Set admin** in **Settings > Active Directory admin**, select an **Azure AD user account** to be made an administrator of the server, and click **Select**.
+
+ ![set admin](media/datafeeds/set-admin.png)
+
+    1. **Enable managed identity (MI) in Metrics Advisor.** You can run the required query in either a **database management tool** or the **Azure portal**.
+
+ **Management tool**: In your database management tool, select **Active Directory - Universal with MFA support** in the authentication field. In the User name field, enter the name of the Azure AD account that you set as the server administrator in step 2, for example, test@contoso.com
+
+ ![set connection detail](media/datafeeds/connection-details.png)
+
+        **Azure portal**: Select **Query editor** in your SQL database and sign in with the admin account.
+ ![edit query in Azure Portal](media/datafeeds/query-editor.png)
+
+ Then in the query window, you should execute the following lines (same for management tool method):
+
+ ```
+ CREATE USER [MI Name] FROM EXTERNAL PROVIDER
+ ALTER ROLE db_datareader ADD MEMBER [MI Name]
+ ```
+
+ > [!NOTE]
+ > The `MI Name` is the **Managed Identity Name** in Metrics Advisor (for service principal, it should be replaced with **Service Principal name**). Also, you can learn more detail in this document: [Authorize with a managed identity](../../storage/common/storage-auth-aad-msi.md#enable-managed-identities-on-a-vm).
+
+ Here's an example of connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>
+ ```
+
+    * **Azure SQL Connection String**: Store your Azure SQL connection string as a **credential entity** in Metrics Advisor, and use it directly each time you onboard metrics data. See how to [create a credential entity](how-tos/credential-entity.md) in Metrics Advisor.
+
+
+ Here's an example of connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>;User ID=<user-name>;Password=<password>
+ ```
+
+
+ * **Service Principal**: A service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+
+ **Step 1:** Create and register an Azure AD application and then authorize it to access a database, see detail in [Create an AAD app registration](/azure/data-explorer/provision-azure-ad-app) documentation.
+
+ **Step 2:** Follow the same steps with [managed identity in SQL Server](#jump), which is mentioned above.
+
+ **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
+
+ Here's an example of connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>
+ ```
+
+ * **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) to follow detailed procedure to set service principal from key vault. Also, your connection string could be found in Azure SQL Server resource in **Settings > Connection strings** section.
+
+ Here's an example of connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>
+ ```
+
+* **Query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use `@IntervalStart` and `@IntervalEnd` in your query to help with getting expected metrics value in an interval. They should be formatted: `yyyy-MM-ddTHH:mm:ssZ`.
+ Sample query:
+
+ ```SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+
+## <span id="table">Azure Table Storage</span>
+
+* **Connection String**: Create a shared access signature (SAS) URL and fill it in here. The most straightforward way to generate a SAS URL is through the Azure portal: navigate to the storage account you'd like to access, go to the **Settings** section, and click **Shared access signature**. Check at least the "Table" and "Object" checkboxes, then click the **Generate SAS and connection string** button. Copy the Table service SAS URL and fill it in the text box in the Metrics Advisor workspace.
+
+ ![azure table generate sas](media/azure-table-generate-sas.png)
+
+* **Table Name**: Specify a table to query against. This can be found in your Azure Storage Account instance. Click **Tables** in the **Table Service** section.
+
+* **Query**: You can use `@IntervalStart` and `@IntervalEnd` in your query to retrieve the expected metric values for an interval. They should be formatted as `yyyy-MM-ddTHH:mm:ssZ`.
+
+ Sample query:
+
+ ``` mssql
+ PartitionKey ge '@IntervalStart' and PartitionKey lt '@IntervalEnd'
+ ```
+
+ Refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
+<!--
+## <span id="es">Elasticsearch</span>
+
+* **Host**: Specify the master host of Elasticsearch Cluster.
+* **Port**: Specify the master port of Elasticsearch Cluster.
+* **Authorization Header**: Specify the authorization header value of Elasticsearch Cluster.
+* **Query**: Specify the query to get data. Placeholder `@IntervalStart` is supported. For example, when data of `2020-06-21T00:00:00Z` is ingested, `@IntervalStart = 2020-06-21T00:00:00`.
++
+* **Request URL**: An HTTP url that can return a JSON. The placeholders %Y,%m,%d,%h,%M are supported: %Y=year in format yyyy, %m=month in format MM, %d=day in format dd, %h=hour in format HH, %M=minute in format mm. For example: `http://microsoft.com/ProjectA/%Y/%m/X_%Y-%m-%d-%h-%M`.
+* **Request HTTP method**: Use GET or POST.
+* **Request header**: Could add basic authentication.
+* **Request payload**: Only JSON payload is supported. Placeholder @IntervalStart is supported in the payload. The response should be in the following JSON format: `[{"timestamp": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23}, {"timestamp": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}]`. For example, when data of `2020-06-21T00:00:00Z` is ingested, `@IntervalStart = 2020-06-21T00:00:00.0000000+00:00)`.
+-->
+
+## <span id="influxdb">InfluxDB (InfluxQL)</span>
+
+* **Connection String**: The connection string to access your InfluxDB.
+* **Database**: The database to query against.
+* **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
+
+ Sample query:
+
+ ``` SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+
+ Refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
+* **User name**: The user name for authentication. Optional.
+* **Password**: The password for authentication. Optional.
+
+## <span id="mongodb">MongoDB</span>
+
+* **Connection String**: The connection string to access your MongoDB.
+* **Database**: The database to query against.
+* **Query**: A command to get and formulate data into multi-dimensional time series data for ingestion. We recommend verifying the command with [db.runCommand()](https://docs.mongodb.com/manual/reference/method/db.runCommand/) before using it.
+
+ Sample query:
+
+ ``` MongoDB
+ {"find": "[TableName]","filter": { [Timestamp]: { $gte: ISODate(@IntervalStart) , $lt: ISODate(@IntervalEnd) }},"singleBatch": true}
+ ```
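To show the shape of the resolved command, here's a hedged Python sketch that builds the `db.runCommand()` document. The collection name `Sales` and field `Timestamp` are made-up examples, and the real service substitutes `ISODate(...)` values rather than the plain strings used here:

```python
from datetime import datetime, timezone

def build_find_command(table, ts_field, interval_start, interval_end):
    """Build a find command with the interval placeholders resolved.
    Collection and field names are hypothetical; Metrics Advisor itself
    injects ISODate values for @IntervalStart / @IntervalEnd."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return {
        "find": table,
        "filter": {ts_field: {"$gte": interval_start.strftime(fmt),
                              "$lt": interval_end.strftime(fmt)}},
        "singleBatch": True,
    }

cmd = build_find_command("Sales", "Timestamp",
                         datetime(2021, 7, 1, tzinfo=timezone.utc),
                         datetime(2021, 7, 2, tzinfo=timezone.utc))
```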
+
+
+## <span id="mysql">MySQL</span>
+
+* **Connection String**: The connection string to access your MySQL DB.
+* **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
+
+ Sample query:
+
+ ``` SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn]< @IntervalEnd
+ ```
+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
+## <span id="pgsql">PostgreSQL</span>
+
+* **Connection String**: The connection string to access your PostgreSQL DB.
+* **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
+
+ Sample query:
+
+ ``` SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+
+ Refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
+## <span id="csv">Local files (CSV)</span>
+
+> [!NOTE]
+> This feature is only used for quick system evaluation focused on anomaly detection. It only accepts static data from a local CSV file and performs anomaly detection on single time series data. For the full experience of analyzing multi-dimensional metrics, including real-time data ingestion, anomaly notification, root cause analysis, and cross-metric incident analysis, use the other supported data sources.
+
+**Requirements on data in CSV:**
+- Have at least one column, which represents the measurements to be analyzed. For a better and quicker experience, we recommend trying a CSV file that contains two columns: a timestamp column and a metric column. The timestamp format should be like `2021-03-30T00:00:00Z` (ideally with the seconds part as `:00Z`), and the time granularity between records should be the same.
+- The timestamp column is optional. If there is no timestamp, Metrics Advisor uses timestamps starting from today at 00:00:00 (UTC) and maps each measure in a row at a one-hour interval. If there is a timestamp column in the CSV and you want to keep it, make sure the data time period follows the rule [historical data processing window].
+- No reordering or gap-filling happens during data ingestion, so make sure the data in your CSV is ordered by timestamp **ascending (ASC)**.
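The ordering and constant-granularity requirements can be sanity-checked locally before upload. Here's a minimal Python sketch; the `validate_csv` helper is illustrative only, not part of the service:

```python
import csv
import io
from datetime import datetime

def validate_csv(text):
    """Check rows are ordered by timestamp ascending with a constant gap.
    Assumes column 0 is the timestamp (yyyy-MM-ddTHH:mm:ssZ) and column 1 the metric."""
    rows = list(csv.reader(io.StringIO(text)))
    stamps = [r[0] for r in rows]
    if stamps != sorted(stamps):          # must be ascending
        return False
    parsed = [datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ") for s in stamps]
    gaps = {b - a for a, b in zip(parsed, parsed[1:])}
    return len(gaps) <= 1                  # constant granularity

good = "2021-03-30T00:00:00Z,12.5\n2021-03-30T01:00:00Z,13.1\n2021-03-30T02:00:00Z,11.8\n"
bad = "2021-03-30T02:00:00Z,11.8\n2021-03-30T00:00:00Z,12.5\n"
```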
+
+## Next steps
+
+* While waiting for your metric data to be ingested into the system, read about [how to manage data feed configurations](how-tos/manage-data-feeds.md).
+* When your metric data is ingested, you can [Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md).
applied-ai-services Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/encryption.md
+
+ Title: Metrics Advisor service encryption
+
+description: Metrics Advisor service encryption of data at rest.
+++++ Last updated : 07/02/2021+
+#Customer intent: As a user of the Metrics Advisor service, I want to learn how encryption at rest works.
++
+# Metrics Advisor service encryption of data at rest
+
+Metrics Advisor service automatically encrypts your data when it is persisted to the cloud. The Metrics Advisor service encryption protects your data and helps you to meet your organizational security and compliance commitments.
++
+Metrics Advisor supports customer-managed keys (CMK) and double encryption by using bring your own storage (BYOS).
+
+## Steps to create a Metrics Advisor with BYOS
+
+### Step 1. Create an Azure Database for PostgreSQL and set an admin
+
+- Create an Azure Database for PostgreSQL
+
+ Log in to the Azure portal and create an Azure Database for PostgreSQL resource. A couple of things to note:
+
+ 1. Select the **'Single Server'** deployment option.
+ 2. When choosing a 'Data source', specify **'None'**.
+ 3. For 'Location', make sure to create it in the **same location** as the Metrics Advisor resource.
+ 4. 'Version' should be set to **11**.
+ 5. For 'Compute + storage', choose a 'Memory Optimized' SKU with at least **32 vCores**.
+
+ ![Create an Azure Database for PostgreSQL](media/cmk-create.png)
+
+- Set an Active Directory admin for the newly created database
+
+ After successfully creating your Azure Database for PostgreSQL, go to the resource page of the newly created resource, select the 'Active Directory admin' tab, and set yourself as the Admin.
++
+### Step 2. Create a Metrics Advisor resource and enable Managed Identity
+
+- Create a Metrics Advisor resource in the Azure portal
+
+ Go to the Azure portal again and search for 'Metrics Advisor'. When creating the Metrics Advisor resource, remember the following:
+
+ 1. Choose the **same 'region'** that you created the Azure Database for PostgreSQL in.
+ 2. Mark 'Bring your own storage' as **'Yes'** and select the Azure Database for PostgreSQL you just created in the dropdown list.
+
+- Enable the Managed Identity for Metrics Advisor
+
+ After creating the Metrics Advisor resource, select 'Identity' and set 'Status' to **'On'** to enable Managed Identity.
+
+- Get Application ID of Managed Identity
+
+ Go to Azure Active Directory, and select 'Enterprise applications'. Change 'Application type' to **'Managed Identity'**, paste the resource name of your Metrics Advisor, and search. You'll see the 'Application ID' in the query result; copy it.
+
+### Step 3. Grant Metrics Advisor access permission to your Azure Database for PostgreSQL
+
+- Grant **'Owner'** role for the Managed Identity on your Azure Database for PostgreSQL
+
+- Set firewall rules
+
+ 1. Set 'Allow access to Azure services' as 'Yes'.
+ 2. Add your client IP address to log in to the Azure Database for PostgreSQL.
+
+- Get an access token for your account with resource type 'https://ossrdbms-aad.database.windows.net'. The access token is the password you use to log in to the Azure Database for PostgreSQL with your account. An example using the `az` client:
+
+ ```
+ az login
+ az account get-access-token --resource https://ossrdbms-aad.database.windows.net
+ ```
+
+- After getting the token, use it to log in to your Azure Database for PostgreSQL. Replace 'servername' with the one shown in the 'Overview' section of your Azure Database for PostgreSQL.
+
+ ```
+ export PGPASSWORD=<access-token>
+ psql -h <servername> -U <adminaccount@servername> -d postgres
+ ```
+
+- After logging in, execute the following commands to grant Metrics Advisor access permission to the Azure Database for PostgreSQL. Replace 'appid' with the Application ID that you got in Step 2.
+
+ ```
+ SET aad_validate_oids_in_tenant = off;
+ CREATE ROLE metricsadvisor WITH LOGIN PASSWORD '<appid>' IN ROLE azure_ad_user;
+ ALTER ROLE metricsadvisor CREATEDB;
+ GRANT azure_pg_admin TO metricsadvisor;
+ ```
+
+By completing all the steps above, you've successfully created a Metrics Advisor resource with CMK support. Wait a couple of minutes until your Metrics Advisor is accessible.
+
+## Next steps
+
+* [Metrics Advisor Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
applied-ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/glossary.md
+
+ Title: Metrics Advisor glossary
+
+description: Key ideas and concepts for the Metrics Advisor service
++++++ Last updated : 09/14/2020+++
+# Metrics Advisor glossary of common vocabulary and concepts
+
+This document explains the technical terms used in Metrics Advisor. Use this article to learn about common concepts and objects you might encounter when using the service.
+
+## Data feed
+
+> [!NOTE]
+> Multiple metrics can share the same data source, and even the same data feed.
+
+A data feed is what Metrics Advisor ingests from your data source, such as Cosmos DB or a SQL server. A data feed contains rows of:
+* timestamps
+* zero or more dimensions
+* one or more measures.
+
+## Interval
+Metrics need to be monitored at a certain granularity according to business requirements. For example, business Key Performance Indicators (KPIs) are monitored at daily granularity, while service performance metrics are often monitored at minute or hourly granularity. So the frequency at which metric data is collected from sources differs.
+
+Metrics Advisor continuously grabs metric data at each time interval; **the interval is equal to the granularity of the metrics.** Every time it runs, Metrics Advisor executes the query you have written and ingests data for this specific interval. Based on this data ingestion mechanism, the query script **should not return all metric data that exists in the database, but needs to limit the result to a single interval.**
+
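The single-interval rule can be illustrated with a small Python sketch; the in-memory `rows` table and the `one_interval` helper are hypothetical stand-ins for your database and ingestion query:

```python
from datetime import datetime, timezone

rows = [  # hypothetical source table: (timestamp, value)
    (datetime(2021, 7, 1, tzinfo=timezone.utc), 1000),
    (datetime(2021, 7, 2, tzinfo=timezone.utc), 1200),
    (datetime(2021, 7, 3, tzinfo=timezone.utc), 900),
]

def one_interval(rows, start, end):
    """Return only the rows inside [start, end) -- what a well-formed
    ingestion query should do, instead of returning the whole table."""
    return [r for r in rows if start <= r[0] < end]

# For a daily metric, each run ingests exactly one day's rows:
batch = one_interval(rows,
                     datetime(2021, 7, 2, tzinfo=timezone.utc),
                     datetime(2021, 7, 3, tzinfo=timezone.utc))
```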
+<!-- ![What is interval](media/tutorial/what-is-interval.png) -->
+
+## Metric
+
+A metric is a quantifiable measure that is used to monitor and assess the status of a specific business process. It can be a combination of multiple time series values divided into dimensions. For example a *web health* metric might contain dimensions for *user count* and the *en-us market*.
+
+## Dimension
+
+A dimension is one or more categorical values. The combination of those values identifies a particular univariate time series, for example: country, language, tenant, and so on.
+
+## Multi-dimensional metric
+
+What is a multi-dimension metric? Let's use two examples.
+
+**Revenue of a business**
+
+Suppose you have data for the revenue of your business. Your time series data might look something like this:
+
+| Timestamp | Category | Market | Revenue |
+| -|-|--|-- |
+| 2020-6-1 | Food | US | 1000 |
+| 2020-6-1 | Apparel | US | 2000 |
+| 2020-6-2 | Food | UK | 800 |
+| ... | ... |... | ... |
+
+In this example, *Category* and *Market* are dimensions. *Revenue* is the Key Performance Indicator (KPI) which could be sliced into different categories and/or markets, and could also be aggregated. For example, the revenue of *food* for all markets.
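Slicing and aggregating the *Revenue* measure by dimension can be sketched in Python; the `revenue_for` helper is illustrative only, using the sample rows from the table above:

```python
rows = [  # Timestamp, Category, Market, Revenue
    ("2020-6-1", "Food",    "US", 1000),
    ("2020-6-1", "Apparel", "US", 2000),
    ("2020-6-2", "Food",    "UK", 800),
]

def revenue_for(rows, timestamp, category=None, market=None):
    """Aggregate the Revenue measure, optionally slicing by dimension values."""
    return sum(r[3] for r in rows
               if r[0] == timestamp
               and (category is None or r[1] == category)
               and (market is None or r[2] == market))

food_all_markets = revenue_for(rows, "2020-6-1", category="Food")  # Food, every market
us_total = revenue_for(rows, "2020-6-1", market="US")              # all categories in US
```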
+
+**Error counts for a complex application**
+
+Suppose you have data for the number of errors logged in an application. Your time series data might look something like this:
+
+| Timestamp | Application component | Region | Error count |
+| -|-|--|-- |
+| 2020-6-1 | Employee database | WEST EU | 9000 |
+| 2020-6-1 | Message queue | EAST US | 1000 |
+| 2020-6-2 | Message queue | EAST US | 8000|
+| ... | ... | ... | ...|
+
+In this example, *Application component* and *Region* are dimensions. *Error count* is the KPI which could be sliced into different categories and/or markets, and could also be aggregated. For example, the error count of *Message queue* in all regions.
+
+## Measure
+
+A measure is a fundamental or unit-specific term and a quantifiable value of the metric.
+
+## Time series
+
+A time series is a series of data points indexed (or listed or graphed) in chronological order. Most commonly, a time series is a sequence taken at successive, equally spaced points in time. It is a sequence of discrete-time data.
+
+In Metrics Advisor, values of one metric on a specific dimension combination are called one series.
+
+## Granularity
+
+Granularity indicates how frequently data points are generated at the data source. For example, daily or hourly.
+
+## Ingest data since (UTC)
+
+Ingest data since (UTC) is the time that you want Metrics Advisor to begin ingesting data from your data source. Your data source must have data at the specified ingestion start time.
+
+## Confidence boundaries
+
+> [!NOTE]
+> Confidence boundaries are not the only measurement used to find anomalies. It's possible for data points outside of this boundary to be flagged as normal by the detection model.
+
+In Metrics Advisor, confidence boundaries represent the sensitivity of the algorithm used, and are used to filter out overly sensitive anomalies. On the web portal, confidence bounds appear as a transparent blue band. All the points within the band are treated as normal points.
+
+Metrics Advisor provides tools to adjust the sensitivity of the algorithms used. See [How to: Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md) for more information.
+
+![Confidence bounds](media/confidence-bounds.png)
++
+## Hook
+
+Metrics Advisor lets you create and subscribe to real-time alerts. These alerts are sent over the internet, [using a hook](how-tos/alerts.md).
+
+## Anomaly incident
+
+After a detection configuration is applied to metrics, incidents are generated whenever any series within it has an anomaly. In large data sets this can be overwhelming, so Metrics Advisor groups series of anomalies within a metric into an incident. The service will also evaluate the severity and provide tools for [diagnosing an incident](how-tos/diagnose-an-incident.md).
+
+### Diagnostic tree
+
+In Metrics Advisor, you can apply anomaly detection on metrics, then Metrics Advisor automatically monitors all time series of all dimension combinations. Whenever there is any anomaly detected, Metrics Advisor aggregates anomalies into incidents.
+After an incident occurs, Metrics Advisor will provide a diagnostic tree with a hierarchy of contributing anomalies, and identify ones with the biggest impact. Each incident has a root cause anomaly, which is the top node of the tree.
+
+### Anomaly grouping
+
+Metrics Advisor provides the capability to find related time series with similar patterns. It can also provide deeper insights into the impact on other dimensions, and correlate the anomalies.
+
+### Time series comparison
+
+You can pick multiple time series to compare trends in a single visualization. This provides a clear and insightful way to view and compare related series.
+
+## Detection configuration
+
+>[!Note]
+>Detection configurations are only applied within an individual metric.
+
+On the Metrics Advisor web portal, a detection configuration (such as sensitivity, auto snooze, and direction) is listed on the left panel when viewing a metric. Parameters can be tuned and applied to all series within this metric.
+
+A detection configuration is required for every time series, and determines whether a point in the time series is an anomaly. Metrics Advisor will set up a default configuration for the whole metric when you first onboard data.
+
+You can additionally refine the configuration by applying tuning parameters on a group of series, or a specific one. Only one configuration will be applied to a time series:
+* Configurations applied to a specific series will overwrite configurations for a group
+* Configurations for a group will overwrite configurations applied to the whole metric.
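The precedence rules above can be sketched as follows; the configuration shapes and the `effective_config` helper are illustrative assumptions, not the service's API:

```python
def effective_config(metric_config, group_configs, series_configs, series_key):
    """Resolve which detection configuration applies to one time series:
    series-level overrides group-level, which overrides the metric default."""
    if series_key in series_configs:
        return series_configs[series_key]
    for group_keys, cfg in group_configs:
        if series_key in group_keys:
            return cfg
    return metric_config

metric_default = {"sensitivity": 50}
groups = [({("US", "Food"), ("US", "Apparel")}, {"sensitivity": 70})]
series = {("UK", "Food"): {"sensitivity": 90}}

resolved = effective_config(metric_default, groups, series, ("US", "Food"))
```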
+
+Metrics Advisor provides several [detection methods](how-tos/configure-metrics.md#anomaly-detection-methods), and you can combine them using logical operators.
+
+### Smart detection
+
+Anomaly detection using multiple machine learning algorithms.
+
+**Sensitivity**: A numerical value to adjust the tolerance of the anomaly detection. Visually, the higher the value, the narrower the upper and lower boundaries around the time series.
+
+### Hard threshold
+
+Values outside of upper or lower bounds are anomalies.
+
+**Min**: The lower bound
+
+**Max**: The upper bound
+
+### Change threshold
+
+Use the previous point value to determine if this point is an anomaly.
+
+**Change percentage**: Compared to the previous point, the current point is an anomaly if the percentage of change is more than this parameter.
+
+**Change over points**: How many points to look back.
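Here's a minimal Python sketch of the change-threshold rule. It is illustrative only (it assumes nonzero previous values and is not the service's implementation):

```python
def change_threshold_anomaly(points, change_percentage, change_over=1):
    """Flag each point whose percent change versus the point `change_over`
    positions back exceeds `change_percentage`."""
    flags = []
    for i, value in enumerate(points):
        if i < change_over:
            flags.append(False)  # not enough history to compare
            continue
        prev = points[i - change_over]
        pct = abs(value - prev) / abs(prev) * 100
        flags.append(pct > change_percentage)
    return flags

# The jump from 102 to 150 (about 47%) exceeds a 10% threshold:
flags = change_threshold_anomaly([100, 102, 150, 151], change_percentage=10)
```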
+
+### Common parameters
+
+**Direction**: A point is an anomaly only when the deviation occurs in the direction *up*, *down*, or *both*.
+
+**Not valid anomaly until**: A data point is only an anomaly if a specified percentage of previous points are also anomalies.
+
+## Alert settings
+
+Alert settings determine which anomalies should trigger an alert. You can set multiple alerts with different settings. For example, you could create an alert for anomalies with lower business impact, and another for more important alerts.
+
+You can also create an alert across metrics. For example, an alert that only gets triggered if two specified metrics have anomalies.
+
+### Alert scope
+
+Alert scope refers to the scope that the alert applies to. There are four options:
+
+**Anomalies of all series**: Alerts will be triggered for anomalies in all series within the metric.
+
+**Anomalies in series group**: Alerts will only be triggered for anomalies in specific dimensions of the series group. The number of specified dimensions should be smaller than the total number of dimensions.
+
+**Anomalies in favorite series**: Alerts will only be triggered for anomalies in series that are added as favorites. You can choose a group of series as favorites for each detection configuration.
+
+**Anomalies in top N of all series**: Alerts will only be triggered for anomalies in the top N series. You can set parameters to specify the number of timestamps to take into account, and how many anomalies must be in them to send the alert.
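The top-N check can be sketched in Python; the `in_top_n_alert` helper, its parameter names, and the sample history are all illustrative assumptions, not the service's API:

```python
def in_top_n_alert(history, series_name, n, k, min_hits):
    """history maps timestamp -> {series: value}. Alert for `series_name`
    only if it ranked in the top n values in at least `min_hits` of the
    last k timestamps."""
    hits = 0
    for snapshot in list(history.values())[-k:]:
        top = sorted(snapshot, key=snapshot.get, reverse=True)[:n]
        if series_name in top:
            hits += 1
    return hits >= min_hits

history = {
    "t1": {"us": 30, "uk": 20, "de": 10},
    "t2": {"us": 5,  "uk": 25, "de": 40},
    "t3": {"us": 50, "uk": 1,  "de": 2},
}
```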
+
+### Severity
+
+Severity is a grade that Metrics Advisor uses to describe the severity of an incident: *High*, *Medium*, or *Low*.
+
+Currently, Metrics Advisor uses the following factors to measure the alert severity:
+1. The value proportion and the quantity proportion of anomalies in the metric.
+1. Confidence of anomalies.
+1. Your favorite settings also contribute to the severity.
+
+### Auto snooze
+
+Some anomalies are transient issues, especially for small granularity metrics. You can *snooze* an alert for a specific number of time points. If anomalies are found within that specified number of points, no alert will be triggered. The behavior of auto snooze can be set on either metric level or series level.
+
+## Data feed settings
+
+### Ingestion Time Offset
+
+By default, data is ingested according to the granularity (such as *daily*). By using a positive integer, you can delay ingestion of the data by the specified value. Using a negative number, you can advance the ingestion by the specified value.
+
+### Max Ingestion per Minute
+
+Set this parameter if your data source supports limited concurrency. Otherwise leave the default settings.
+
+### Stop retry after
+
+If data ingestion has failed, Metrics Advisor will retry automatically after a period of time. The beginning of the period is the time when the first data ingestion occurred. The length of the retry period is defined according to the granularity. If you use the default value (`-1`), the retry period will be determined according to the granularity:
+
+| Granularity | Stop Retry After |
+| : | : |
+| Daily, Custom (>= 1 Day), Weekly, Monthly, Yearly | 7 days |
+| Hourly, Custom (< 1 Day) | 72 hours |
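Here's a small Python sketch of how the default (`-1`) maps to a retry window per the table above; the `stop_retry_after` helper is illustrative, not a service API:

```python
from datetime import timedelta

def stop_retry_after(granularity, custom_interval=None, value=-1):
    """Map the default (-1) to the documented retry window.
    `custom_interval` must be a timedelta when granularity is 'Custom'."""
    if value != -1:
        return timedelta(seconds=value)  # explicit override
    long_grain = {"Daily", "Weekly", "Monthly", "Yearly"}
    if granularity in long_grain or (
        granularity == "Custom" and custom_interval >= timedelta(days=1)
    ):
        return timedelta(days=7)
    return timedelta(hours=72)  # Hourly, Custom (< 1 day)
```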
+
+### Min retry interval
+
+You can specify the minimum interval when retrying to pull data from the source. If you use the default value (`-1`), the retry interval will be determined according to the granularity:
+
+| Granularity | Minimum Retry Interval |
+| : | : |
+| Daily, Custom (>= 1 Day), Weekly, Monthly | 30 minutes |
+| Hourly, Custom (< 1 Day) | 10 minutes |
+| Yearly | 1 day |
+
+### Grace period
+
+> [!Note]
+> The grace period begins at the regular ingestion time, plus specified ingestion time offset.
+
+A grace period is a period of time where Metrics Advisor will continue fetching data from the data source, but won't fire any alerts. If no data was ingested after the grace period, a *Data feed not available* alert will be triggered.
+
+### Snooze alerts in
+
+When this option is set to zero, each timestamp with *Not Available* will trigger an alert. When set to a value other than zero, the specified number of *Not available* alerts will be snoozed if no data was fetched.
+
+## Data feed permissions
+
+There are two roles to manage data feed permissions: *Administrator*, and *Viewer*.
+
+* An *Administrator* has full control of the data feed and metrics within it. They can activate, pause, delete the data feed, and make updates to feeds and configurations. An *Administrator* is typically the owner of the metrics.
+
+* A *Viewer* is able to view the data feed or metrics, but is not able to make changes.
+
+## Next steps
+- [Metrics Advisor overview](overview.md)
+- [Use the web portal](quickstarts/web-portal.md)
applied-ai-services Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/how-tos/alerts.md
+
+ Title: Configure Metrics Advisor alerts
+
+description: How to configure your Metrics Advisor alerts using hooks for email, web and Azure DevOps.
++++++ Last updated : 09/14/2020+++
+# How-to: Configure alerts and get notifications using a hook
+
+After an anomaly is detected by Metrics Advisor, an alert notification will be triggered based on your alert settings, using a hook. An alert setting can be used with multiple detection configurations, and various parameters are available to customize your alert rule.
+
+## Create a hook
+
+Metrics Advisor supports four different types of hooks: email, Teams, webhook, and Azure DevOps. You can choose the one that works for your specific scenario.
+
+### Email hook
+
+> [!Note]
+> Metrics Advisor resource administrators need to configure the Email settings, and input **SMTP related information** into Metrics Advisor before anomaly alerts can be sent. The resource group admin or subscription admin needs to assign at least one *Cognitive Services Metrics Advisor Administrator* role in the Access control tab of the Metrics Advisor resource. [Learn more about e-mail settings configuration](../faq.yml#how-to-set-up-email-settings-and-enable-alerting-by-email-).
++
+An email hook is the channel for anomaly alerts to be sent to email addresses specified in the **Email to** section. Two types of alert emails will be sent: **Data feed not available** alerts, and **Incident reports**, which contain one or multiple anomalies.
+
+To create an email hook, the following parameters are available:
+
+|Parameter |Description |
+|||
+| Name | Name of the email hook |
+| Email to| Email addresses to send alerts to|
+| External link | Optional field, which enables a customized redirect, such as for troubleshooting notes. |
+| Customized anomaly alert title | Title template supports `${severity}`, `${alertSettingName}`, `${datafeedName}`, `${metricName}`, `${detectConfigName}`, `${timestamp}`, `${topDimension}`, `${incidentCount}`, `${anomalyCount}` |
+
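The title template is a simple placeholder substitution. Here's a hedged Python sketch using `string.Template`; the exact rendering engine the service uses isn't documented here, so treat this only as an illustration of the `${...}` syntax:

```python
from string import Template

def render_alert_title(template, values):
    """Fill the ${...} placeholders supported by the email hook title.
    safe_substitute leaves any unknown placeholders untouched."""
    return Template(template).safe_substitute(values)

title = render_alert_title(
    "[${severity}] ${anomalyCount} anomalies on ${metricName} at ${timestamp}",
    {"severity": "High", "anomalyCount": 3,
     "metricName": "revenue", "timestamp": "2021-07-01T00:00:00Z"},
)
```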
+After you select **OK**, an email hook will be created. You can use it in any alert settings to receive anomaly alerts. Refer to the [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-logic-apps-teams-and-smtp) tutorial for detailed steps.
+
+### Teams hook
+
+A Teams hook is the channel for anomaly alerts to be sent to a channel in Microsoft Teams. A Teams hook is implemented through an "Incoming webhook" connector. You need to create an "Incoming webhook" connector in your target Teams channel beforehand, and copy its URL. Then pivot back to your Metrics Advisor workspace.
+
+Select the "Hooks" tab in the left navigation bar, and select the "Create hook" button at the top right of the page. Choose the "Teams" hook type; the following parameters are provided:
+
+|Parameter |Description |
+|||
+| Name | Name of the Teams hook |
+| Connector URL | The URL you just copied from the "Incoming webhook" connector created in the target Teams channel. |
+
+After you select **OK**, a Teams hook will be created. You can use it in any alert settings to send anomaly alerts to the target Teams channel. Refer to the [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-logic-apps-teams-and-smtp) tutorial for detailed steps.
+
+### Web hook
+
+A web hook is another notification channel, using an endpoint that is provided by the customer. Any anomaly detected on the time series will be notified through the web hook. There are several steps to enable a web hook as an alert notification channel within Metrics Advisor.
+
+**Step 1.** Enable Managed Identity in your Metrics Advisor resource
+
+A system assigned managed identity is restricted to one per resource and is tied to the lifecycle of this resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). The managed identity is authenticated with Azure AD, so you don't have to store any credentials in code.
+
+Go to your Metrics Advisor resource in the Azure portal, select "Identity", and set the status to "On" to enable Managed Identity.
+
+**Step 2.** Create a web hook in your Metrics Advisor workspace
+
+Log in to your workspace and select the "Hooks" tab, then select the "Create hook" button.
++
+To create a web hook, you will need to add the following information:
+
+|Parameter |Description |
+|||
+|Endpoint | The API address to be called when an alert is triggered. **MUST be Https**. |
+|Username / Password | For authenticating to the API address. Leave this blank if authentication isn't needed. |
+|Header | Custom headers in the API call. |
+|Certificate identifier in Azure Key Vault| If accessing the endpoint needs to be authenticated by a certificate, store the certificate in Azure Key Vault and input its identifier here. |
+
+> [!Note]
+> When a web hook is created or modified, the endpoint will be called as a test with **an empty request body**. Your API needs to return a 200 HTTP code to successfully pass the validation.
++
+- Request method is **POST**
+- Timeout 30s
+- Retries on 5xx errors; other errors are ignored. 301/302 redirect requests will not be followed.
+- Request body:
+```json
+{
+  "value": [{
+    "hookId": "b0f27e91-28cf-4aa2-aa66-ac0275df14dd",
+    "alertType": "Anomaly",
+    "alertInfo": {
+      "anomalyAlertingConfigurationId": "1bc6052e-9a2a-430b-9cbd-80cd07a78c64",
+      "alertId": "172536dbc00",
+      "timestamp": "2020-05-27T00:00:00Z",
+      "createdTime": "2020-05-29T10:04:45.590Z",
+      "modifiedTime": "2020-05-29T10:04:45.590Z"
+    },
+    "callBackUrl": "https://kensho2-api.azurewebsites.net/alert/anomaly/configurations/1bc6052e-9a2a-430b-9cbd-80cd07a78c64/alerts/172536dbc00/incidents"
+  }]
+}
+```
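Here's a minimal Python sketch of a conforming receiver: it returns 200 for the empty-body validation call and parses the `value` list from real alerts. The `handle_metrics_advisor_call` function is illustrative only; wire the equivalent logic into whatever HTTPS framework you use:

```python
import json

def handle_metrics_advisor_call(method, body):
    """Return (status, alerts). The empty-body validation call made on hook
    create/modify must get a 200; real alerts carry a JSON "value" list."""
    if method != "POST":
        return 405, []          # Metrics Advisor only sends POST
    if not body:
        return 200, []          # validation ping: empty request body
    payload = json.loads(body)
    return 200, payload.get("value", [])

status, alerts = handle_metrics_advisor_call(
    "POST", '{"value": [{"hookId": "abc", "alertType": "Anomaly"}]}'
)
```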
+
+**Step 3. (optional)** Store your certificate in Azure Key Vault and get its identifier
+
+As mentioned, if accessing the endpoint needs to be authenticated by a certificate, the certificate should be stored in Azure Key Vault.
+
+- Check [Set and retrieve a certificate from Azure Key Vault using the Azure portal](../../../key-vault/certificates/quick-create-portal.md)
+- Click on the certificate you've added, then you're able to copy the "Certificate identifier".
+- Then select "Access policies" and "Add access policy", and grant "get" permission for "Key permissions", "Secret permissions", and "Certificate permissions". Select the name of your Metrics Advisor resource as the principal. Select the "Add" and "Save" buttons on the "Access policies" page.
+
+**Step 4.** Receive anomaly notifications
+
+When a notification is pushed through a web hook, you can fetch incident data by calling the "callBackUrl" in the webhook request. Details for this API:
+
+- [/alert/anomaly/configurations/{configurationId}/alerts/{alertId}/incidents](https://westus2.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/getIncidentsFromAlertByAnomalyAlertingConfiguration)
+
+By using a web hook and Azure Logic Apps, it's possible to send email notifications **without an SMTP server configured**. Refer to the [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-logic-apps-teams-and-smtp) tutorial for detailed steps.
+
+### Azure DevOps
+
+Metrics Advisor also supports automatically creating a work item in Azure DevOps to track issues/bugs when any anomaly is detected. All alerts can be sent through Azure DevOps hooks.
+
+To create an Azure DevOps hook, you will need to add the following information:
+
+|Parameter |Description |
+|||
+| Name | A name for the hook |
+| Organization | The organization that your DevOps belongs to |
+| Project | The specific project in DevOps. |
+| Access Token | A token for authenticating to DevOps. |
+
+> [!Note]
+> You need to grant write permissions if you want Metrics Advisor to create work items based on anomaly alerts.
+> After creating hooks, you can use them in any of your alert settings. Manage your hooks in the **hook settings** page.
+
+## Add or edit alert settings
+
+Go to the metrics detail page and find the **Alert settings** section, in the bottom-left corner of the page. It lists all alert settings that apply to the selected detection configuration. When a new detection configuration is created, there are no alert settings, and no alerts will be sent.
+You can use the **add**, **edit** and **delete** icons to modify alert settings.
++
+Select the **add** or **edit** buttons to get a window to add or edit your alert settings.
++
+**Alert setting name**: The name of the alert setting. It will be displayed in the alert email title.
+
+**Hooks**: The list of hooks to send alerts to.
+
+The section marked in the screenshot above shows the settings for one detection configuration. You can set different alert settings for different detection configurations. Choose the target configuration using the third drop-down list in this window.
+
+### Filter settings
+
+The following are filter settings for one detection configuration.
+
+**Alert For** has four options for filtering anomalies:
+
+* **Anomalies in all series**: All anomalies will be included in the alert.
+* **Anomalies in the series group**: Filter series by dimension values. Set specific values for some dimensions. Anomalies will only be included in the alert when the series matches the specified values.
+* **Anomalies in favorite series**: Only the series marked as favorite will be included in the alert.
+* **Anomalies in top N of all series**: Use this filter if you only care about series whose values are in the top N. Metrics Advisor looks back over the previous timestamps and checks whether the values of the series at those timestamps are in the top N. If the "in top N" count is larger than the specified number, the anomaly will be included in the alert.
+
+**Filter anomaly options** provide extra filtering with the following options:
+
+- **Severity**: The anomaly will only be included when the anomaly severity is within the specified range.
+- **Snooze**: Stop alerts temporarily for anomalies in the next N points (period), when triggered in an alert.
+ - **snooze type**: When set to **Series**, a triggered anomaly will only snooze its series. For **Metric**, one triggered anomaly will snooze all the series in this metric.
+ - **snooze number**: the number of points (period) to snooze.
+ - **reset for non-successive**: When selected, a triggered anomaly will only snooze the next n successive anomalies. If one of the following data points isn't an anomaly, the snooze is reset from that point. When unselected, one triggered anomaly will snooze the next n points (period), even if successive data points aren't anomalies.
+- **value** (optional): Filter by value. Only anomalies whose point values meet the condition will be included. If you use the corresponding value of another metric, the dimension names of the two metrics should be consistent.
+
+Anomalies not filtered out will be sent in an alert.
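The snooze options above can be illustrated with a small simulation. This models the behavior as described, not the service's actual implementation:

```python
def apply_snooze(is_anomaly, snooze_n, reset_for_non_successive):
    """Simulate which anomalies would still be alerted after snoozing.

    is_anomaly: list of booleans, one per data point (period).
    A triggered anomaly snoozes the next `snooze_n` points; with
    `reset_for_non_successive`, a non-anomalous point ends the snooze
    early. Illustrative only.
    """
    alerted = []
    remaining = 0
    for point in is_anomaly:
        if remaining > 0:
            if reset_for_non_successive and not point:
                remaining = 0          # snooze resets at a normal point
            else:
                remaining -= 1
            alerted.append(False)      # snoozed (or not an anomaly anyway)
            continue
        if point:
            alerted.append(True)       # anomaly triggers an alert...
            remaining = snooze_n       # ...and snoozes the next n points
        else:
            alerted.append(False)
    return alerted
```

For the series anomaly/normal/anomaly/anomaly with a snooze of 2, the reset option lets the third point alert again, while without it the snooze runs through and only the fourth point alerts.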
+
+### Add cross-metric settings
+
+Select **+ Add cross-metric settings** in the alert settings page to add another section.
+
+The **Operator** selector specifies the logical relationship between the sections, which determines whether an alert is sent.
++
+|Operator |Description |
+|||
+|AND | Only send an alert if a series matches each alert section, and all data points are anomalies. If the metrics have different dimension names, an alert will never be triggered. |
+|OR | Send the alert if at least one section contains anomalies. |
++
+## Next steps
+
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Diagnose an incident](diagnose-an-incident.md).
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
applied-ai-services Anomaly Feedback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/how-tos/anomaly-feedback.md
+
+ Title: Provide anomaly feedback to the Metrics Advisor service
+
+description: Learn how to send feedback on anomalies found by your Metrics Advisor instance, and tune the results.
+ Last updated: 11/24/2020
+# Provide anomaly feedback
+
+User feedback is one of the most important methods to discover defects within the anomaly detection system. Here we provide a way for users to mark incorrect detection results directly on a time series, and apply the feedback immediately. In this way, a user can teach the anomaly detection system how to do anomaly detection for a specific time series through active interactions.
+
+> [!NOTE]
+> Currently feedback will only affect anomaly detection results by **Smart detection** but not **Hard threshold** and **Change threshold**.
+
+## How to give time series feedback
+
+You can provide feedback from the metric detail page on any series. Just select any point, and you will see the feedback dialog below. It shows you the dimensions of the series you've chosen, and you can reselect dimension values, or even remove some of them, to get a batch of time series data. After choosing a time series, select the **Add** button to add the feedback. There are four kinds of feedback you can give. To append multiple feedback items, select the **Save** button once you complete your annotations.
+++
+### Mark the anomaly point type
+
+As shown in the image below, the feedback dialog will fill the timestamp of your chosen point automatically, though you can edit this value. You then select whether you want to identify this item as an `Anomaly`, `NotAnomaly`, or `AutoDetect`.
++
+The selection will apply your feedback to the future anomaly detection processing of the same series. The processed points will not be recalculated. That means if you marked an `Anomaly` as `NotAnomaly`, we will suppress similar anomalies in the future, and if you marked a `NotAnomaly` point as `Anomaly`, we will tend to detect similar points as `Anomaly` in the future. If `AutoDetect` is chosen, any previous feedback on the same point will be ignored in the future.
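As a hedged sketch, a feedback body for these three labels might be assembled as below. The field names are recalled from the Metrics Advisor REST feedback API and should be verified against the API reference before sending:

```python
def anomaly_feedback(metric_id, dimensions, start, end, label):
    """Build an anomaly-type feedback body (field names assumed)."""
    if label not in ("Anomaly", "NotAnomaly", "AutoDetect"):
        raise ValueError("label must be Anomaly, NotAnomaly, or AutoDetect")
    return {
        "feedbackType": "Anomaly",
        "metricId": metric_id,
        "dimensionFilter": {"dimension": dimensions},  # which series
        "startTime": start,                            # chosen timestamp
        "endTime": end,
        "value": {"anomalyValue": label},
    }

# Mark one point of a hypothetical series as NotAnomaly:
fb = anomaly_feedback(
    "m-1", {"city": "Seattle"},
    "2021-01-01T00:00:00Z", "2021-01-01T00:00:00Z", "NotAnomaly",
)
```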
+
+## Provide feedback for multiple continuous points
+
+If you would like to give anomaly feedback for multiple continuous points at the same time, select the group of points you want to annotate. You will see the chosen time-range automatically filled when you provide anomaly feedback.
++
+To check whether an individual point is affected by your anomaly feedback, select a single point when browsing a time series. If its anomaly detection result has been changed by feedback, the tooltip will show **Affected by feedback: true**. If it shows **Affected by feedback: false**, an anomaly feedback calculation was performed for this point, but the anomaly detection result was not changed.
++
+There are some situations where we do not suggest giving feedback:
+
+- The anomaly is caused by a holiday. It's suggested to use a preset event to solve this kind of false alarm, as it will be more precise.
+- The anomaly is caused by a known data source change. For example, an upstream system change happened at that time. In this situation, it is expected to give an anomaly alert since our system didn't know what caused the value change and when similar value changes will happen again. Thus we don't suggest annotating this kind of issue as `NotAnomaly`.
+
+## Change points
+
+Sometimes a trend change in the data will affect anomaly detection results. When a decision is made as to whether a point is an anomaly or not, the latest window of history data is taken into consideration. When your time series has a trend change, you can mark the exact change point; this will help our anomaly detector in future analysis.
+
+As the figure below shows, you could select `ChangePoint` for the feedback Type, and select `ChangePoint`, `NotChangePoint`, or `AutoDetect` from the pull-down list.
++
+> [!NOTE]
+> If your data keeps changing, you will only need to mark one point as a `ChangePoint`, so if you marked a `timerange`, we will fill the last point's timestamp and time automatically. In this case, your annotation will only affect anomaly detection results after 12 points.
+
+## Seasonality
+
+For seasonal data, when we perform anomaly detection, one step is to estimate the period (seasonality) of the time series and apply it to the anomaly detection phase. Sometimes, it's hard to identify a precise period, and the period may also change. An incorrectly defined period may have side effects on your anomaly detection results. You can find the current period in a tooltip; its name is `Min Period`.
++
+You can provide feedback for the period to fix this kind of anomaly detection error. As the figure shows, you can set a period value. The unit `interval` means one granularity; zero intervals means the data is non-seasonal. You can also select `AutoDetect` if you want to cancel previous feedback and let the pipeline detect the period automatically.
+
+> [!NOTE]
+> When setting a period you do not need to assign a timestamp or timerange; the period will affect future anomaly detection on the whole time series from the moment you give feedback.
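To make "period (seasonality)" concrete, here is a toy estimator that picks the lag with the highest autocorrelation. This is only an illustration of the concept; the service's own period estimator is more sophisticated:

```python
def estimate_period(series, max_lag=None):
    """Estimate a period as the lag with the highest autocorrelation.

    A toy illustration of seasonality estimation, not the service's
    actual algorithm. Returns 0 when no lag correlates positively.
    """
    n = len(series)
    max_lag = max_lag or n // 2
    mean = sum(series) / n
    dev = [x - mean for x in series]          # deviations from the mean
    var = sum(d * d for d in dev) or 1.0      # normalization term
    best_lag, best_r = 0, 0.0
    for lag in range(2, max_lag + 1):
        r = sum(dev[i] * dev[i - lag] for i in range(lag, n)) / var
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag
```

For a sawtooth series that repeats every 4 points, the estimator recovers a period of 4.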
+++
+## Provide comment feedback
+
+You can also add comments to annotate and provide context to your data. To add comments, select a time range and add the text for your comment.
++
+## Time series batch feedback
+
+As previously described, the feedback modal allows you to reselect or remove dimension values to get a batch of time series defined by a dimension filter. You can also open this modal by clicking the "+" button for Feedback in the left panel, and selecting dimensions and dimension values.
+++
+## How to view feedback history
+
+There are two ways to view feedback history. You can select the feedback history button from the left panel to see a feedback list modal. It lists all the feedback you've given before, either for single series or dimension filters.
++
+Another way to view feedback history is from a series. You will see several buttons in the upper right corner of each series. Select the show feedback button, and the line will switch from showing anomaly points to showing feedback entries. The green flag represents a change point, and the blue points are other feedback points. You can also select them to get a feedback list modal that lists the details of the feedback given for this point.
+++
+> [!NOTE]
+> Anyone who has access to the metric is permitted to give feedback, so you may see feedback given by other datafeed owners. If you edit the same point as someone else, your feedback will overwrite the previous feedback entry.
+
+## Next steps
+- [Diagnose an incident](diagnose-an-incident.md).
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
+- [Configure alerts and get notifications using a hook](../how-tos/alerts.md)
applied-ai-services Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/how-tos/configure-metrics.md
+
+ Title: Configure your Metrics Advisor instance using the web portal
+
+description: How to configure your Metrics Advisor instance and fine-tune the anomaly detection results.
+ Last updated: 09/10/2020
+# How to: Configure metrics and fine tune detection configuration
+
+Use this article to start configuring your Metrics Advisor instance using the web portal. To browse the metrics for a specific data feed, go to the **Data feeds** page and select one of the feeds. This will display a list of metrics associated with it.
++
+Select one of the metric names to see its details. In this detailed view, you can switch to another metric in the same data feed using the drop down list in the top right corner of the screen.
+
+When you first view a metric's details, you can load a time series by letting Metrics Advisor choose one for you, or by specifying values to be included for each dimension.
+
+You can also select time ranges, and change the layout of the page.
+
+> [!NOTE]
+> - The start time is inclusive.
+> - The end time is exclusive.
+
+You can click the **Incidents** tab to view anomalies, and find a link to the [Incident hub](diagnose-an-incident.md).
+
+## Tune the detection configuration
+
+A metric can apply one or more detection configurations. There is a default configuration for each metric, which you can edit or add to, according to your monitoring needs.
+
+### Tune the configuration for all series in current metric
+
+This configuration will be applied to all the series in this metric, except those with a separate configuration. A metric-level configuration is applied by default when data is onboarded, and is shown on the left panel. You can edit the metric-level configuration directly on the metric page.
+
+There are additional parameters like **Direction**, and **Valid anomaly** that can be used to further tune the configuration. You can combine different detection methods as well.
++
+### Tune the configuration for a specific series or group
+
+Click **Advanced configuration** below the metric level configuration options to see the group-level configuration. You can add a configuration for an individual series, or a group of series, by clicking the **+** icon in this window. The parameters are similar to the metric-level configuration parameters, but you need to specify at least one dimension value for a group-level configuration to identify a group of series, and all dimension values for a series-level configuration to identify a specific series.
+
+This configuration will be applied to the group of series or specific series instead of the metric level configuration. After setting the conditions for this group, save it.
++
+### Anomaly detection methods
+
+Metrics Advisor offers multiple anomaly detection methods: **Hard threshold, Smart detection, Change threshold**. You can use one or combine them using logical operators by clicking the **'+'** button.
+
+**Hard threshold**
+
+ Hard threshold is a basic method for anomaly detection. You can set an upper and/or lower bound to determine the expected value range. Any point that falls outside of the boundary will be identified as an anomaly.
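The rule is simple enough to express directly. This is an illustrative sketch of the described behavior, not the service's code:

```python
def hard_threshold(points, lower=None, upper=None):
    """Flag any point outside the configured bounds as an anomaly."""
    return [
        (lower is not None and p < lower) or (upper is not None and p > upper)
        for p in points
    ]

# With bounds [0, 10], the values 12 and -1 are flagged:
flags = hard_threshold([5, 12, -1], lower=0, upper=10)
```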
+
+**Smart detection**
+
+Smart detection is powered by machine learning that learns patterns from historical data, and uses them for future detection. When using this method, the **Sensitivity** is the most important parameter for tuning the detection results. You can drag it to a smaller or larger value to affect the visualization on the right side of the page. Choose one that fits your data and save it.
++
+In smart detection mode, the sensitivity and boundary version parameters are used to fine-tune the anomaly detection result.
+
+Sensitivity can affect the width of the expected value range of each point. When increased, the expected value range will be tighter, and more anomalies will be reported:
++
+When the sensitivity is turned down, the expected value range will be wider, and fewer anomalies will be reported:
++
+**Change threshold**
+
+Change threshold is normally used when metric data generally stays around a certain range. The threshold is set according to **Change percentage**. The **Change threshold** mode is able to detect anomalies in the following scenarios:
+
+* Your data is normally stable and smooth. You want to be notified when there are fluctuations.
+* Your data is normally quite unstable and fluctuates a lot. You want to be notified when it becomes too stable or flat.
+
+Use the following steps to use this mode:
+
+1. Select **Change threshold** as your anomaly detection method when you set the anomaly detection configurations for your metrics or time series.
+
+ :::image type="content" source="../media/metrics/change-threshold.png" alt-text="change threshold":::
+
+2. Select the **out of the range** or **in the range** parameter based on your scenario.
+
+ If you want to detect fluctuations, select **out of the range**. For example, with the settings below, any data point that changes over 10% compared to the previous one will be detected as an outlier.
+ :::image type="content" source="../media/metrics/out-of-the-range.png" alt-text="out of range parameter":::
+
+ If you want to detect flat lines in your data, select **in the range**. For example, with the settings below, any data point that changes within 0.01% compared to the previous one will be detected as an outlier. Because the threshold is so small (0.01%), it detects flat lines in the data as outliers.
+
+ :::image type="content" source="../media/metrics/in-the-range.png" alt-text="In range parameter":::
+
+3. Set the percentage of change that will count as an anomaly, and which previously captured data points will be used for comparison. This comparison is always between the current data point, and a single data point N points before it.
+
+ **Direction** is only valid if you're using the **out of the range** mode:
+
+ * **Up** configures detection to only detect anomalies when (current data point) - (comparison data point) > **+** threshold percentage.
+ * **Down** configures detection to only detect anomalies when (current data point) - (comparison data point) < **-** threshold percentage.
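The out-of-range and in-range rules above can be modeled in a few lines. This is an illustrative approximation of the described behavior, not the service's implementation:

```python
def change_threshold(points, pct, n=1, mode="out", direction="both"):
    """Flag points by percent change versus the point n positions back.

    mode "out" flags changes larger than pct%; mode "in" flags changes
    within pct% (flat lines). Illustrative model only.
    """
    flags = [False] * len(points)
    for i in range(n, len(points)):
        prev = points[i - n]
        if prev == 0:
            continue  # skip division by zero in this sketch
        change = (points[i] - prev) / abs(prev) * 100
        if mode == "in":
            flags[i] = abs(change) <= pct          # too stable / flat
        elif direction == "up":
            flags[i] = change > pct                # only upward jumps
        elif direction == "down":
            flags[i] = change < -pct               # only downward drops
        else:
            flags[i] = abs(change) > pct           # any large change
    return flags
```

For [100, 120, 101] with a 10% threshold in "out" mode, both the +20% jump and the ~-16% drop are flagged; in "in" mode with a 0.01% threshold, only near-flat steps are flagged.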
+++
+## Preset events
+
+Sometimes, expected events and occurrences (such as holidays) can generate anomalous data. Using preset events, you can add flags to the anomaly detection output, during specified times. This feature should be configured after your data feed is onboarded. Each metric can only have one preset event configuration.
+
+> [!Note]
+> Preset event configuration will take holidays into consideration during anomaly detection, and may change your results. It will be applied to the data points ingested after you save the configuration.
+
+Click the **Configure Preset Event** button next to the metrics drop-down list on each metric details page.
+
+
+In the window that appears, configure the options according to your usage. Make sure **Enable holiday event** is selected to use the configuration.
+
+The **Holiday event** section helps you suppress unnecessary anomalies detected during holidays. There are two options for the **Strategy** setting:
+
+* **Suppress holiday**: Suppresses all anomalies and alerts in anomaly detection results during the holiday period.
+* **Holiday as weekend**: Calculates the average expected values of several corresponding weekends before the holiday, and bases the anomaly status off of these values.
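A simplified reading of the **Holiday as weekend** strategy can be sketched as follows. The 20% tolerance here is an arbitrary illustration, not the service's actual logic:

```python
def holiday_anomaly(value, prior_weekend_values, tolerance_pct=20.0):
    """Judge a holiday point against the average of prior weekends.

    Returns (is_anomaly, expected). The expected value is the mean of
    several corresponding weekends before the holiday, as the strategy
    describes; the tolerance is a made-up illustration.
    """
    expected = sum(prior_weekend_values) / len(prior_weekend_values)
    deviation = abs(value - expected) / expected * 100
    return deviation > tolerance_pct, expected

# A holiday value of 150 against weekends near 100 is flagged:
flag, expected = holiday_anomaly(150, [95, 100, 105])
```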
+
+There are several other values you can configure:
+
+|Option |Description |
+|||
+|**Choose one dimension as country** | Choose a dimension that contains country information. For example, a country code. |
+|**Country code mapping** | The mapping between a standard [country code](https://wikipedia.org/wiki/ISO_3166-1_alpha-2), and chosen dimension's country data. |
+|**Holiday options** | Whether to take into account all holidays, only PTO (Paid Time Off) holidays, or only Non-PTO holidays. |
+|**Days to expand** | The impacted days before and after a holiday. |
++
+The **Cycle event** section can be used in some scenarios to help reduce unnecessary alerts by using cyclic patterns in the data. For example:
+
+- Metrics that have multiple patterns or cycles, such as both a weekly and monthly pattern.
+- Metrics that do not have a clear pattern, but the data is comparable Year over Year (YoY), Month over Month (MoM), Week Over Week (WoW), or Day Over Day (DoD).
+
+Not all options are selectable for every granularity. The available options per granularity are below (✔ for available, X for unavailable):
+
+| Granularity | YoY | MoM | WoW | DoD |
+|:-|:-|:-|:-|:-|
+| Yearly | X | X | X | X |
+| Monthly | X | X | X | X |
+| Weekly | ✔ | X | X | X |
+| Daily | ✔ | ✔ | ✔ | X |
+| Hourly | ✔ | ✔ | ✔ | ✔ |
+| Minutely | X | X | X | X |
+| Secondly | X | X | X | X |
+| Custom* | ✔ | ✔ | ✔ | ✔ |
+
+
+\*When using a custom granularity in seconds, these options are only available if the metric's granularity is longer than one hour and less than one day.
+
+Cycle event is used to reduce anomalies if they follow a cyclic pattern, but it will report an anomaly if multiple data points don't follow the pattern. **Strict mode** is used to enable anomaly reporting if even one data point doesn't follow the pattern.
++
+## View recent incidents
+
+Metrics Advisor detects anomalies on all your time series data as they're ingested. However, not all anomalies need to be escalated, because they might not have a big impact. Aggregation will be performed on anomalies to group related ones into incidents. You can view these incidents from the **Incident** tab in metrics details page.
+
+Click on an incident to go to the **Incidents analysis** page where you can see more details about it. Click on **Manage incidents in new Incident hub**, to find the [Incident hub](diagnose-an-incident.md) page where you can find all incidents under the specific metric.
+
+## Subscribe to anomalies for notification
+
+If you'd like to get notified whenever an anomaly is detected, you can subscribe to alerts for the metric, using a hook. See [Configure alerts and get notifications using a hook](alerts.md) for more information.
++
+## Next steps
+- [Configure alerts and get notifications using a hook](alerts.md)
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Diagnose an incident](diagnose-an-incident.md).
+
applied-ai-services Credential Entity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/how-tos/credential-entity.md
+
+ Title: Create a credential entity
+
+description: How to create a credential entity to manage your credentials securely.
+ Last updated: 06/22/2021
+# How-to: Create a credential entity
+
+When onboarding a data feed, you select an authentication type. Some authentication types, like *Azure SQL Connection String* and *Service Principal*, need a credential entity to store credential-related information so that you can manage your credentials securely. This article describes how to create a credential entity for different credential types in Metrics Advisor.
+
+
+## Basic procedure: Create a credential entity
+
+You can create a **credential entity** to store credential-related information and use it to authenticate to your data sources. You can share the credential entity with others, enabling them to connect to your data sources without sharing the real credentials. A credential entity can be created in the 'Adding data feed' tab or the 'Credential entity' tab. After creating a credential entity for a specific authentication type, you can simply choose that credential entity when adding a new data feed, which is helpful when creating multiple data feeds. The general procedure for creating and using a credential entity is shown below:
+
+1. Select '+' to create a new credential entity in the 'Adding data feed' tab (you can also create one in the 'Credential entity' tab).
+
+ ![create credential entity](../media/create-credential-entity.png)
+
+2. Set the credential entity name, description (if needed), credential type (equivalent to the *authentication type*), and other settings.
+
+ ![set credential entity](../media/set-credential-entity.png)
+
+3. After creating a credential entity, you can choose it when specifying authentication type.
+
+ ![choose credential entity](../media/choose-credential-entity.png)
+
+There are **four credential types** in Metrics Advisor: Azure SQL Connection String, Azure Data Lake Storage Gen2 Shared Key Entity, Service Principal, and Service Principal from Key Vault. For the settings of each credential type, see the following instructions.
+
+## Azure SQL Connection String
+
+You should set the **Name** and **Connection String**, then select 'create'.
+
+![set credential entity for sql connection string](../media/credential-entity/credential-entity-sql-connection-string.png)
+
+## Azure Data Lake Storage Gen2 Shared Key Entity
+
+You should set the **Name** and **Account Key**, then select 'create'. The account key can be found in the **Access keys** settings of your Azure Storage Account (Azure Data Lake Storage Gen2) resource.
+
+![set credential entity for data lake](../media/credential-entity/credential-entity-data-lake.png)
+
+## Service principal
+
+To create a service principal for your data source, you can follow the detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md). After creating a service principal, you need to fill in the following configurations in the credential entity.
+
+![sp credential entity](../media/credential-entity/credential-entity-service-principal.png)
+
+* **Name:** Set a name for your service principal credential entity.
+* **Tenant ID & Client ID:** After creating a service principal in Azure portal, you can find `Tenant ID` and `Client ID` in **Overview**.
+
+ ![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png)
+
+* **Client Secret:** After creating a service principal in the Azure portal, go to **Certificates & Secrets** to create a new client secret, and use its **value** as the `Client Secret` in the credential entity. (Note: The value only appears once, so be sure to store it somewhere safe.)
++
+ ![sp Client secret value](../media/credential-entity/sp-secret-value.png)
+
+## <span id="sp-from-kv">Service principal from Key Vault</span>
+
+There are several steps to create a service principal from key vault.
+
+**Step 1. Create a Service Principal and grant it access to your database.** You can follow detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md), in creating service principal section for each data source.
+
+After creating a service principal in Azure portal, you can find `Tenant ID` and `Client ID` in **Overview**. The **Directory (tenant) ID** should be `Tenant ID` in credential entity configurations.
+
+![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png)
+
+**Step 2. Create a new client secret.** Go to **Certificates & Secrets** to create a new client secret; its **value** will be used in the next steps. (Note: The value only appears once, so be sure to store it somewhere safe.)
+
+![sp Client secret value](../media/credential-entity/sp-secret-value.png)
+
+**Step 3. Create a key vault.** In [Azure portal](https://ms.portal.azure.com/#home), select **Key vaults** to create one.
+
+![create a key vault in azure portal](../media/credential-entity/create-key-vault.png)
+
+After creating a key vault, the **Vault URI** is the `Key Vault Endpoint` in MA (Metrics Advisor) credential entity.
+
+![key vault endpoint](../media/credential-entity/key-vault-endpoint.png)
+
+**Step 4. Create secrets for Key Vault.** In the Azure portal for the key vault, generate two secrets in **Settings->Secrets**.
+The first is for the `Service Principal Client Id`, the other is for the `Service Principal Client Secret`; both of their names will be used in the credential entity configurations.
+
+![generate secrets](../media/credential-entity/generate-secrets.png)
+
+* **Service Principal Client ID:** Set a `Name` for this secret, the name will be used in credential entity configuration, and the value should be your Service Principal `Client ID` in **Step 1**.
+
+ ![secret1: sp client id](../media/credential-entity/secret-1-sp-client-id.png)
+
+* **Service Principal Client Secret:** Set a `Name` for this secret, the name will be used in credential entity configuration, and the value should be your Service Principal `Client Secret Value` in **Step 2**.
+
+ ![secret2: sp client secret](../media/credential-entity/secret-2-sp-secret-value.png)
+
+At this point, the *client ID* and *client secret* of the service principal are stored in Key Vault. Next, you need to create another service principal that is used to access the key vault. Therefore, you should **create two service principals**: one whose client ID and client secret are stored in a key vault, and another that is used to access the key vault.
+
+**Step 5. Create a service principal to access the key vault.**
+
+1. Go to [Azure portal AAD (Azure Active Directory)](https://portal.azure.com/?trace=diagnostics&feature.customportal=false#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) and create a new registration.
+
+ ![create a new registration](../media/credential-entity/create-registration.png)
+
+ After creating the service principal, the **Application (client) ID** in Overview will be the `Key Vault Client ID` in credential entity configuration.
+
+2. In **Manage->Certificates & Secrets**, create a client secret by selecting 'New client secret'. Then you should **copy down the value**, because it appears only once. The value is `Key Vault Client Secret` in credential entity configuration.
+
+ ![add client secret](../media/credential-entity/add-client-secret.png)
+
+**Step 6. Grant the service principal access to Key Vault.** Go to the key vault resource you created. In **Settings->Access policies**, select 'Add Access Policy' to create a connection between the key vault and the second service principal from **Step 5**, then select 'Save'.
+
+![grant sp to key vault](../media/credential-entity/grant-sp-to-kv.png)
++
+## Configurations conclusion
+To conclude, the credential entity configurations in Metrics Advisor for *Service Principal from Key Vault*, and how to get them, are shown in the table below:
+
+| Configuration | How to get |
+|-| |
+| Key Vault Endpoint | **Step 3:** Vault URI of key vault. |
+| Tenant ID | **Step 1:** Directory (tenant) ID of your first service principal. |
+| Key Vault Client ID | **Step 5:** The Application (client) ID of your second service principal. |
+| Key Vault Client Secret | **Step 5:** The client secret value of your second service principal. |
+| Service Principal Client ID Name | **Step 4:** The secret name you set for Client ID. |
+| Service Principal Client Secret Name | **Step 4:** The secret name you set for Client Secret Value. |
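To keep the six values straight, the table above can be mirrored as a small helper. The dictionary keys are the table's labels for readability, not the REST property names, and the step comments map each value back to where it was produced:

```python
def key_vault_credential_entity(vault_uri, tenant_id,
                                kv_client_id, kv_client_secret,
                                sp_client_id_secret_name,
                                sp_client_secret_secret_name):
    """Assemble the six configuration values from the table above.

    Keys are descriptive labels for illustration only.
    """
    return {
        "Key Vault Endpoint": vault_uri,                                  # Step 3
        "Tenant ID": tenant_id,                                           # Step 1
        "Key Vault Client ID": kv_client_id,                              # Step 5
        "Key Vault Client Secret": kv_client_secret,                      # Step 5
        "Service Principal Client ID Name": sp_client_id_secret_name,     # Step 4
        "Service Principal Client Secret Name": sp_client_secret_secret_name,  # Step 4
    }

# Hypothetical values for illustration:
cfg = key_vault_credential_entity(
    "https://my-kv.vault.azure.net/", "tenant-1",
    "app-2", "secret-2", "sp-client-id", "sp-client-secret",
)
```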
++
+## Next steps
+
+- [Onboard your data](onboard-your-data.md)
+- [Connect different data sources](../data-feeds-from-different-sources.md)
applied-ai-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/how-tos/diagnose-an-incident.md
+
+ Title: Diagnose an incident using Metrics Advisor
+
+description: Learn how to diagnose an incident using Metrics Advisor, and get detailed views of anomalies in your data.
+ Last updated: 04/15/2021
+# Diagnose an incident using Metrics Advisor
+
+## What is an incident?
+
+When anomalies are detected on multiple time series within one metric at a particular timestamp, Metrics Advisor will automatically group the anomalies that **share the same root cause** into one incident. An incident usually indicates a real issue; Metrics Advisor performs analysis on top of it and provides automatic root cause analysis insights.
+
+This significantly reduces the effort of viewing each individual anomaly, and quickly surfaces the most important contributing factor to an issue.
+
+An alert generated by Metrics Advisor may contain multiple incidents and each incident may contain multiple anomalies captured on different time series at the same timestamp.
+
+## Paths to diagnose an incident
+
+- **Diagnose from an alert notification**
+
+ If you've configured an email or Teams hook and applied at least one alerting configuration, you will receive continuous alert notifications escalating incidents that are analyzed by Metrics Advisor. Within the notification, there's an incident list and a brief description. Each incident has a **"Diagnose"** button; selecting it will direct you to the incident detail page to view diagnostic insights.
+
+ :::image type="content" source="../media/diagnostics/alert-notification.png" alt-text="Diagnose from an alert notification":::
+
+- **Diagnose from an incident in "Incident hub"**
+
+ There's a central place in Metrics Advisor that gathers all incidents that have been captured and makes it easy to track any ongoing issues. Selecting the **Incident Hub** tab in the left navigation bar will list all incidents within the selected metrics. Within the incident list, select one to view detailed diagnostic insights.
+
+ :::image type="content" source="../media/diagnostics/incident-list.png" alt-text="Diagnose from an incident in Incident hub":::
+
+- **Diagnose from an incident listed in metrics page**
+
+ Within the metrics detail page, there's a tab named **Incidents** which lists the latest incidents captured for this metric. The list can be filtered by the severity of the incidents or the dimension value of the metrics.
+
+ Selecting one incident in the list will direct you to the incident detail page to view diagnostic insights.
+
+ :::image type="content" source="../media/diagnostics/incident-in-metrics.png" alt-text="Diagnose from an incident listed in metrics page":::
+
+## Typical diagnostic flow
+
+After being directed to the incident detail page, you can take advantage of the insights automatically analyzed by Metrics Advisor to quickly locate the root cause of an issue, or use the analysis tool to further evaluate the issue's impact. There are three sections in the incident detail page, corresponding to three major steps of diagnosing an incident.
+
+### Step 1. Check summary of current incident
+
+The first section lists a summary of the current incident, including basic information, actions & tracings, and an analyzed root cause.
+
+- Basic information includes the "top impacted series" with a diagram, "impact start & end time", "incident severity", and "total anomalies included". By reading this, you can get a basic understanding of the ongoing issue and its impact.
+- Actions & tracings facilitate team collaboration on an ongoing incident. Sometimes an incident needs cross-team effort to analyze and resolve. Everyone who has permission to view the incident can add an action or a tracing event.
+
+ For example, after diagnosing the incident and identifying the root cause, an engineer can add a tracing item with the type "customized" and enter the root cause in the comment section, leaving the status as "Active". Other teammates can then share the same information and know that someone is working on the fix. You can also add an "Azure DevOps" item to track the incident with a specific task or bug.
+- Analyzed root cause is an automatically generated result. Metrics Advisor analyzes all anomalies captured on time series within one metric, across different dimension values at the same timestamp, then performs correlation and clustering to group related anomalies together and generate root cause advice.
+
+For metrics with multiple dimensions, it's common for multiple anomalies to be detected at the same time, even though they may share the same root cause. Instead of analyzing every anomaly one by one, leveraging the **Analyzed root cause** is usually the most efficient way to diagnose the current incident.
+### Step 2. View cross-dimension diagnostic insights
+
+After getting the basic info and automatic analysis insights, you can use the **Diagnostic tree** to get more detailed information on the abnormal status of other dimensions within the same metric, in a holistic way.
+
+For metrics with multiple dimensions, Metrics Advisor categorizes the time series into a hierarchy named the **Diagnostic tree**. For example, a "revenue" metric might be monitored with two dimensions: "region" and "category". In addition to the concrete dimension values, there needs to be an **aggregated** dimension value, such as **"SUM"**. The time series with "region" = **"SUM"** and "category" = **"SUM"** is then categorized as the root node of the tree. Whenever an anomaly is captured at the **"SUM"** dimension, it can be drilled down and analyzed to locate which specific dimension value contributed the most to the parent node's anomaly. Select each node to expand it and see detailed information.
+- To enable an "aggregated" dimension value in your metrics
+
+ Metrics Advisor supports performing a "Roll-up" on dimensions to calculate an "aggregated" dimension value. The diagnostic tree supports diagnosing on the **"SUM"**, **"AVG"**, **"MAX"**, **"MIN"**, and **"COUNT"** aggregations. To enable an "aggregated" dimension value, enable the "Roll-up" function during data onboarding. Make sure your metrics are **mathematically computable** and that the aggregated dimension has real business value.
+
+ :::image type="content" source="../media/diagnostics/automatic-roll-up.png" alt-text="Roll-up settings":::
+
+- If there's no "aggregated" dimension value in your metrics
+
+ If there's no "aggregated" dimension value in your metrics and the "Roll-up" function wasn't enabled during data onboarding, no metric value is calculated for the "aggregated" dimension. It shows up as a gray node in the tree and can still be expanded to view its child nodes.
+
+#### Legend of diagnostic tree
+
+There are three kinds of nodes in the diagnostic tree:
+- **Blue node**, which corresponds to a time series with real metric value.
+- **Gray node**, which corresponds to a virtual time series with no metric value; it's a logical node.
+- **Red node**, which corresponds to the top impacted time series of the current incident.
+
+For each node, the abnormal status is described by the color of the node border:
+- **Red border** means there's an anomaly captured on the time series corresponding to the incident timestamp.
+- **Non-red border** means there's no anomaly captured on the time series corresponding to the incident timestamp.
+
+#### Display mode
+
+There are two display modes for a diagnostic tree: only show anomaly series or show major proportions.
+
+- **Only show anomaly series mode** lets you focus on the current anomalies captured on different series, and diagnose the root cause of the top impacted series.
+- **Show major proportions** lets you check the abnormal status of the major proportions of the top impacted series. In this mode, the tree shows both series with detected anomalies and series without, with more focus on important series.
+
+#### Analyze options
+
+- **Show delta ratio**
+
+    "Delta ratio" is the percentage of the current node's delta compared to the parent node's delta. Here's the formula:
+
+    (real value of current node - expected value of current node) / (real value of parent node - expected value of parent node) * 100%
+
+    This is used to analyze the major contribution to the parent node's delta.
+
+- **Show value proportion**
+
+    "Value proportion" is the percentage of the current node's value compared to the parent node's value. Here's the formula:
+
+    (real value of current node / real value of parent node) * 100%
+
+    This is used to evaluate the current node's proportion of the whole.
+
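The two formulas above can be sketched as plain functions; the node values used in the example are hypothetical:

```python
def delta_ratio(node_real, node_expected, parent_real, parent_expected):
    """(real value of current node - expected value of current node) /
    (real value of parent node - expected value of parent node) * 100%"""
    return (node_real - node_expected) / (parent_real - parent_expected) * 100

def value_proportion(node_real, parent_real):
    """(real value of current node / real value of parent node) * 100%"""
    return node_real / parent_real * 100

# Hypothetical values: the parent's delta is -200 and this node's delta is -150,
# so this node accounts for 75% of the parent's drop.
print(delta_ratio(850, 1000, 4800, 5000))  # 75.0
print(value_proportion(850, 4800))
```

A node with a high delta ratio but a small value proportion is a small series that drives most of the parent's change, which is exactly the pattern the tree helps you find.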
+By using the "Diagnostic tree", customers can narrow the root cause of the current incident down to a specific dimension. This significantly reduces the effort of viewing each individual anomaly, or pivoting through different dimensions to find the major anomaly contribution.
+
+### Step 3. View cross-metrics diagnostic insights using "Metrics graph"
+
+Sometimes it's hard to analyze an issue by checking the abnormal status of a single metric; you need to correlate multiple metrics together. To do so, configure a **Metrics graph**, which indicates the relationships between metrics. Refer to [How to build a metrics graph](metrics-graph.md) to get started.
+
+#### Check anomaly status on root cause dimension within "Metrics graph"
+
+Using the cross-dimension diagnostic result above, the root cause is narrowed down to a specific dimension value. You can then use the "Metrics graph", filtered by the analyzed root cause dimension, to check the anomaly status of other metrics.
+
+For example, consider an incident captured on a "revenue" metric, where the top impacted series is the global region ("region" = "SUM"). Cross-dimension diagnostics locate the root cause at "region" = "Karachi". A pre-configured metrics graph includes the metrics "revenue", "cost", "DAU", "PLT (page load time)" and "CHR (cache hit rate)".
+
+Metrics Advisor automatically filters the metrics graph by the root cause dimension "region" = "Karachi" and displays the anomaly status of each metric. By analyzing the relationship between metrics and their anomaly statuses, customers can gain further insight into the final root cause.
+#### Auto related anomalies
+
+By applying the root cause dimension filter on the metrics graph, anomalies on each metric at the timestamp of the current incident are automatically correlated. Those anomalies should be related to the identified root cause of the current incident.
+## Next steps
+
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
applied-ai-services Further Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/how-tos/further-analysis.md
+
+ Title: Further analyze an incident and evaluate impact
+
+description: Learn how to leverage analysis tools to further analyze an incident.
+Last updated: 04/15/2021
+# Further analyze an incident and evaluate impact
+
+## Metrics drill down by dimensions
+
+When you're viewing incident information, you may need to get more detailed information, for example, for different dimensions, and timestamps. If your data has one or more dimensions, you can use the drill down function to get a more detailed view.
+
+To use the drill down function, click on the **Metric drilling** tab in the **Incident hub**.
+The **Dimensions** setting is a list of dimensions for an incident; you can select other available dimension values for each one to change the series being examined. The **Timestamp** setting lets you view the current incident at different moments in time.
+
+### Select drilling options and choose a dimension
+
+There are two types of drill down options: **Drill down** and **Horizontal comparison**.
+
+> [!NOTE]
+> - For drill down, you can explore the data from different dimension values, except the currently selected dimensions.
+> - For horizontal comparison, you can explore the data from different dimension values, except the all-up dimensions.
+### Value comparison for different dimension values
+
+The second section of the drill down tab is a table with comparisons for different dimension values. It includes the value, baseline value, difference value, delta value and whether it is an anomaly.
+
+### Value and expected value comparisons for different dimension value
+
+The third section of the drill down tab is a histogram of the values and expected values for different dimension values, sorted by the difference between value and expected value, so you can easily find the unexpected value with the biggest impact. For example, aside from the all-up value, **US7** contributes the most to the anomaly.
+### Raw value visualization
+The last part of the drill down tab is a line chart of the raw values, so you don't need to navigate to the metric page to view details.
+## Compare time series
+
+Sometimes when an anomaly is detected on a specific time series, it's helpful to compare it with multiple other series in a single visualization.
+Click on the **Compare tools** tab, and then click on the blue **+ Add** button.
+Select a series from your data feed. You can choose the same granularity or a different one. Select the target dimensions and load the series trend, then click **Ok** to compare it with a previous series. The series will be put together in one visualization. You can continue to add more series for comparison and get further insights. Click the drop down menu at the top of the **Compare tools** tab to compare the time series data over a time-shifted period.
+
+> [!WARNING]
+> To make a comparison, time series data analysis may require shifts in data points so the granularity of your data must support it. For example, if your data is weekly and you use the **Day over day** comparison, you will get no results. In this example, you would use the **Month over month** comparison instead.
+
+After selecting a time-shifted comparison, you can select whether you want to compare the data values, the delta values, or the percentage delta.
+
+## View similar anomalies using Time Series Clustering
+
+When viewing an incident, you can use the **Similar time-series-clustering** tab to see the various series associated with it. Series in one group are summarized together. This feature is only available if the following requirements are met:
+
+- Metrics must have one or more dimensions or dimension values.
+- The series within one metric must have a similar trend.
+
+Available dimensions are listed at the top of the tab, and you can make a selection to specify the series.
+
applied-ai-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/how-tos/manage-data-feeds.md
+
+ Title: Manage data feeds in Metrics Advisor
+
+description: Learn how to manage data feeds that you've added to Metrics Advisor.
+Last updated: 04/20/2021
+# How to: Manage your data feeds
+
+This article guides you through managing your onboarded data feeds in Metrics Advisor.
+
+## Edit a data feed
+
+> [!NOTE]
+> The following details cannot be changed after a data feed has been created.
+> * Data feed ID
+> * Created Time
+> * Dimension
+> * Source Type
+> * Granularity
+
+Only the administrator of a data feed is allowed to make changes to it.
+
+On the data feed list page, you can **pause, reactivate, or delete** a data feed:
+
+* **Pause/Reactivate**: Select the **Pause/Play** button to pause/reactivate a data feed.
+
+* **Delete**: Select the **Delete** button to delete a data feed.
+
+If you change the ingestion start time, you need to verify the schema again. You can change it by clicking **Edit** in the data feed detail page.
+
+## Backfill your data feed
+
+Select the **Backfill** button to trigger an immediate ingestion for a timestamp range, to fix a failed ingestion or override the existing data.
+- The start time is inclusive.
+- The end time is exclusive.
+- Anomaly detection is re-triggered on selected range only.
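The inclusive-start/exclusive-end rule can be sketched for a daily feed; the helper below is illustrative, not part of the Metrics Advisor API:

```python
from datetime import date, timedelta

def backfill_timestamps(start, end):
    """Timestamps re-ingested by a backfill on a daily feed:
    the start time is inclusive and the end time is exclusive."""
    days = []
    d = start
    while d < end:
        days.append(d)
        d += timedelta(days=1)
    return days

# Backfilling [2021-04-01, 2021-04-04) re-ingests exactly three days:
# the 1st, 2nd, and 3rd; the 4th is excluded.
print(backfill_timestamps(date(2021, 4, 1), date(2021, 4, 4)))
```
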
+## Manage permission of a data feed
+
+Workspace access is controlled by the Metrics Advisor resource, which uses Azure Active Directory for authentication. Another layer of permission control is applied to metric data.
+
+Metrics Advisor lets you grant permissions to different groups of people on different data feeds. There are two types of roles:
+
+- **Administrator**: Has full permissions to manage a data feed, including modify and delete.
+- **Viewer**: Has access to a read-only view of the data feed.
+
+
+## Advanced settings
+
+There are several optional advanced settings available when creating a new data feed; they can be modified on the data feed detail page.
+
+### Ingestion options
+
+* **Ingestion time offset**: By default, data is ingested according to the specified granularity. For example, a metric with a *daily* timestamp will be ingested one day after its timestamp. You can use the offset to delay the time of ingestion with a *positive* number, or advance it with a *negative* number.
+
+* **Max concurrency**: Set this parameter if your data source supports limited concurrency. Otherwise leave at the default setting.
+
+* **Stop retry after**: If data ingestion has failed, it's retried automatically within a period. The beginning of the period is the time when the first data ingestion happened, and its length is defined according to the granularity. If you leave the default value (-1), the value is determined according to the granularity as below.
+
+ | Granularity | Stop Retry After |
+    | :--- | :--- |
+ | Daily, Custom (>= 1 Day), Weekly, Monthly, Yearly | 7 days |
+ | Hourly, Custom (< 1 Day) | 72 hours |
+
+* **Min retry interval**: You can specify the minimum interval when retrying to pull data from the source. If you leave the default value (-1), the retry interval is determined according to the granularity as below.
+
+ | Granularity | Minimum Retry Interval |
+    | :--- | :--- |
+ | Daily, Custom (>= 1 Day), Weekly, Monthly | 30 minutes |
+ | Hourly, Custom (< 1 Day) | 10 minutes |
+ | Yearly | 1 day |
+
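The way the -1 defaults in the two tables above resolve can be sketched as a lookup; the helper names are hypothetical:

```python
def default_stop_retry_after(granularity):
    """Default 'Stop retry after' period in hours when left at -1, per the table above."""
    if granularity in ("Hourly", "Custom (< 1 Day)"):
        return 72            # 72 hours
    return 7 * 24            # Daily, Weekly, Monthly, Yearly, Custom (>= 1 Day): 7 days

def default_min_retry_interval(granularity):
    """Default minimum retry interval in minutes when left at -1, per the table above."""
    if granularity in ("Hourly", "Custom (< 1 Day)"):
        return 10
    if granularity == "Yearly":
        return 24 * 60       # 1 day
    return 30                # Daily, Weekly, Monthly, Custom (>= 1 Day)

print(default_stop_retry_after("Daily"), default_min_retry_interval("Yearly"))
```
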
+### Fill gap when detecting:
+
+> [!NOTE]
+> This setting won't affect your data source and will not affect the data charts displayed on the portal. The auto-filling only occurs during anomaly detection.
+
+Sometimes series are not continuous. When there are missing data points, Metrics Advisor will use the specified value to fill them before anomaly detection to improve accuracy.
+The options are:
+
+* Using the value from the previous actual data point. This is used by default.
+* Using a specific value.
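
A minimal sketch of the two fill options, assuming missing points are represented as `None`:

```python
def fill_gaps(points, fill_value=None):
    """Fill missing points (None) before anomaly detection: use the value from
    the previous actual data point by default, or a specific value if given."""
    filled, last = [], None
    for p in points:
        if p is None:
            p = last if fill_value is None else fill_value
        else:
            last = p  # remember the latest actual data point
        filled.append(p)
    return filled

print(fill_gaps([10, None, 12, None]))     # previous-value fill: [10, 10, 12, 12]
print(fill_gaps([10, None, 12, None], 0))  # specific-value fill: [10, 0, 12, 0]
```
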
+
+### Action link template:
+
+Action link templates are used to predefine actionable HTTP URLs, which consist of the placeholders `%datafeed`, `%metric`, `%timestamp`, `%detect_config`, and `%tagset`. You can use the template to redirect from an anomaly or an incident to a specific URL to drill down.
+Once you've filled in the action link, a **Go to action link** option appears in the incident list's action menu and the diagnostic tree's right-click menu. Selecting it replaces the placeholders in the action link template with the corresponding values of the anomaly or incident.
+
+| Placeholder | Examples | Comment |
+| - | -- | - |
+| `%datafeed` | - | Data feed ID |
+| `%metric` | - | Metric ID |
+| `%detect_config` | - | Detect config ID |
+| `%timestamp` | - | Timestamp of an anomaly or end time of a persistent incident |
+| `%tagset` | `%tagset`, <br> `[%tagset.get("Dim1")]`, <br> `[ %tagset.get("Dim1", "filterVal")]` | Dimension values of an anomaly or top anomaly of an incident. <br> The `filterVal` is used to filter out matching values within the square brackets. |
+
+Examples:
+
+* If the action link template is `https://action-link/metric/%metric?detectConfigId=%detect_config`:
+ * The action link `https://action-link/metric/1234?detectConfigId=2345` would go to anomalies or incidents under metric `1234` and detect config `2345`.
+
+* If the action link template is `https://action-link?[Dim1=%tagset.get('Dim1','')&][Dim2=%tagset.get('Dim2','')]`:
+ * The action link would be `https://action-link?Dim1=Val1&Dim2=Val2` when the anomaly is `{ "Dim1": "Val1", "Dim2": "Val2" }`.
+ * The action link would be `https://action-link?Dim2=Val2` when the anomaly is `{ "Dim1": "", "Dim2": "Val2" }`, since `[Dim1=***&]` is skipped for the dimension value empty string.
+
+* If the action link template is `https://action-link?filter=[Name/Dim1 eq '%tagset.get('Dim1','')' and ][Name/Dim2 eq '%tagset.get('Dim2','')']`:
+  * The action link would be `https://action-link?filter=Name/Dim1 eq 'Val1' and Name/Dim2 eq 'Val2'` when the anomaly is `{ "Dim1": "Val1", "Dim2": "Val2" }`.
+  * The action link would be `https://action-link?filter=Name/Dim2 eq 'Val2'` when the anomaly is `{ "Dim1": "", "Dim2": "Val2" }`, since `[Name/Dim1 eq '***' and ]` is skipped for the empty dimension value.
+
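The substitution rules in the examples above can be sketched as follows. This is a simplified illustration of the template semantics, not the actual implementation, and it only handles the first two example shapes (bracketed segments without nested quotes):

```python
import re

def render_action_link(template, ids, tagset):
    """Simplified action-link rendering: plain placeholders (e.g. %metric) are
    substituted directly; a [bracketed] segment is dropped entirely when any
    %tagset.get('Dim','') it references resolves to the empty string."""
    def sub_tagset(m):
        return tagset.get(m.group(1), m.group(2) or "")

    def sub_bracket(m):
        body = m.group(1)
        dims = re.findall(r"%tagset\.get\('([^']+)'", body)
        if any(tagset.get(d, "") == "" for d in dims):
            return ""  # skip the whole segment for an empty dimension value
        return re.sub(r"%tagset\.get\('([^']+)'(?:,\s*'([^']*)')?\)", sub_tagset, body)

    out = re.sub(r"\[([^\]]*)\]", sub_bracket, template)
    for placeholder, value in ids.items():  # %metric, %detect_config, ...
        out = out.replace(placeholder, value)
    return out

tpl = "https://action-link?[Dim1=%tagset.get('Dim1','')&][Dim2=%tagset.get('Dim2','')]"
print(render_action_link(tpl, {}, {"Dim1": "Val1", "Dim2": "Val2"}))
print(render_action_link(tpl, {}, {"Dim1": "", "Dim2": "Val2"}))
```

Both outputs match the documented examples: the first keeps both query parameters, the second drops the `Dim1` segment.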
+### "Data feed not available" alert settings
+
+A data feed is considered not available if no data is ingested from the source within the grace period, counted from the time the data feed starts ingestion. An alert is triggered in this case.
+
+To configure an alert, you need to [create a hook](alerts.md#create-a-hook) first. Alerts will be sent through the hook configured.
+
+* **Grace period**: The Grace period setting is used to determine when to send an alert if no data points are ingested. The reference point is the time of first ingestion. If an ingestion fails, Metrics Advisor will keep trying at a regular interval specified by the granularity. If it continues to fail past the grace period, an alert will be sent.
+
+* **Auto snooze**: When this option is set to zero, each *Not Available* timestamp triggers an alert. When a value other than zero is specified, that number of consecutive *Not Available* timestamps after the first one are snoozed and don't trigger additional alerts.
+
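One reading of the auto snooze behavior, sketched as a helper; the exact snooze semantics are an assumption based on the description above:

```python
def alert_flags(data_arrived, auto_snooze):
    """For a run of per-timestamp availability statuses (True = data arrived),
    return which timestamps raise a 'Not Available' alert. With auto_snooze = 0
    every unavailable timestamp alerts; otherwise the next `auto_snooze`
    consecutive unavailable timestamps after an alert are snoozed."""
    flags, snoozed = [], 0
    for available in data_arrived:
        if available:
            flags.append(False)
            snoozed = 0          # a successful ingestion resets the snooze window
        elif snoozed > 0:
            flags.append(False)  # snoozed: unavailable, but no alert
            snoozed -= 1
        else:
            flags.append(True)   # alert, then snooze the following timestamps
            snoozed = auto_snooze
    return flags

print(alert_flags([True, False, False, False], auto_snooze=0))  # every miss alerts
print(alert_flags([True, False, False, False], auto_snooze=2))  # only the first miss alerts
```
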
+## Next steps
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Diagnose an incident](diagnose-an-incident.md).
applied-ai-services Metrics Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/how-tos/metrics-graph.md
+
+ Title: Metrics Advisor metrics graph
+
+description: How to configure your Metrics graph and visualize related anomalies in your data.
+Last updated: 09/08/2020
+# How-to: Build a metrics graph to analyze related metrics
+
+Each time series in Metrics Advisor is monitored separately by a model that learns from historical data to predict future trends. Anomalies are detected when a data point falls outside the historical pattern. In some cases, however, several metrics may relate to each other, and anomalies need to be analyzed across multiple metrics. The **Metrics graph** is the tool that helps with this.
+
+For example, suppose you have several metrics that monitor your business from different perspectives; anomaly detection is applied to each of them separately. In a real business case, however, anomalies detected on multiple metrics may be related to each other, and discovering those relations and analyzing the root cause based on them is helpful when addressing real issues. The metrics graph helps automatically correlate anomalies detected on related metrics to accelerate the troubleshooting process.
+
+## Select a metric to add the first node to the graph
+
+Click the **Metrics graph** tab in the navigation bar. The first step for building a metrics graph is to put a node onto the graph. Select a data feed and a metric at the top of the page. A node will appear in the bottom panel.
+## Add a node/relation on existing node
+
+Next, you need to add another node and specify a relation to an existing node(s). Select an existing node and right-click on it. A context menu will appear with several options.
+
+Select **Add relation**, and you will be able to choose another metric and specify the relation type between the two nodes. You can also apply specific dimension filters.
+After repeating the above steps, you will have a metrics graph describing the relations between all related metrics.
+
+There are other actions you can take on the graph:
+1. Delete a node
+2. Go to metrics
+3. Go to Incident Hub
+4. Expand
+5. Delete relation
+
+## Legend of metrics graph
+
+Each node on the graph represents a metric. There are four kinds of nodes in the metrics graph:
+
+- **Green node**: A metric whose current incident severity is low.
+- **Orange node**: A metric whose current incident severity is medium.
+- **Red node**: A metric whose current incident severity is high.
+- **Blue node**: A metric with no anomaly severity.
+## View related metrics anomaly status in incident hub
+
+Once the metrics graph is built, whenever an anomaly is detected on a metric within the graph, you'll be able to view the related anomaly statuses and get a high-level view of the incident.
+
+Click into an incident within the graph and scroll down to **cross metrics analysis**, below the diagnostic information.
+## Next steps
+
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Diagnose an incident](diagnose-an-incident.md).
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
applied-ai-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/how-tos/onboard-your-data.md
+
+ Title: Onboard your data feed to Metrics Advisor
+
+description: How to get started with onboarding your data feeds to Metrics Advisor.
+Last updated: 04/20/2021
+# How-to: Onboard your metric data to Metrics Advisor
+
+Use this article to learn about onboarding your data to Metrics Advisor.
+
+## Data schema requirements and configuration
+
+If you are not sure about some of the terms, refer to [Glossary](../glossary.md).
+
+## Avoid loading partial data
+
+Partial data is caused by inconsistencies between the data stored in Metrics Advisor and the data source. This can happen when the data source is updated after Metrics Advisor has finished pulling data. Metrics Advisor only pulls data from a given data source once.
+
+For example, suppose a metric has been onboarded to Metrics Advisor for monitoring. Metrics Advisor successfully grabs metric data at timestamp A and performs anomaly detection on it. However, if the metric data of that particular timestamp A is refreshed after it has been ingested, the new data value won't be retrieved.
+
+You can try to [backfill](manage-data-feeds.md#backfill-your-data-feed) historical data to mitigate inconsistencies, but this won't trigger new anomaly alerts if alerts for those time points have already been triggered. This process may add additional workload to the system, and is not automatic.
+
+To avoid loading partial data, we recommend two approaches:
+
+* Generate data in one transaction:
+
+ Ensure the metric values for all dimension combinations at the same timestamp are stored to the data source in one transaction. In the above example, wait until data from all data sources is ready, and then load it into Metrics Advisor in one transaction. Metrics Advisor can poll the data feed regularly until data is successfully (or partially) retrieved.
+
+* Delay data ingestion by setting a proper value for the **Ingestion time offset** parameter:
+
+ Set the **Ingestion time offset** parameter for your data feed to delay the ingestion until the data is fully prepared. This can be useful for some data sources which don't support transactions such as Azure Table Storage. See [advanced settings](manage-data-feeds.md#advanced-settings) for details.
+
+## Start by adding a data feed
+
+After signing into your Metrics Advisor portal and choosing your workspace, click **Get started**. Then, on the main page of the workspace, click **Add data feed** from the left menu.
+
+### Add connection settings
+
+#### 1. Basic settings
+Next you'll input a set of parameters to connect your time-series data source.
+* **Source Type**: The type of data source where your time series data is stored.
+* **Granularity**: The interval between consecutive data points in your time series data. Currently Metrics Advisor supports: Yearly, Monthly, Weekly, Daily, Hourly, and Custom. The lowest interval the customization option supports is 300 seconds.
+ * **Seconds**: The number of seconds when *granularityName* is set to *Customize*.
+* **Ingest data since (UTC)**: The baseline start time for data ingestion. `startOffsetInSeconds` is often used to add an offset to help with data consistency.
+
+#### 2. Specify connection string
+Next, you'll need to specify the connection information for the data source. For details on the other fields and connecting different types of data sources, see [How-to: Connect different data sources](../data-feeds-from-different-sources.md).
+
+#### 3. Specify query for a single timestamp
+<!-- Next, you'll need to specify a query to convert the data into the required schema, see [how to write a valid query](../tutorials/write-a-valid-query.md) for more information. -->
+
+For details of different types of data sources, see [How-to: Connect different data sources](../data-feeds-from-different-sources.md).
+
+### Load data
+
+After the connection string and query string are entered, select **Load data**. Metrics Advisor will check the connection and the permission to load data, check for the necessary parameters (@IntervalStart and @IntervalEnd) used in the query, and check the column names from the data source.
+
+If there's an error at this step:
+1. First check if the connection string is valid.
+2. Then check whether there are sufficient permissions and the ingestion worker's IP address is granted access.
+3. Then check if required parameters (@IntervalStart and @IntervalEnd) are used in your query.
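
As a sketch, a single-timestamp query typically uses the two required parameters to bound exactly one granularity chunk of data; the query text and helper below are illustrative, not a Metrics Advisor API:

```python
def build_interval_query(template, interval_start, interval_end):
    """Substitute the required @IntervalStart / @IntervalEnd parameters so the
    query returns one granularity chunk of time-series data."""
    return (template
            .replace("@IntervalStart", f"'{interval_start}'")
            .replace("@IntervalEnd", f"'{interval_end}'"))

# Hypothetical source table and columns.
template = ("SELECT Timestamp, Country, Revenue FROM Sales "
            "WHERE Timestamp >= @IntervalStart AND Timestamp < @IntervalEnd")
print(build_interval_query(template, "2021-04-01T00:00:00Z", "2021-04-02T00:00:00Z"))
```

Note the half-open interval (`>=` start, `<` end), which matches one chunk per ingestion without overlap.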
+### Schema configuration
+
+Once the data schema is loaded, select the appropriate fields.
+
+If the timestamp of a data point is omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as a timestamp. If you get a message that a column cannot be specified as a timestamp, check your query or data source, and whether there are multiple timestamps in the query result - not only in the preview data. When performing data ingestion, Metrics Advisor can only consume one chunk (for example one day, one hour - according to the granularity) of time-series data from the given source each time.
+
+|Selection |Description |Notes |
+|---|---|---|
+| **Display Name** | Name to be displayed in your workspace instead of the original column name. | Optional.|
+|**Timestamp** | The timestamp of a data point. If omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as timestamp. | Optional. Should be specified with at most one column. If you get a **column cannot be specified as Timestamp** error, check your query or data source for duplicate timestamps. |
+|**Measure** | The numeric values in the data feed. For each data feed, you can specify multiple measures but at least one column should be selected as measure. | Should be specified with at least one column. |
+|**Dimension** | Categorical values. A combination of different values identifies a particular single-dimension time series, for example: country, language, tenant. You can select zero or more columns as dimensions. Note: be cautious when selecting a non-string column as a dimension. | Optional. |
+|**Ignore** | Ignore the selected column. | Optional. For data sources that support using a query to get data, there is no 'Ignore' option. |
+
+If you want to ignore columns, we recommend updating your query or data source to exclude those columns. You can also ignore columns using **Ignore columns** and then **Ignore** on the specific columns. If a column should be a dimension and is mistakenly set as *Ignored*, Metrics Advisor may end up ingesting partial data. For example, assume the data from your query is as below:
+
+| Row ID | Timestamp | Country | Language | Income |
+|---|---|---|---|---|
+| 1 | 2019/11/10 | China | ZH-CN | 10000 |
+| 2 | 2019/11/10 | China | EN-US | 1000 |
+| 3 | 2019/11/10 | US | ZH-CN | 12000 |
+| 4 | 2019/11/11 | US | EN-US | 23000 |
+| ... | ...| ... | ... | ... |
+
+If *Country* is a dimension and *Language* is set as *Ignored*, then the first and second rows will have the same dimensions for a timestamp. Metrics Advisor will arbitrarily use one value from the two rows. Metrics Advisor will not aggregate the rows in this case.
+
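The partial-data effect can be sketched with the table above: once *Language* is ignored, the first two rows collide on the same series key, and one value is kept arbitrarily rather than aggregated:

```python
rows = [
    # (Timestamp, Country, Language, Income) from the example table above
    ("2019/11/10", "China", "ZH-CN", 10000),
    ("2019/11/10", "China", "EN-US", 1000),
    ("2019/11/10", "US",    "ZH-CN", 12000),
    ("2019/11/11", "US",    "EN-US", 23000),
]

# With Language ignored, the series key is only (Timestamp, Country);
# colliding rows overwrite each other instead of being summed.
series = {}
for ts, country, _language, income in rows:
    series[(ts, country)] = income  # one arbitrary row wins for a duplicate key

print(series[("2019/11/10", "China")])  # 1000, not 11000: partial data
```
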
+After configuring the schema, select **Verify schema**. Within this operation, Metrics Advisor will perform following checks:
+- Whether timestamp of queried data falls into one single interval.
+- Whether there's duplicate values returned for the same dimension combination within one metric interval.
+
+### Automatic roll up settings
+
+> [!IMPORTANT]
+> If you'd like to enable root cause analysis and other diagnostic capabilities, the **Automatic roll up settings** need to be configured.
+> Once enabled, the automatic roll-up settings cannot be changed.
+
+Metrics Advisor can automatically perform aggregation (for example SUM, MAX, MIN) on each dimension during ingestion, then build a hierarchy that will be used in root cause analysis and other diagnostic features.
+
+Consider the following scenarios:
+
+* *"I do not need to include the roll-up analysis for my data."*
+
+ You do not need to use the Metrics Advisor roll-up.
+
+* *"My data has already rolled up and the dimension value is represented by: NULL or Empty (Default), NULL only, Others."*
+
+    This option means Metrics Advisor doesn't need to roll up the data because the rows are already summed. For example, if you select *NULL only*, then the second data row in the following example will be seen as an aggregation of all countries and language *EN-US*; the fourth data row, which has an empty value for *Country*, will be seen as an ordinary row that might indicate incomplete data.
+
+ | Country | Language | Income |
+ ||-|--|
+ | China | ZH-CN | 10000 |
+ | (NULL) | EN-US | 999999 |
+ | US | EN-US | 12000 |
+ | | EN-US | 5000 |
+
+* *"I need Metrics Advisor to roll up my data by calculating Sum/Max/Min/Avg/Count and represent it by {some string}."*
+
+ Some data sources such as Cosmos DB or Azure Blob Storage do not support certain calculations like *group by* or *cube*. Metrics Advisor provides the roll up option to automatically generate a data cube during ingestion.
+ This option means you need Metrics Advisor to calculate the roll-up using the algorithm you've selected and use the specified string to represent the roll-up in Metrics Advisor. This won't change any data in your data source.
+ For example, suppose you have a set of time series which stands for Sales metrics with the dimension (Country, Region). For a given timestamp, it might look like the following:
+ | Country | Region | Sales |
+ |||-|
+ | Canada | Alberta | 100 |
+ | Canada | British Columbia | 500 |
+ | United States | Montana | 100 |
+ After enabling Auto Roll Up with *Sum*, Metrics Advisor will calculate the dimension combinations, and sum the metrics during data ingestion. The result might be:
+
+ | Country | Region | Sales |
+ | | | - |
+ | Canada | Alberta | 100 |
+ | NULL | Alberta | 100 |
+ | Canada | British Columbia | 500 |
+ | NULL | British Columbia | 500 |
+ | United States | Montana | 100 |
+ | NULL | Montana | 100 |
+ | NULL | NULL | 700 |
+ | Canada | NULL | 600 |
+ | United States | NULL | 100 |
+
+ `(Country=Canada, Region=NULL, Sales=600)` means the sum of Sales in Canada (all regions) is 600.
+
+    The following is the equivalent transformation expressed in SQL:
+
+ ```mssql
+ SELECT
+ dimension_1,
+ dimension_2,
+ ...
+ dimension_n,
+ sum (metrics_1) AS metrics_1,
+ sum (metrics_2) AS metrics_2,
+ ...
+ sum (metrics_n) AS metrics_n
+ FROM
+ each_timestamp_data
+ GROUP BY
+ CUBE (dimension_1, dimension_2, ..., dimension_n);
+ ```
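+The same CUBE-style roll-up can be sketched in plain Python. This is only an illustration of what Metrics Advisor computes during ingestion; `None` stands in for the NULL roll-up marker:

```python
from itertools import product

ROLLUP = None  # stand-in for the roll-up marker (NULL)

def auto_roll_up(rows, dims, metric):
    """Sum the metric over every dimension combination, replacing
    rolled-up dimensions with the ROLLUP marker (CUBE semantics)."""
    cube = {}
    for row in rows:
        # each dimension is either kept or rolled up
        for mask in product([True, False], repeat=len(dims)):
            key = tuple(row[d] if keep else ROLLUP
                        for d, keep in zip(dims, mask))
            cube[key] = cube.get(key, 0) + row[metric]
    return cube

rows = [
    {"Country": "Canada", "Region": "Alberta", "Sales": 100},
    {"Country": "Canada", "Region": "British Columbia", "Sales": 500},
    {"Country": "United States", "Region": "Montana", "Sales": 100},
]
cube = auto_roll_up(rows, dims=("Country", "Region"), metric="Sales")
print(cube[("Canada", ROLLUP)])   # 600: sum of Sales in Canada, all regions
print(cube[(ROLLUP, ROLLUP)])     # 700: grand total
```

+Running this on the sample rows reproduces the nine-row table above, including the `(Country=Canada, Region=NULL, Sales=600)` entry.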
+
+ Consider the following before using the Auto roll up feature:
+
+    * If you want to use *SUM* to aggregate your data, make sure your metrics are additive in each dimension. Here are some examples of *non-additive* metrics:
+        - Fraction-based metrics, such as ratios and percentages. For example, you should not add the unemployment rate of each state to calculate the unemployment rate of the entire country.
+        - Overlapping dimensions. For example, you should not add the number of people in each sport to calculate the number of people who like sports, because one person can like multiple sports and the groups overlap.
+    * To ensure the health of the whole system, the size of the cube is limited. Currently, the limit is 1,000,000. If your data exceeds that limit, ingestion will fail for that timestamp.
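+A quick way to sanity-check your data against this limit is to multiply, per dimension, the count of distinct values plus one (for the roll-up value). This gives an upper bound; the actual cube is smaller when not every combination occurs. The function name and cardinalities below are illustrative:

```python
from math import prod

def cube_size_upper_bound(dimension_cardinalities):
    """Upper bound on dimension combinations after roll-up: each dimension
    contributes its distinct values plus one roll-up value."""
    return prod(c + 1 for c in dimension_cardinalities)

# Three dimensions with 50, 20, and 100 distinct values
print(cube_size_upper_bound([50, 20, 100]))  # 108171, within the limit
```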
+
+## Advanced settings
+
+There are several advanced settings to enable data ingested in a customized way, such as specifying ingestion offset, or concurrency. For more information, see the [advanced settings](manage-data-feeds.md#advanced-settings) section in the data feed management article.
+
+## Specify a name for the data feed and check the ingestion progress
+
+Give the data feed a custom name, which will be displayed in your workspace, and then select **Submit**. On the data feed details page, you can use the ingestion progress bar to view status information.
+To check ingestion failure details:
+
+1. Select **Show Details**.
+2. Select **Status**, then choose **Failed** or **Error**.
+3. Hover over a failed ingestion and view the details message that appears.
+A *Failed* status indicates that ingestion for this data source will be retried later.
+An *Error* status indicates that Metrics Advisor won't retry ingestion for this data source. To reload data, you need to trigger a backfill/reload manually.
+
+You can also refresh the progress of an ingestion by selecting **Refresh Progress**. After data ingestion completes, you're free to click into metrics and check anomaly detection results.
+
+## Next steps
+- [Manage your data feeds](manage-data-feeds.md)
+- [Configurations for different data sources](../data-feeds-from-different-sources.md)
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/overview.md
+
+ Title: What is the Azure Metrics Advisor service?
+
+description: What is Metrics Advisor?
+Last updated: 07/06/2021
+# What is Azure Metrics Advisor?
+
+Metrics Advisor is a part of [Azure Applied AI Services](../../applied-ai-services/what-are-applied-ai-services.md) that uses AI to perform data monitoring and anomaly detection in time series data. The service automates the process of applying models to your data, and provides a set of APIs and a web-based workspace for data ingestion, anomaly detection, and diagnostics, without requiring machine learning expertise. Developers can build AIOps, predictive maintenance, and business monitoring applications on top of the service. Use Metrics Advisor to:
+
+* Analyze multi-dimensional data from multiple data sources
+* Identify and correlate anomalies
+* Configure and fine-tune the anomaly detection model used on your data
+* Diagnose anomalies and help with root cause analysis
+This documentation contains the following types of articles:
+* The [quickstarts](./Quickstarts/web-portal.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-tos/onboard-your-data.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](glossary.md) provide in-depth explanations of the service's functionality and features.
+
+## Connect to a variety of data sources
+
+Metrics Advisor can connect to, and [ingest multi-dimensional metric](how-tos/onboard-your-data.md) data from, many data stores, including SQL Server, Azure Blob Storage, MongoDB, and more.
+
+## Easy-to-use and customizable anomaly detection
+
+* Metrics Advisor automatically selects the best model for your data, without needing to know any machine learning.
+* Automatically monitor every time series within [multi-dimensional metrics](glossary.md#multi-dimensional-metric).
+* Use [parameter tuning](how-tos/configure-metrics.md) and [interactive feedback](how-tos/anomaly-feedback.md) to customize the model applied on your data, and future anomaly detection results.
+
+## Real-time notification through multiple channels
+
+Whenever anomalies are detected, Metrics Advisor can [send real-time notifications](how-tos/alerts.md) through multiple channels using hooks, such as email hooks, web hooks, Teams hooks, and Azure DevOps hooks. Flexible alert configuration lets you customize when and where to send a notification.
+
+## Smart diagnostic insights by analyzing anomalies
+
+### Analyze root cause into a specific dimension
+
+Metrics Advisor combines anomalies detected on the same multi-dimensional metric into a diagnostic tree to help you analyze the root cause down to a specific dimension. Automated insights are also available, generated by analyzing the greatest contribution of each dimension.
+
+### Cross-metrics analysis using Metrics graph
+
+A [Metrics graph](./how-tos/metrics-graph.md) indicates the relationships between metrics. Cross-metrics analysis can be enabled to help you catch abnormal status among all related metrics in a holistic view, and eventually locate the root cause.
+
+Refer to [how to diagnose an incident](./how-tos/diagnose-an-incident.md) for more detail.
+
+## Typical workflow
+
+The workflow is simple: after onboarding your data, you can fine-tune the anomaly detection, and create configurations to fit your scenario.
+
+1. [Create an Azure resource](https://go.microsoft.com/fwlink/?linkid=2142156) for Metrics Advisor.
+2. Build your first monitor using the web portal.
+ 1. [Onboard your data](./how-tos/onboard-your-data.md)
+ 2. [Fine-tune anomaly detection configuration](./how-tos/configure-metrics.md)
+    3. [Subscribe to anomaly notifications](./how-tos/alerts.md)
+ 4. [View diagnostic insights](./how-tos/diagnose-an-incident.md)
+3. Use the REST API to customize your instance.
+
+## Video
+* [Introducing Metrics Advisor](https://www.youtube.com/watch?v=0Y26cJqZMIM)
+* [New to Cognitive Services](https://www.youtube.com/watch?v=7tCLJHdBZgM)
+
+## Data retention & limitations
+
+Metrics Advisor keeps at most **10,000** time intervals ([what is an interval?](tutorials/write-a-valid-query.md#what-is-an-interval)), counting back from the current timestamp, whether or not data is available. Data that falls out of this window is deleted. The retention maps to the following number of days for each metric granularity:
+
+| Granularity (min) | Retention (days) |
+|| |
+| 1 | 6.94 |
+| 5 | 34.72|
+| 15 | 104.17|
+| 60(=hourly) | 416.67 |
+| 1440(=daily)|10000.00|
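+The retention figures in the table follow directly from the 10,000-interval window:

```python
def retention_days(granularity_minutes, max_intervals=10_000):
    """Days of history covered by the 10,000-interval retention window."""
    return round(max_intervals * granularity_minutes / (60 * 24), 2)

for minutes in (1, 5, 60, 1440):
    print(minutes, retention_days(minutes))  # 6.94, 34.72, 416.67, 10000.0
```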
+
+There are also further limitations. For more details, refer to the [FAQ](faq.yml#what-are-the-data-retention-and-limitations-of-metrics-advisor-).
+
+## Next steps
+
+* Explore a quickstart: [Monitor your first metric on web](quickstarts/web-portal.md).
+* Explore a quickstart: [Use the REST APIs to customize your solution](./quickstarts/rest-api-and-client-library.md).
applied-ai-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
+
+ Title: Metrics Advisor client libraries REST API
+
+description: Use this quickstart to connect your applications to the Metrics Advisor API from Azure Cognitive Services.
+Last updated: 07/06/2021
+zone_pivot_groups: programming-languages-metrics-monitor
+# Quickstart: Use the client libraries or REST APIs to customize your solution
+
+Get started with the Metrics Advisor REST API or client libraries. Follow these steps to install the package and try out the example code for basic tasks.
+
+Use Metrics Advisor to:
+
+* Add a data feed from a data source
+* Check ingestion status
+* Configure detection and alerts
+* Query the anomaly detection results
+* Diagnose anomalies
+## Clean up resources
+
+If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../../cognitive-services/cognitive-services-apis-create-account.md#clean-up-resources)
+* [Azure CLI](../../../cognitive-services/cognitive-services-apis-create-account-cli.md#clean-up-resources)
+
+## Next steps
+
+- [Use the web portal](web-portal.md)
+- [Onboard your data feeds](../how-tos/onboard-your-data.md)
+ - [Manage data feeds](../how-tos/manage-data-feeds.md)
+ - [Configurations for different data sources](../data-feeds-from-different-sources.md)
+- [Configure metrics and fine tune detection configuration](../how-tos/configure-metrics.md)
+- [Adjust anomaly detection using feedback](../how-tos/anomaly-feedback.md)
applied-ai-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/quickstarts/web-portal.md
+
+ Title: 'Quickstart: Metrics Advisor web portal'
+
+description: Learn how to start using the Metrics Advisor web portal.
+Last updated: 09/30/2020
+ - mode-portal
+# Quickstart: Monitor your first metric using the web portal
+
+When you provision a Metrics Advisor instance, you can use the APIs and web-based workspace to work with the service. The web-based workspace can be used as a straightforward way to quickly get started with the service. It also provides a visual way to configure settings, customize your model, and perform root cause analysis.
+
+* Onboard your metric data
+* View your metrics and visualizations
+* Fine-tune detection configurations
+* Explore diagnostic insights
+* Create and subscribe to anomaly alerts
+
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* Once you have your Azure subscription, <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource </a> in the Azure portal to deploy your Metrics Advisor instance.
+
+
+> [!TIP]
+> * It may take 10 to 30 minutes for your Metrics Advisor resource to deploy. Select **Go to resource** once it successfully deploys.
+> * If you'd like to use the REST API to interact with the service, you will need the key and endpoint from the resource you create. You can find them in the **Keys and endpoints** tab in the created resource.
+This document uses a SQL Database as an example for creating your first monitor.
+
+## Sign in to your workspace
+
+After your resource is created, sign in to the [Metrics Advisor portal](https://go.microsoft.com/fwlink/?linkid=2143774) with your Active Directory account. From the landing page, select the **Directory**, **Subscription**, and **Workspace** that you just created, then select **Get started**. To onboard your time series data, select **Add data feed** from the left menu.
+
+
+Currently you can create one Metrics Advisor resource at each available region. You can switch workspaces in Metrics Advisor portal at any time.
+## Onboard time series data
+
+Metrics Advisor provides connectors for different data sources, such as SQL Database, Azure Data Explorer, and Azure Table Storage. The steps for connecting data are similar across connectors, although some configuration parameters may vary. See [connect to different data feed sources](../data-feeds-from-different-sources.md) for the connection settings of each data source.
+
+This quickstart uses a SQL Database as an example. You can also ingest your own data by following the same steps.
+### Data schema requirements and configuration
+### Configure connection settings and query
+
+[Add the data feeds](../how-tos/onboard-your-data.md) by connecting to your time series data source. Start by selecting the following parameters:
+
+* **Source Type**: The type of data source where your time series data is stored.
+* **Granularity**: The interval between consecutive data points in your time series data, for example Yearly, Monthly, Daily. The lowest customized interval supported is 60 seconds.
+* **Ingest data since (UTC)**: The start time for the first timestamp to be ingested.
+<!-- Next, specify the **Connection string** with the credentials for your data source, and a custom **Query**, see [how to write a valid query](../tutorials/write-a-valid-query.md) for more information. -->
+### Load data
+
+After the connection string and query string are entered, select **Load data**. Within this operation, Metrics Advisor will check the connection and the permission to load data, check the necessary parameters (@IntervalStart and @IntervalEnd) used in the query, and check the column names from the data source.
+
+If there's an error at this step:
+1. Check whether the connection string is valid.
+2. Confirm that there are sufficient permissions and that the ingestion worker IP address is granted access.
+3. Check whether the required parameters (@IntervalStart and @IntervalEnd) are used in your query.
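+To see what Metrics Advisor effectively runs for a single interval, you can substitute the two parameters yourself. A sketch follows; the table and column names are illustrative, and the service performs this substitution per interval at ingestion time:

```python
from datetime import datetime, timedelta

QUERY = ("SELECT TrackingDate AS Timestamp, Market, Revenue "
         "FROM dbo.DailySales "
         "WHERE TrackingDate >= @IntervalStart AND TrackingDate < @IntervalEnd")

def preview_interval_query(query, interval_start, granularity):
    """Substitute @IntervalStart / @IntervalEnd for one ingestion interval."""
    interval_end = interval_start + granularity
    return (query
            .replace("@IntervalStart", f"'{interval_start:%Y-%m-%dT%H:%M:%S}'")
            .replace("@IntervalEnd", f"'{interval_end:%Y-%m-%dT%H:%M:%S}'"))

print(preview_interval_query(QUERY, datetime(2021, 7, 1), timedelta(days=1)))
```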
+
+### Schema configuration
+
+Once the data is loaded by running the query and displayed, select the appropriate fields.
+|Selection |Description |Notes |
+||||
+|**Timestamp** | The timestamp of a data point. If omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you could specify at most one column as timestamp. | Optional. Should be specified with at most one column. |
+|**Measure** | The numeric values in the data feed. For each data feed, you could specify multiple measures but at least one column should be selected as measure. | Should be specified with at least one column. |
+|**Dimension** | Categorical values. A combination of different values identifies a particular single-dimension time series, for example: country, language, tenant. You could select none or arbitrary number of columns as dimensions. Note: if you're selecting a non-string column as dimension, be cautious with dimension explosion. | Optional. |
+|**Ignore** | Ignore the selected column. | Optional. For data sources that support using a query to get data, there is no 'Ignore' option. |
+After configuring the schema, select **Verify schema**. Within this operation, Metrics Advisor performs the following checks:
+- Whether the timestamp of the queried data falls into one single interval.
+- Whether duplicate values are returned for the same dimension combination within one metric interval.
+
+### Automatic roll up settings
+
+> [!IMPORTANT]
+> If you'd like to enable **root cause analysis** and other diagnostic capabilities, the **Automatic roll up settings** need to be configured.
+> Once enabled, the automatic roll-up settings cannot be changed.
+
+Metrics Advisor can automatically perform aggregation (SUM, MAX, MIN, and so on) on each dimension during ingestion, then build a hierarchy that is used in root cause analysis and other diagnostic features. See [Automatic roll up settings](../how-tos/onboard-your-data.md#automatic-roll-up-settings) for more details.
+
+Give the data feed a custom name, which will be displayed in your workspace, and select **Submit**.
+
+## Tune detection configuration
+
+After the data feed is added, Metrics Advisor will attempt to ingest metric data from the specified start date. It will take some time for data to be fully ingested; you can view the ingestion status by selecting **Ingestion progress** at the top of the data feed page. Once data is ingested, Metrics Advisor applies detection and continues to monitor the source for new data.
+
+When detection is applied, select one of the metrics listed in the data feed to open the **Metric detail page**. There you can:
+- View visualizations of all time series' slices under this metric
+- Update detection configuration to meet expected results
+- Set up notification for detected anomalies
+## View the diagnostic insights
+
+After tuning the detection configuration, the anomalies that are found should reflect actual anomalies in your data. Metrics Advisor performs analysis on multi-dimensional metrics to locate the root cause in a specific dimension, and also performs cross-metrics analysis by using the "Metrics graph".
+
+To view the diagnostic insights, select the red dots on the time series visualizations, which represent detected anomalies. A window will appear with a link to the incident analysis page.
+After selecting the link, you'll be taken to the incident analysis page, which analyzes a group of related anomalies and provides diagnostic insights. There are three major steps to diagnose an incident:
+
+### Check summary of current incident
+
+At the top, there will be a summary including basic information, actions & tracings and an analyzed root cause. Basic information includes the "top impacted series" with a diagram, "impact start & end time", "incident severity" and "total anomalies included".
+
+The analyzed root cause is an automatically analyzed result. Metrics Advisor analyzes all anomalies captured on the time series within one metric, with different dimension values, at the same timestamp. Then it performs correlation and clustering to group related anomalies together, and generates root cause advice.
+Based on these insights, you can get a straightforward view of the current abnormal status, the impact of the incident, and the most likely root cause, so that immediate action can be taken to resolve the incident as soon as possible.
+
+### View cross-dimension diagnostic insights
+
+After getting the basic info and the automatic analysis insight, you can get more detailed info on the abnormal status of other dimensions within the same metric in a holistic way, using the **"Diagnostic tree"**.
+
+For metrics with multiple dimensions, Metrics Advisor categorizes the time series into a hierarchy called the "Diagnostic tree". For example, a "revenue" metric might be monitored by two dimensions: "region" and "category". In addition to the concrete dimension values, there needs to be an **aggregated** dimension value, like **"SUM"**. The time series with "region" = **"SUM"** and "category" = **"SUM"** is then categorized as the root node of the tree. Whenever an anomaly is captured at the **"SUM"** dimension, it can be drilled down and analyzed to locate which specific dimension value contributed the most to the parent node's anomaly. Select each node to expand detailed information.
+### View cross-metrics diagnostic insights using "Metrics graph"
+
+Sometimes it's hard to analyze an issue by checking the abnormal status of a single metric, and you need to correlate multiple metrics together. To do this, you can configure a "Metrics graph", which indicates the relationships between metrics.
+By using the cross-dimension diagnostic result described above, the root cause is narrowed down to a specific dimension value. You can then use the "Metrics graph", filtered by the analyzed root cause dimension, to check the anomaly status of other metrics.
+You can also explore more diagnostic insights by using additional features to drill down into anomalies by dimension, view similar anomalies, and compare across metrics. For more information, see [How to: diagnose an incident](../how-tos/diagnose-an-incident.md).
+
+## Get notified when new anomalies are found
+
+If you'd like to get alerted when an anomaly is detected in your data, you can create a subscription for one or more of your metrics. Metrics Advisor uses hooks to send alerts. Four types of hooks are supported: email, web hook, Teams, and Azure DevOps. We'll use a web hook as an example.
+
+### Create a web hook
+
+A web hook is the programmatic entry point for anomaly notifications from the Metrics Advisor service; it calls a user-provided API when an alert is triggered. For details on how to create a hook, refer to the **Create a hook** section in [How-to: Configure alerts and get notifications using a hook](../how-tos/alerts.md#create-a-hook).
+
+### Configure alert settings
+
+After you create a hook, an alert setting determines how and which alert notifications should be sent. You can set multiple alert settings for each metric. Two important settings are **Alert for**, which specifies the scope of anomalies, and **Filter anomaly options**, which defines which anomalies to include in the alert. See the **Add or Edit alert settings** section in [How-to: Configure alerts and get notifications using a hook](../how-tos/alerts.md#add-or-edit-alert-settings) for more details.
+## Next steps
+
+- [Onboard your data feeds](../how-tos/onboard-your-data.md)
+ - [Manage data feeds](../how-tos/manage-data-feeds.md)
+ - [Configurations for different data sources](../data-feeds-from-different-sources.md)
+- [Use the REST API or Client libraries](./rest-api-and-client-library.md)
+- [Configure metrics and fine tune detection configuration](../how-tos/configure-metrics.md)
applied-ai-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md
+
+ Title: Metrics Advisor anomaly notification e-mails with Azure Logic Apps
+description: Learn how to automate sending e-mail alerts in response to Metric Advisor anomalies
+Last updated: 05/20/2021
+# Tutorial: Enable anomaly notification in Metrics Advisor
+
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a hook in Metrics Advisor
+> * Send Notifications with Azure Logic Apps
+> * Send Notifications to Microsoft Teams
+> * Send Notifications via SMTP server
+
+
+## Prerequisites
+### Create a Metrics Advisor resource
+
+To explore the capabilities of Metrics Advisor, you'll need to <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource</a> in the Azure portal to deploy your Metrics Advisor instance.
+
+### Create a hook in Metrics Advisor
+A hook in Metrics Advisor is a bridge that enables customers to subscribe to metric anomalies and send notifications through different channels. There are four types of hooks in Metrics Advisor:
+
+- Email hook
+- Webhook
+- Teams hook
+- Azure DevOps hook
+
+Each hook type corresponds to a specific channel through which anomalies are notified.
+
+
+## Send notifications with Logic Apps, Teams, and SMTP
+
+#### [Logic Apps](#tab/logic)
+
+### Send email notification by using Azure Logic Apps
+
+<!-- Introduction paragraph -->
+There are two common options to send email notifications that are supported in Metrics Advisor. One is to use webhooks and Azure Logic Apps to send email alerts, the other is to set up an SMTP server and use it to send email alerts directly. This section will focus on the first option, which is easier for customers who don't have an available SMTP server.
+
+**Step 1.** Create a webhook in Metrics Advisor
+
+A webhook is the entry point for all the information available from the Metrics Advisor service, and calls a user-provided API when an alert is triggered. All alerts can be sent through a webhook.
+
+Select the **Hooks** tab in your Metrics Advisor workspace, and select the **Create hook** button. Choose a hook type of **web hook**. Fill in the required parameters and select **OK**. For detailed steps, refer to [create a webhook](../how-tos/alerts.md#web-hook).
+
+There's one extra parameter, **Endpoint**, that needs to be filled out; this can be done after completing Step 3 below.
+**Step 2.** Create a Logic Apps resource
+
+In the [Azure portal](https://portal.azure.com), create an empty Logic App by following the instructions in [Create your logic app](../../../logic-apps/quickstart-create-first-logic-app-workflow.md). When you see the **Logic Apps Designer**, return to this tutorial.
+**Step 3.** Add a trigger of **When an HTTP request is received**
+
+- Azure Logic Apps uses various triggers to start defined workflows. For this use case, it uses the trigger **When an HTTP request is received**.
+
+- In the dialog for **When an HTTP request is received**, select **Use sample payload to generate schema**.
+
+ ![Screenshot that shows the When an HTTP request dialog box and the Use sample payload to generate schema option selected. ](../media/tutorial/logic-apps-generate-schema.png)
+
+ Copy the following sample JSON into the textbox and select **Done**.
+
+ ```json
+ {
+ "properties": {
+ "value": {
+ "items": {
+ "properties": {
+ "alertInfo": {
+ "properties": {
+ "alertId": {
+ "type": "string"
+ },
+ "anomalyAlertingConfigurationId": {
+ "type": "string"
+ },
+ "createdTime": {
+ "type": "string"
+ },
+ "modifiedTime": {
+ "type": "string"
+ },
+ "timestamp": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "alertType": {
+ "type": "string"
+ },
+ "callBackUrl": {
+ "type": "string"
+ },
+ "hookId": {
+ "type": "string"
+ }
+ },
+ "required": [
+ "hookId",
+ "alertType",
+ "alertInfo",
+ "callBackUrl"
+ ],
+ "type": "object"
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+ }
+ ```
+
+- Choose 'POST' as the method and select **Save**. You can now see the URL of your HTTP request trigger. Select the copy icon to copy it, and fill it in as the **Endpoint** in Step 1.
+
+ ![Screenshot that highlights the copy icon to copy the URL of your HTTP request trigger.](../media/tutorial/logic-apps-copy-url.png)
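+The alert payload that arrives at this trigger follows the schema above. If you later consume the same webhook from your own service instead of Logic Apps, a minimal validation sketch looks like the following (the sample IDs and URL are illustrative placeholders):

```python
import json

REQUIRED_FIELDS = ("hookId", "alertType", "alertInfo", "callBackUrl")

def extract_alerts(body):
    """Parse a webhook body shaped like the schema above and check that
    each alert record carries the required fields."""
    alerts = json.loads(body)["value"]
    for alert in alerts:
        missing = [f for f in REQUIRED_FIELDS if f not in alert]
        if missing:
            raise ValueError(f"alert is missing required fields: {missing}")
    return alerts

sample = json.dumps({"value": [{
    "hookId": "<hook-id>",
    "alertType": "Anomaly",
    "alertInfo": {"alertId": "<alert-id>", "timestamp": "2021-07-06T00:00:00Z"},
    "callBackUrl": "https://<callback-url-from-the-signal>",
}]})
print(extract_alerts(sample)[0]["alertType"])  # Anomaly
```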
+
+**Step 4.** Add a next step using 'HTTP' action
+
+Signals that are pushed through the webhook contain only limited information, like timestamp, alertID, and configurationID. Detailed information needs to be queried using the callback URL provided in the signal. This step queries the detailed alert info.
+
+- Choose a method of 'GET'
+- Select 'callBackURL' from 'Dynamic content' list in 'URI'.
+- Enter a key of 'Content-Type' in 'Headers' and input a value of 'application/json'
+- Enter a key of 'x-api-key' in 'Headers'. To get the value, select the **'API keys'** tab in your Metrics Advisor workspace. This step ensures the workflow has sufficient permissions for API calls.
+
+ ![Screenshot that highlights the api-keys](../media/tutorial/logic-apps-api-key.png)
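+Outside of Logic Apps, the same request can be assembled in code. A sketch, where the callback URL and key are placeholders and the request would be sent with any HTTP client, for example `requests.get(url, headers=headers)`:

```python
def build_callback_request(callback_url, api_key):
    """Assemble the GET request for detailed alert info, mirroring the
    'HTTP' action above."""
    headers = {
        "Content-Type": "application/json",
        "x-api-key": api_key,  # from the 'API keys' tab in your workspace
    }
    return callback_url, headers

url, headers = build_callback_request(
    "https://<callback-url-from-the-signal>",
    api_key="<your-api-key>",
)
print(url)
```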
+
+**Step 5.** Add a next step to 'parse JSON'
+
+You need to parse the response of the API for easier formatting of email content.
+
+> [!NOTE]
+> This tutorial only shares a quick example, the final email format needs to be further designed.
+
+- Select 'Body' from the 'Dynamic content' list in 'Content'.
+- Select **Use sample payload to generate schema**. Copy the following sample JSON into the textbox and select **Done**.
+
+```json
+{
+ "properties": {
+ "@@nextLink": {},
+ "value": {
+ "items": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "IncidentSeverity": {
+ "type": "string"
+ },
+ "IncidentStatus": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "rootNode": {
+ "properties": {
+ "createdTime": {
+ "type": "string"
+ },
+ "detectConfigGuid": {
+ "type": "string"
+ },
+ "dimensions": {
+ "properties": {
+ },
+ "type": "object"
+ },
+ "metricGuid": {
+ "type": "string"
+ },
+ "modifiedTime": {
+ "type": "string"
+ },
+ "properties": {
+ "properties": {
+ "AnomalySeverity": {
+ "type": "string"
+ },
+ "ExpectedValue": {}
+ },
+ "type": "object"
+ },
+ "seriesId": {
+ "type": "string"
+ },
+ "timestamp": {
+ "type": "string"
+ },
+ "value": {
+ "type": "number"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "required": [
+ "rootNode",
+ "properties"
+ ],
+ "type": "object"
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+}
+```
+
+**Step 6.** Add a next step to 'create HTML table'
+
+A lot of information is returned from the API call. However, depending on your scenario, not all of it may be useful. Choose the items that you care about and would like to include in the alert email.
+
+Below is an example of an HTML table that includes 'timestamp', 'metricGUID', and 'dimension' in the alert email.
+
+![Screenshot of html table example](../media/tutorial/logic-apps-html-table.png)
+
+**Step 7.** Add the final step to 'send an email'
+
+There are several options to send email, including both Microsoft-hosted and third-party offerings. You may need a tenant or account for the option you choose. For example, when choosing 'Office 365 Outlook' as the server, a sign-in process is prompted to build the connection and authorization. An API connection is then established to use the email server to send the alert.
+
+Fill in the content that you'd like to include in 'Body' and 'Subject' of the email, and fill in an email address in 'To'.
+
+![Screenshot of send an email](../media/tutorial/logic-apps-send-email.png)
+
+#### [Teams Channel](#tab/teams)
+
+### Send anomaly notification through a Microsoft Teams channel
+This section walks through sending anomaly notifications through a Microsoft Teams channel. This can help enable scenarios where team members collaborate on analyzing anomalies that are detected by Metrics Advisor. The workflow is easy to configure and doesn't have a large number of prerequisites.
+
++
+**Step 1.** Add an 'Incoming Webhook' connector to your Teams channel
+
+- Navigate to the Teams channel that you'd like to send notifications to, and select '•••' (More options).
+- In the dropdown list, select 'Connectors'. In the new dialog, search for 'Incoming Webhook' and select 'Add'.
+
+ ![Screenshot to create an incoming webhook](../media/tutorial/add-webhook.png)
+
+- If you can't see the 'Connectors' option, contact your Teams group owners. Select 'Manage team', then select the 'Settings' tab at the top and check whether 'Allow members to create, update and remove connectors' is enabled.
+
+ ![Screenshot to check teams settings](../media/tutorial/teams-settings.png)
+
+- Input a name for the connector; you can also upload an image to use as its avatar. Select 'Create', and the Incoming Webhook connector is added to your channel. A URL is generated at the bottom of the dialog. **Be sure to select 'Copy'**, then select 'Done'.
+
+ ![Screenshot to copy URL](../media/tutorial/webhook-url.png)
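
If you'd like to verify the copied webhook URL independently of Metrics Advisor, you can post a test message to it. The following Python sketch is an illustration only: the URL is a placeholder, and the simple `{"text": ...}` payload is the basic format accepted by Incoming Webhook connectors.

```python
import json
from urllib import request

# Placeholder -- paste the URL you copied from the connector dialog
WEBHOOK_URL = "https://outlook.office.com/webhook/<your-copied-url>"

# A minimal payload accepted by the Incoming Webhook connector
payload = {"text": "Test notification from Metrics Advisor setup."}
body = json.dumps(payload).encode("utf-8")

req = request.Request(WEBHOOK_URL, data=body,
                      headers={"Content-Type": "application/json"})

# Uncomment to actually post the test message to your channel:
# with request.urlopen(req) as resp:
#     print(resp.status)  # 200 indicates the message was accepted
```

If the posted message appears in the channel, the URL is valid and ready to paste into the Metrics Advisor hook in the next step.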
+
+**Step 2.** Create a new 'Teams hook' in Metrics Advisor
+
+- Select 'Hooks' tab in left navigation bar, and select the 'Create hook' button at top right of the page.
+- Choose hook type of 'Teams', then input a name and paste the URL that you copied from the above step.
+- Select 'Save'.
+
+ ![Screenshot to create a Teams hook](../media/tutorial/teams-hook.png)
+
+**Step 3.** Apply the Teams hook to an alert configuration
+
+Go and select one of the data feeds that you have onboarded. Select a metric within the feed and open the metrics detail page. You can create an 'alerting configuration' to subscribe to anomalies that are detected and notify through a Teams channel.
+
+Select the '+' button and choose the hook that you created, fill in other fields and select 'Save'. Then you're set for applying a Teams hook to an alert configuration. Any new anomalies will be notified through the Teams channel.
+
+![Screenshot that applies a Teams hook to an alert configuration](../media/tutorial/teams-hook-in-alert.png)
++
+#### [SMTP E-mail](#tab/smtp)
+
+### Send email notification by configuring an SMTP server
+
+This section shows how to use an SMTP server to send email notifications on anomalies that are detected. Make sure you have a usable SMTP server and sufficient permissions to get parameters like the account name and password.
+
+**Step 1.** Assign your account as the 'Cognitive Service Metrics Advisor Administrator' role
+
+- A user with subscription administrator or resource group administrator privileges needs to navigate to the Metrics Advisor resource that was created in the Azure portal, and select the Access control (IAM) tab.
+- Select 'Add role assignments'.
+- Pick the 'Cognitive Services Metrics Advisor Administrator' role and select your account, as in the image below.
+- Select the 'Save' button, and you're added as an administrator of the Metrics Advisor resource. All of the above actions need to be performed by a subscription administrator or resource group administrator. It might take up to one minute for the permissions to propagate.
+
+![Screenshot that shows how to assign admin role to a specific role](../media/tutorial/access-control.png)
+
+**Step 2.** Configure SMTP server in Metrics Advisor workspace
+
+After you've completed the above steps and have been added as an administrator of the Metrics Advisor resource, wait several minutes for the permissions to propagate. Then sign in to your Metrics Advisor workspace, and you should see a new tab named 'Email setting' on the left navigation panel. Select it to continue configuration.
+
+Parameters to be filled out:
+
+- SMTP server name (**required**): Fill in the name of your SMTP server provider. Most server names are written in the form "smtp.domain.com" or "mail.domain.com". Taking Office 365 as an example, it should be set to 'smtp.office365.com'.
+- SMTP server port (**required**): Port 587 is the default port for SMTP submission on the modern web. While you can use other ports for submission, you should always start with port 587 and only use a different port if circumstances dictate (like your host blocking port 587 for some reason).
+- Email sender(s) (**required**): The real email account that takes responsibility for sending emails. You may need to fill in the account name and password of the sender. You can set a quota threshold for the maximum number of alert emails to be sent within one minute per account. You can set multiple senders if a large volume of alerts may need to be sent in one minute, but at least one account must be set.
+- Send on behalf of (optional): If you have multiple senders configured but you'd like alert emails to appear to be sent from one account, you can use this field to align them. Note that you may need to grant the senders permission to send emails on behalf of that account.
+- Default CC (optional): To set a default email address that will be cc'd in all email alerts.
+
+Below is an example of a configured SMTP server:
+
+![Screenshot that shows an example of a configured SMTP server](../media/tutorial/email-setting.png)
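
Before saving the settings, it can help to confirm the server name, port, and sender credentials by sending a test message yourself. The following is a hedged sketch using Python's standard `smtplib`; the server, addresses, and password shown are placeholder assumptions, not values from this tutorial.

```python
import smtplib
from email.mime.text import MIMEText

# Placeholder settings -- substitute your own values
SMTP_SERVER = "smtp.office365.com"  # SMTP server name
SMTP_PORT = 587                     # default SMTP submission port
SENDER = "alerts@contoso.com"       # email sender account
PASSWORD = "<sender-password>"
DEFAULT_CC = "oncall@contoso.com"   # default CC address

msg = MIMEText("Test message to verify SMTP settings.")
msg["Subject"] = "Metrics Advisor SMTP test"
msg["From"] = SENDER
msg["To"] = "you@contoso.com"
msg["Cc"] = DEFAULT_CC

# Uncomment to actually send the test message:
# with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as server:
#     server.starttls()             # port 587 expects STARTTLS
#     server.login(SENDER, PASSWORD)
#     server.send_message(msg)
```

If the test message arrives, the same server name, port, and sender account should work in the 'Email setting' page.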
+
+**Step 3.** Create an email hook in Metrics Advisor
+
+After successfully configuring an SMTP server, you're set to create an 'email hook' in the 'Hooks' tab in Metrics Advisor. For more about creating an 'email hook', refer to [article on alerts](../how-tos/alerts.md#email-hook) and follow the steps to completion.
+
+**Step 4.** Apply the email hook to an alert configuration
+
+ Go and select one of the data feeds that you onboarded, select a metric within the feed, and open the metric's detail page. You can create an 'alerting configuration' to subscribe to the anomalies that are detected and send them through email.
+
+Select the '+' button and choose the hook that you created, fill in the other fields, and select 'Save'. You have now successfully set up an email hook with a custom alert configuration, and any new anomalies will be escalated through the hook using the SMTP server.
+
+![Screenshot that applies an email hook to an alert configuration](../media/tutorial/apply-hook.png)
+++
+## Next steps
+
+Advance to the next article to learn how to write a valid query.
+> [!div class="nextstepaction"]
+> [Write a valid query](write-a-valid-query.md)
+
applied-ai-services Write A Valid Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/tutorials/write-a-valid-query.md
+
+ Title: Write a query for Metrics Advisor data ingestion
+description: Learn how to onboard your data to Metrics Advisor.
++++ Last updated : 05/20/2021 ++
+
+
+# Tutorial: Write a valid query to onboard metrics data
+
++
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Write a valid data onboarding query
+> * Avoid common errors during data onboarding
+
+
+## Prerequisites
+
+### Create a Metrics Advisor resource
+
+To explore capabilities of Metrics Advisor, you may need to <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource </a> in the Azure portal to deploy your Metrics Advisor instance.
+
+
+## Data schema requirements
+++
+## <span id="ingestion-work">How does data ingestion work in Metrics Advisor?</span>
+
+When onboarding your metrics to Metrics Advisor, there are generally two approaches:
+- Pre-aggregate your metrics into the expected schema and store the data in files. Fill in the path template during onboarding, and Metrics Advisor will continuously grab new files from the path and perform detection on the metrics. This is a common practice for data sources like Azure Data Lake and Azure Blob Storage.
+- If you're ingesting data from data sources like Azure SQL Server, Azure Data Explorer, or other sources that support using a query script, then you need to make sure you're properly constructing your query. This article teaches you how to write a valid query to onboard metric data as expected.
++
+### What is an interval?
+
+Metrics need to be monitored at a certain granularity according to business requirements. For example, business Key Performance Indicators (KPIs) are monitored at daily granularity, while service performance metrics are often monitored at minute or hourly granularity. So the frequency of collecting metric data from sources differs.
+
+Metrics Advisor continuously grabs metrics data at each time interval, where **the interval is equal to the granularity of the metrics.** Each time, Metrics Advisor runs the query you've written and ingests data for that specific interval. Based on this data ingestion mechanism, the query script **should not return all metric data that exists in the database, but needs to limit the result to a single interval.**
+
+![Illustration that describes what is an interval](../media/tutorial/what-is-interval.png)
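
As a concrete illustration of the interval arithmetic, the helper below floors a timestamp to the start of its interval; this is only a sketch to show the relationship (Metrics Advisor computes these values for you as @IntervalStart and @IntervalEnd).

```python
from datetime import datetime, timedelta, timezone

def interval_bounds(granularity: timedelta, now: datetime):
    """Floor `now` to the start of its interval; the end is one granularity later."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    elapsed = (now - epoch) // granularity   # whole intervals since the epoch
    start = epoch + elapsed * granularity
    return start, start + granularity

# An hourly metric observed at 12:23:22 belongs to the 12:00-13:00 interval
start, end = interval_bounds(timedelta(hours=1),
                             datetime(2020, 9, 18, 12, 23, 22, tzinfo=timezone.utc))
print(start, end)
```

Note that the interval end is exclusive, which is why the sample queries below use `< @IntervalEnd` rather than `<=`.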
+
+## How to write a valid query?
+### <span id="use-parameters"> Use @IntervalStart and @IntervalEnd to limit query results</span>
+
+To help achieve this, two parameters are provided for use within the query: **@IntervalStart** and **@IntervalEnd**.
+
+Every time the query runs, @IntervalStart and @IntervalEnd are automatically updated to the latest interval timestamp, and the corresponding metrics data is retrieved. @IntervalEnd is always assigned as @IntervalStart + 1 granularity.
+
+Here's an example of proper use of these two parameters with Azure SQL Server:
+
+```SQL
+SELECT [timestampColumnName] AS timestamp, [dimensionColumnName], [metricColumnName]
+FROM [sampleTable]
+WHERE [timestampColumnName] >= @IntervalStart AND [timestampColumnName] < @IntervalEnd;
+```
+
+By writing the query script in this way, the timestamps of metrics should fall in the same interval for each query result. Metrics Advisor will automatically align the timestamps with the metrics' granularity.
+
+### <span id="use-aggregation"> Use aggregation functions to aggregate metrics</span>
+
+It's common for customers' data sources to contain many columns, but not all of them make sense to be monitored or included as a dimension. Customers can use aggregation functions to aggregate metrics and only include meaningful columns as dimensions.
+
+Below is an example where there are more than 10 columns in a customer's data source, but only a few of them are meaningful and need to be included and aggregated into a metric to be monitored.
+
+| TS | Market | Device OS | Category | ... | Measure1 | Measure2 | Measure3 |
+| -|--|--|-|--|-|-|-|
+| 2020-09-18T12:23:22Z | New York | iOS | Sunglasses | ...| 43242 | 322 | 54546|
+| 2020-09-18T12:27:34Z | Beijing | Android | Bags | ...| 3333 | 126 | 67677 |
+| ...
+
+If a customer would like to monitor **'Measure1'** at **hourly granularity** and choose **'Market'** and **'Category'** as dimensions, below are examples of how to properly use the aggregation functions to achieve this:
+
+- SQL sample:
+
+ ```sql
+ SELECT dateadd(hour, datediff(hour, 0, TS),0) as NewTS
+ ,Market
+ ,Category
+ ,sum(Measure1) as M1
+ FROM [dbo].[SampleTable] where TS >= @IntervalStart and TS < @IntervalEnd
+ group by Market, Category, dateadd(hour, datediff(hour, 0, TS),0)
+ ```
+- Azure Data Explorer sample:
+
+ ```kusto
+ SampleTable
+ | where TS >= @IntervalStart and TS < @IntervalEnd
+ | summarize M1 = sum(Measure1) by Market, Category, NewTS = startofhour(TS)
+ ```
+
+> [!Note]
+> In the above case, the customer would like to monitor metrics at an hourly granularity, but the raw timestamp (TS) is not aligned. Within the aggregation statement, **a process on the timestamp is required** to align it at the hour and generate a new timestamp column named 'NewTS'.
++
+## Common errors during onboarding
+
+- **Error:** Multiple timestamp values are found in query results
+
+ This is a common error that occurs if you haven't limited the query results to one interval. For example, if you're monitoring a metric at a daily granularity, you'll get this error if your query returns results like this:
+
+ ![Screenshot that shows multiple timestamp values returned](../media/tutorial/multiple-timestamps.png)
+
+ There are multiple timestamp values, and they're not in the same metrics interval (one day). Check [How does data ingestion work in Metrics Advisor?](#ingestion-work) to understand that Metrics Advisor grabs metrics data at each metrics interval. Then make sure to use **@IntervalStart** and **@IntervalEnd** in your query to limit results to one interval. See [Use @IntervalStart and @IntervalEnd to limit query results](#use-parameters) for detailed guidance and samples.
++
+- **Error:** Duplicate metric values are found on the same dimension combination within one metric interval
+
+ Within one interval, Metrics Advisor expects only one metric value for the same dimension combination. For example, if you're monitoring a metric at a daily granularity, you'll get this error if your query returns results like this:
+
+ ![Screenshot that shows duplicate values returned](../media/tutorial/duplicate-values.png)
+
+ Refer to [Use aggregation functions to aggregate metrics](#use-aggregation) for detailed guidance and samples.
+
+
+## Next steps
+
+Advance to the next article to learn how to enable anomaly notifications.
+> [!div class="nextstepaction"]
+> [Enable anomaly notifications](enable-anomaly-notification.md)
+
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/whats-new.md
+
+ Title: Metrics Advisor what's new
+
+description: Learn about what is new with Metrics Advisor
++++++ Last updated : 10/14/2020+++
+# Metrics Advisor: what's new in the docs
+
+Welcome! This page covers what's new in the Metrics Advisor docs. Check back every month for information on service changes, doc additions, and updates.
+
+## SDK updates
+
+If you want to learn about the latest updates to the Metrics Advisor client SDKs, see:
+
+* [.NET SDK change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/metricsadvisor/Azure.AI.MetricsAdvisor/CHANGELOG.md)
+* [Java SDK change log ](https://github.com/Azure/azure-sdk-for-jav)
+* [Python SDK change log](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/metricsadvisor/azure-ai-metricsadvisor/CHANGELOG.md)
+* [JavaScript SDK change log](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/metricsadvisor/ai-metrics-advisor/CHANGELOG.md)
+
+## June 2021
+
+### New articles
+
+* [Tutorial: Write a valid query to onboard metrics data](tutorials/write-a-valid-query.md)
+* [Tutorial: Enable anomaly notification in Metrics Advisor](tutorials/enable-anomaly-notification.md)
+
+### Updated articles
+
+* [Updated metrics onboarding flow](how-tos/onboard-your-data.md)
+* [Enriched guidance when adding data feeds from different sources](data-feeds-from-different-sources.md)
+* [Updated new notification channel using Microsoft Teams](how-tos/alerts.md#teams-hook)
+* [Updated incident diagnostic experience](how-tos/diagnose-an-incident.md)
+
+## October 2020
+
+### New articles
+
+* [Quickstarts for Metrics Advisor client SDKs for .NET, Java, Python, and JavaScript](quickstarts/rest-api-and-client-library.md)
+
+### Updated articles
+
+* [Update on how Metric Advisor builds an incident tree for multi-dimensional metrics](/azure/applied-ai-services/metrics-advisor/faq#how-does-metric-advisor-build-a-diagnostic-tree-for-multi-dimensional-metrics)
applied-ai-services What Are Applied Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/what-are-applied-ai-services.md
Form Recognizer is composed of custom document processing models, prebuilt model
Protecting organizationΓÇÖs growth by enabling them to make the right decision based on intelligence from metrics of businesses, services and physical assets. Azure Metrics Advisor uses AI to perform data monitoring and anomaly detection in time series data. The service automates the process of applying models to your data, and provides a set of APIs and a web-based workspace for data ingestion, anomaly detection, and diagnostics - without needing to know machine learning. Developers can build AIOps, predictive maintenance, and business monitoring applications on top of the service. Azure Metrics Advisor is built using Anomaly Detector from Azure Cognitive Services.ΓÇï
-[Learn more about Azure Metrics Advisor](../cognitive-services/metrics-advisor/index.yml)
+[Learn more about Azure Metrics Advisor](./metrics-advisor/index.yml)
## Azure Cognitive Search
attestation Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-portal.md
Follow the steps in this section to view, add, and delete policy signer certific
1. Go to the Azure portal menu or the home page and select **All resources**. 1. In the filter box, enter the attestation provider name. 1. Select the attestation provider and go to the overview page.
-1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select certificate for authentication, please choose the appropriate option to proceed.
+1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, select **Cancel** or choose a valid certificate to proceed.
1. Select **Download policy signer certificates**. The button will be disabled for attestation providers created without the policy signing requirement. 1. The downloaded text file will have all certificates in a JWS format. 1. Verify the certificate count and the downloaded certificates.
Follow the steps in this section to view, add, and delete policy signer certific
1. Go to the Azure portal menu or the home page and select **All resources**. 1. In the filter box, enter the attestation provider name. 1. Select the attestation provider and go to the overview page.
+1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, select **Cancel** or choose a valid certificate to proceed.
+1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select certificate for authentication, please click cancel/ choose a valid certificate to proceed.
1. Select **Add** on the upper menu. The button will be disabled for attestation providers created without the policy signing requirement. 1. Upload the policy signer certificate file and select **Add**. [See examples of policy signer certificates](./policy-signer-examples.md).
Follow the steps in this section to view, add, and delete policy signer certific
1. Go to the Azure portal menu or the home page and select **All resources**. 1. In the filter box, enter the attestation provider name. 1. Select the attestation provider and go to the overview page.
+1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, select **Cancel** or choose a valid certificate to proceed.
+1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select certificate for authentication, please click cancel/ choose a valid certificate to proceed.
1. Select **Delete** on the upper menu. The button will be disabled for attestation providers created without the policy signing requirement. 1. Upload the policy signer certificate file and select **Delete**. [See examples of policy signer certificates](./policy-signer-examples.md).
This section describes how to view an attestation policy and how to configure po
1. Go to the Azure portal menu or the home page and select **All resources**. 1. In the filter box, enter the attestation provider name. 1. Select the attestation provider and go to the overview page.
+1. Select **Policy** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, select **Cancel** or choose a valid certificate to proceed.
+1. Select **Policy** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select certificate for authentication, please click cancel/ choose a valid certificate to proceed.
1. Select the preferred **Attestation Type** and view the **Current policy**. ### Configure an attestation policy
Follow these steps to upload a policy in JWT or text format if the attestation p
1. Go to the Azure portal menu or the home page and select **All resources**. 1. In the filter box, enter the attestation provider name. 1. Select the attestation provider and go to the overview page.
+1. Select **Policy** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, select **Cancel** or choose a valid certificate to proceed.
+1. Select **Policy** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select certificate for authentication, please click cancel/ choose a valid certificate to proceed.
1. Select **Configure** on the upper menu. 1. Select **Policy Format** as **JWT** or as **Text**.
automanage Automanage Windows Server Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server-services-overview.md
SMB over QUIC is available in public preview on the following images:
- Windows Server 2022 Datacenter: Azure Edition (Desktop experience) - Windows Server 2022 Datacenter: Azure Edition (Core)
-SMB over QUIC enables users to access files when working remotely without a VPN, by tunneling SMB traffic over the QUIC protocol. To learn more, see [SMB over QUIC](https://aka.ms/smboverquic).
+SMB over QUIC enables users to access files when working remotely without a VPN, by tunneling SMB traffic over the QUIC protocol. To learn more, see [SMB over QUIC](/windows-server/storage/file-server/smb-over-quic).
### Azure Extended Network
Azure Extended Network is available in public preview on the following images:
- Windows Server 2022 Datacenter: Azure Edition (Desktop experience) - Windows Server 2022 Datacenter: Azure Edition (Core)
-Azure Extended Network enables you to stretch an on-premises subnet into Azure to let on-premises virtual machines keep their original on-premises private IP addresses when migrating to Azure. To learn more, see [Azure Extended Network](https://docs.microsoft.com/windows-server/manage/windows-admin-center/azure/azure-extended-network).
+Azure Extended Network enables you to stretch an on-premises subnet into Azure to let on-premises virtual machines keep their original on-premises private IP addresses when migrating to Azure. To learn more, see [Azure Extended Network](/windows-server/manage/windows-admin-center/azure/azure-extended-network).
## Getting started with Windows Server Azure Edition
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/add-user-assigned-identity.md
+
+ Title: Using a user-assigned managed identity for an Azure Automation account (preview)
+description: This article describes how to set up a user-assigned managed identity for Azure Automation accounts.
++ Last updated : 07/09/2021+++
+# Using a user-assigned managed identity for an Azure Automation account (preview)
+
+This article shows you how to add a user-assigned managed identity for an Azure Automation account and how to use it to access other resources. For more information on how managed identities work with Azure Automation, see [Managed identities](automation-security-overview.md#managed-identities-preview).
+
+> [!NOTE]
+> User-assigned managed identities are supported for cloud jobs only.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+- An Azure Automation account. For instructions, see [Create an Azure Automation account](automation-quickstart-create-account.md).
+
+- A system-assigned managed identity. For instructions, see [Using a system-assigned managed identity for an Azure Automation account (preview)](enable-managed-identity-for-automation.md).
+
+- A user-assigned managed identity. For instructions, see [Create a user-assigned managed identity](/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal#create-a-user-assigned-managed-identity).
+
+- The user-assigned managed identity and the target Azure resources that your runbook manages using that identity must be in the same Azure subscription.
+
+- The latest version of Azure Account modules. Currently this is 2.2.8. (See [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts/) for details about this version.)
+
+- An Azure resource that you want to access from your Automation runbook. This resource needs to have a role defined for the user-assigned managed identity, which helps the Automation runbook authenticate access to the resource. To add roles, you need to be an owner for the resource in the corresponding Azure AD tenant.
+
+- If you want to execute hybrid jobs using a user-assigned managed identity, update the Hybrid Runbook Worker to the latest version. The minimum required versions are:
+
+ - Windows Hybrid Runbook Worker: version 7.3.1125.0
+ - Linux Hybrid Runbook Worker: version 1.7.4.0
+
+## Add user-assigned managed identity for Azure Automation account
+
+You can add a user-assigned managed identity for an Azure Automation account using the Azure portal, PowerShell, the Azure REST API, or ARM template. For the examples involving PowerShell, first sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
+
+```powershell
+# Sign in to your Azure subscription
+$sub = Get-AzSubscription -ErrorAction SilentlyContinue
+if(-not($sub))
+{
+ Connect-AzAccount -Subscription
+}
+
+# If you have multiple subscriptions, set the one to use
+# Select-AzSubscription -SubscriptionId "<SUBSCRIPTIONID>"
+```
+
+Then initialize a set of variables that will be used throughout the examples. Revise the values below and then execute:
+
+```powershell
+$subscriptionID = "subscriptionID"
+$resourceGroup = "resourceGroupName"
+$automationAccount = "automationAccountName"
+$userAssignedOne = "userAssignedIdentityOne"
+$userAssignedTwo = "userAssignedIdentityTwo"
+```
+
+### Add using the Azure portal
+
+Perform the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the Azure portal, navigate to your Automation account.
+
+1. Under **Account Settings**, select **Identity**.
+
+1. Select the **User assigned** tab, and then select **Add**.
+
+1. Select your existing user-assigned managed identity and then select **Add**. You'll then be returned to the **User assigned** tab.
+
+ :::image type="content" source="media/add-user-assigned-identity/user-assigned-managed-identity.png" alt-text="Output from Portal.":::
+
+### Add using PowerShell
+
+Use PowerShell cmdlet [Set-AzAutomationAccount](/powershell/module/az.automation/set-azautomationaccount) to add the user-assigned managed identities. You must first consider whether there's an existing system-assigned managed identity. The example below adds two existing user-assigned managed identities to an existing Automation account, and will disable a system-assigned managed identity if one exists.
+
+```powershell
+$output = Set-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount `
+ -AssignUserIdentity "/subscriptions/$subscriptionID/resourcegroups/$resourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$userAssignedOne", `
+ "/subscriptions/$subscriptionID/resourcegroups/$resourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$userAssignedTwo"
+
+$output
+```
+
+To keep an existing system-assigned managed identity, use the following:
+
+```powershell
+$output = Set-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount `
+ -AssignUserIdentity "/subscriptions/$subscriptionID/resourcegroups/$resourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$userAssignedOne", `
+ "/subscriptions/$subscriptionID/resourcegroups/$resourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$userAssignedTwo" `
+ -AssignSystemIdentity
+
+$output
+```
+
+The output should look similar to the following:
++
+For additional output, execute: `$output.identity | ConvertTo-Json`.
+
+### Add using a REST API
+
+Syntax and example steps are provided below.
+
+#### Syntax
+
+The sample body syntax below enables a system-assigned managed identity if not already enabled and assigns two existing user-assigned managed identities to the existing Automation account.
+
+PATCH
+
+```json
+{
+ "identity": {
+ "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.ManagedIdentity/userAssignedIdentities/firstIdentity": {},
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.ManagedIdentity/userAssignedIdentities/secondIdentity": {}
+ }
+ }
+}
+```
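If you're scripting these calls, the request body above can be assembled programmatically instead of hand-edited. The sketch below is illustrative only; the helper name is not part of any Azure SDK, and the resource IDs you pass in are placeholders you supply.

```python
import json

def build_identity_body(user_assigned_ids, include_system_assigned=True):
    """Assemble the 'identity' PATCH body shown above.

    user_assigned_ids: full ARM resource IDs of existing user-assigned
    managed identities.
    """
    types = []
    if include_system_assigned:
        types.append("SystemAssigned")
    if user_assigned_ids:
        types.append("UserAssigned")
    body = {"identity": {"type": ", ".join(types) if types else "None"}}
    if user_assigned_ids:
        # Each identity is keyed by its resource ID with an empty object value.
        body["identity"]["userAssignedIdentities"] = {rid: {} for rid in user_assigned_ids}
    return json.dumps(body, indent=2)
```

Passing two resource IDs with `include_system_assigned=True` reproduces the `"SystemAssigned, UserAssigned"` body shown above.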
+
+The syntax of the API is as follows:
+
+```http
+https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name?api-version=2020-01-13-preview
+```
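The endpoint can also be composed from its parts. This small helper is hypothetical and simply mirrors the URI pattern above; all argument values are placeholders for your own subscription details.

```python
def automation_account_uri(subscription_id, resource_group, account_name,
                           api_version="2020-01-13-preview"):
    # Mirrors the ARM request URI pattern shown above.
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Automation"
        f"/automationAccounts/{account_name}"
        f"?api-version={api_version}"
    )
```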
+
+#### Example
+
+Perform the following steps.
+
+1. Copy the request body syntax above into a file named `body_ua.json`, revising the values as needed. Save the file on your local machine or in an Azure storage account.
+
+1. Revise the variable value below and then execute.
+
+ ```powershell
+ $file = "path\body_ua.json"
+ ```
+
+1. This example uses the PowerShell cmdlet [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) to send the PATCH request to your Automation account.
+
+ ```powershell
+ # build URI
+ $URI = "https://management.azure.com/subscriptions/$subscriptionID/resourceGroups/$resourceGroup/providers/Microsoft.Automation/automationAccounts/$automationAccount`?api-version=2020-01-13-preview"
+
+ # build body
+ $body = Get-Content $file
+
+ # obtain access token
+ $azContext = Get-AzContext
+ $azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+ $profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
+ $token = $profileClient.AcquireAccessToken($azContext.Subscription.TenantId)
+ $authHeader = @{
+ 'Content-Type'='application/json'
+ 'Authorization'='Bearer ' + $token.AccessToken
+ }
+
+ # Invoke the REST API
+ $response = Invoke-RestMethod -Uri $URI -Method PATCH -Headers $authHeader -Body $body
+
+ # Review output
+ $response.identity | ConvertTo-Json
+ ```
+
+ The output should look similar to the following:
+
+ ```json
+ {
+ "type": "SystemAssigned, UserAssigned",
+ "principalId": "00000000-0000-0000-0000-000000000000",
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "userAssignedIdentities": {
+ "/subscriptions/ContosoID/resourcegroups/ContosoLab/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ContosoUAMI1": {
+ "PrincipalId": "00000000-0000-0000-0000-000000000000",
+ "ClientId": "00000000-0000-0000-0000-000000000000"
+ },
+ "/subscriptions/ContosoID/resourcegroups/ContosoLab/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ContosoUAMI2": {
+ "PrincipalId": "00000000-0000-0000-0000-000000000000",
+ "ClientId": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ }
+ ```
+
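If you need to post-process such a response outside PowerShell, the identity object can be inspected with a few lines of Python. This is a sketch; the sample JSON below is trimmed from the output above, with placeholder GUIDs.

```python
import json

response_json = """
{
  "type": "SystemAssigned, UserAssigned",
  "principalId": "00000000-0000-0000-0000-000000000000",
  "tenantId": "00000000-0000-0000-0000-000000000000",
  "userAssignedIdentities": {
    "/subscriptions/ContosoID/resourcegroups/ContosoLab/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ContosoUAMI1": {},
    "/subscriptions/ContosoID/resourcegroups/ContosoLab/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ContosoUAMI2": {}
  }
}
"""

identity = json.loads(response_json)
# The identity name is the last segment of each ARM resource ID.
names = sorted(uri.rsplit("/", 1)[-1] for uri in identity.get("userAssignedIdentities", {}))
```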
+### Add using an ARM template
+
+Syntax and example steps are provided below.
+
+#### Template syntax
+
+The sample template syntax below enables a system-assigned managed identity if not already enabled and assigns two existing user-assigned managed identities to the existing Automation account.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "automationAccountName": {
+ "defaultValue": "YourAutomationAccount",
+ "type": "String",
+ "metadata": {
+ "description": "Automation account name"
+ }
+ },
+ "userAssignedOne": {
+ "defaultValue": "userAssignedOne",
+ "type": "String",
+ "metadata": {
+ "description": "User-assigned managed identity"
+ }
+ },
+ "userAssignedTwo": {
+ "defaultValue": "userAssignedTwo",
+ "type": "String",
+ "metadata": {
+ "description": "User-assigned managed identity"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Automation/automationAccounts",
+ "apiVersion": "2020-01-13-preview",
+ "name": "[parameters('automationAccountName')]",
+ "location": "[resourceGroup().location]",
+ "identity": {
+ "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',parameters('userAssignedOne'))]": {},
+ "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',parameters('userAssignedTwo'))]": {}
+ }
+ },
+ "properties": {
+ "sku": {
+ "name": "Basic"
+ },
+ "encryption": {
+ "keySource": "Microsoft.Automation",
+ "identity": {}
+ }
+ }
+ }
+ ]
+}
+```
+
+#### Example
+
+Perform the following steps.
+
+1. Copy and paste the template into a file named `template_ua.json`. Save the file on your local machine or in an Azure storage account.
+
+1. Revise the variable value below and then execute.
+
+ ```powershell
+ $templateFile = "path\template_ua.json"
+ ```
+
+1. Use the PowerShell cmdlet [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) to deploy the template.
+
+ ```powershell
+ New-AzResourceGroupDeployment `
+ -Name "UserAssignedDeployment" `
+ -ResourceGroupName $resourceGroup `
+ -TemplateFile $templateFile `
+ -automationAccountName $automationAccount `
+ -userAssignedOne $userAssignedOne `
+ -userAssignedTwo $userAssignedTwo
+ ```
+
+ The command won't produce an output; however, you can use the code below to verify:
+
+ ```powershell
+ (Get-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount).Identity | ConvertTo-Json
+ ```
+
+ The output will look similar to the output shown for the REST API example, above.
+
+## Give identity access to Azure resources by obtaining a token
+
+An Automation account can use its user-assigned managed identity to obtain tokens to access other resources protected by Azure AD, such as Azure Key Vault. These tokens don't represent any specific user of the application. Instead, they represent the application that is accessing the resource. In this case, for example, the token represents an Automation account.
+
+Before you can use your user-assigned managed identity for authentication, set up access for that identity on the Azure resource where you plan to use the identity. To complete this task, assign the appropriate role to that identity on the target Azure resource.
+
+This example uses Azure PowerShell to show how to assign the Contributor role in the subscription to the target Azure resource. The Contributor role is used as an example and may or may not be required in your case. Alternatively, you can also use the Azure portal to assign the role to the target Azure resource.
+
+```powershell
+New-AzRoleAssignment `
+ -ObjectId <automation-Identity-object-id> `
+ -Scope "/subscriptions/<subscription-id>" `
+ -RoleDefinitionName "Contributor"
+```
+
+## Authenticate access with user-assigned managed identity
+
+After you enable the user-assigned managed identity for your Automation account and give an identity access to the target resource, you can specify that identity in runbooks against resources that support managed identity. For identity support, use the Az cmdlet [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount).
+
+```powershell
+Connect-AzAccount -Identity `
+    -AccountId <user-assigned-identity-ClientId>
+```
+
+## Generate an access token without using Azure cmdlets
+
+For HTTP endpoints, make sure of the following:
+- The metadata header must be present and should be set to "true".
+- A resource must be passed along with the request, as a query parameter for a GET request and as form data for a POST request.
+- The content type for the POST request must be `application/x-www-form-urlencoded`.
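These rules can be captured in a small helper. The function below is an illustrative sketch, not part of any SDK; in a real runbook the endpoint value comes from the `IDENTITY_ENDPOINT` environment variable and the client ID is that of your user-assigned identity.

```python
from urllib.parse import urlencode

def build_token_request(identity_endpoint, client_id,
                        resource="https://management.azure.com/", method="GET"):
    """Return (url, headers, body) for a token request per the rules above."""
    headers = {"Metadata": "True"}  # metadata header must be set to "true"
    params = {"resource": resource, "client_id": client_id}
    if method.upper() == "GET":
        # GET: the resource travels as a query parameter.
        return identity_endpoint + "?" + urlencode(params), headers, None
    # POST: the resource travels as form data with the required content type.
    headers["Content-Type"] = "application/x-www-form-urlencoded"
    return identity_endpoint, headers, urlencode(params)
```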
+
+### Get an access token for a user-assigned managed identity using HTTP GET
+
+```powershell
+$resource = "?resource=https://management.azure.com/"
+$client_id = "&client_id=<ClientId of USI>"
+$url = $env:IDENTITY_ENDPOINT + $resource + $client_id
+$Headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
+$Headers.Add("Metadata", "True")
+$accessToken = Invoke-RestMethod -Uri $url -Method 'GET' -Headers $Headers
+Write-Output $accessToken.access_token
+```
+
+### Get an access token for a user-assigned managed identity using HTTP POST
+
+```powershell
+$url = $env:IDENTITY_ENDPOINT
+$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
+$headers.Add("Metadata", "True")
+$body = @{'resource'='https://management.azure.com/'; 'client_id'='<ClientId of USI>'}
+$accessToken = Invoke-RestMethod $url -Method 'POST' -Headers $headers -ContentType 'application/x-www-form-urlencoded' -Body $body
+Write-Output $accessToken.access_token
+```
+
+### Using a user-assigned managed identity in Azure PowerShell
+
+```powershell
+Write-Output "Connecting to azure via  Connect-AzAccount -Identity -AccountId <ClientId of USI>" 
+Connect-AzAccount -Identity -AccountId <ClientId of USI>
+Write-Output "Successfully connected with Automation account's Managed Identity" 
+Write-Output "Trying to fetch value from key vault using User Assigned Managed identity. Make sure you have given correct access to Managed Identity" 
+$secret = Get-AzKeyVaultSecret -VaultName '<KVname>' -Name '<KeyName>' 
+$ssPtrΓÇ»=ΓÇ»[System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secret.SecretValue)ΓÇ»
+tryΓÇ»{ΓÇ»
+  $secretValueText = [System.Runtime.InteropServices.Marshal]::PtrToStringBSTR($ssPtr) 
+    Write-Output $secretValueText 
+} finally { 
+    [System.Runtime.InteropServices.Marshal]::ZeroFreeBSTR($ssPtr) 
+}
+```
+
+### Using a user-assigned managed identity in a Python runbook
+
+```python
+#!/usr/bin/env python3 
+import os 
+import requests  
+
+resource = "?resource=https://management.azure.com/"
+client_id = "&client_id=<ClientId of USI>"
+endPoint = os.getenv('IDENTITY_ENDPOINT') + resource + client_id
+payload = {}
+headers = {
+  'Metadata': 'True'
+}
+response = requests.request("GET", endPoint, headers=headers, data=payload)
+print(response.text)
+```
+
+## Next steps
+
+- If your runbooks aren't completing successfully, review [Troubleshoot Azure Automation managed identity issues (preview)](troubleshoot/managed-identity.md).
+
+- If you need to disable a managed identity, see [Disable your Azure Automation account managed identity (preview)](disable-managed-identity-for-automation.md).
+
+- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
automation Disable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/disable-managed-identity-for-automation.md
Title: Disable your Azure Automation account managed identity (preview)
-description: This article explains how to disable and remove a managed identity for an Azure Automation account.
+ Title: Disable system-assigned managed identity for Azure Automation account (preview)
+description: This article explains how to disable a system-assigned managed identity for an Azure Automation account.
Previously updated : 04/14/2021 Last updated : 07/13/2021
-# Disable your Azure Automation account managed identity (preview)
+# Disable system-assigned managed identity for Azure Automation account (preview)
-There are two ways to disable a system-assigned identity in Azure Automation. You can complete this task from the Azure portal, or by using an Azure Resource Manager (ARM) template.
+You can disable a system-assigned managed identity in Azure Automation by using the Azure portal or the REST API.
-## Disable managed identity in the Azure portal
+## Disable using the Azure portal
-You can disable the managed identity from the Azure portal no matter how the managed identity was originally set up.
+You can disable the system-assigned managed identity from the Azure portal no matter how the system-assigned managed identity was originally set up.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to your Automation account and select **Identity** under **Account Settings**.
+1. Navigate to your Automation account and under **Account Settings**, select **Identity**.
-1. Set the **System assigned** option to **Off** and press **Save**. When you're prompted to confirm, press **Yes**.
+1. From the **System assigned** tab, under the **Status** button, select **Off** and then select **Save**. When you're prompted to confirm, select **Yes**.
-The managed identity is removed and no longer has access to the target resource.
+The system-assigned managed identity is disabled and no longer has access to the target resource.
-## Disable using Azure Resource Manager template
+## Disable using REST API
-If you created the managed identity for your Automation account using an Azure Resource Manager template, you can disable the managed identity by reusing that template and modifying its settings. Set the type of the identity object's child property to **None** as shown in the following example, and then re-run the template.
+Syntax and example steps are provided below.
+
+### Request body
+
+The following request body disables the system-assigned managed identity and removes any user-assigned managed identities.
+
+PATCH
```json
-"identity": {
+{
+ "identity": {
"type": "None"
-}
+ }
+}
+ ```
-Removing a system-assigned identity using this method also deletes it from Azure AD. System-assigned identities are also automatically removed from Azure AD when the app resource that they are assigned to is deleted.
+If there are multiple user-assigned identities defined, to retain them and remove only the system-assigned identity, you need to specify each user-assigned identity in a comma-delimited list, as in the following example:
+
+PATCH
+
+```json
+{
+"identity" : {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroupName/providers/Microsoft.ManagedIdentity/userAssignedIdentities/firstIdentity": {},
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroupName/providers/Microsoft.ManagedIdentity/userAssignedIdentities/secondIdentity": {}
+ }
+ }
+}
+```
+
+The following is the service's REST API request URI to send the PATCH request.
+
+```http
+PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name?api-version=2020-01-13-preview
+```
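As with the enable examples, the request body can be generated rather than hand-edited. This helper is a hypothetical sketch, not part of any Azure SDK; the resource IDs you pass are your own identity resource IDs.

```python
import json

def disable_system_assigned_body(retain_user_assigned=()):
    """Body for the PATCH above: drop the system-assigned identity,
    optionally keeping the listed user-assigned identity resource IDs."""
    if retain_user_assigned:
        identity = {
            "type": "UserAssigned",
            "userAssignedIdentities": {rid: {} for rid in retain_user_assigned},
        }
    else:
        # No identities retained: removes all managed identities.
        identity = {"type": "None"}
    return json.dumps({"identity": identity}, indent=2)
```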
+
+### Example
+
+Perform the following steps.
+
+1. Copy and paste the request body, depending on which operation you want to perform, into a file named `body_remove_sa.json`. Save the file on your local machine or in an Azure storage account.
+
+1. Sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
+
+ ```powershell
+ # Sign in to your Azure subscription
+ $sub = Get-AzSubscription -ErrorAction SilentlyContinue
+ if(-not($sub))
+ {
+ Connect-AzAccount -Subscription
+ }
+
+ # If you have multiple subscriptions, set the one to use
+ # Select-AzSubscription -SubscriptionId "<SUBSCRIPTIONID>"
+ ```
+
+1. Provide an appropriate value for the variables and then execute the script.
+
+ ```powershell
+ $subscriptionID = "subscriptionID"
+ $resourceGroup = "resourceGroupName"
+ $automationAccount = "automationAccountName"
+ $file = "path\body_remove_sa.json"
+ ```
+
+1. This example uses the PowerShell cmdlet [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) to send the PATCH request to your Automation account.
+
+ ```powershell
+ # build URI
+ $URI = "https://management.azure.com/subscriptions/$subscriptionID/resourceGroups/$resourceGroup/providers/Microsoft.Automation/automationAccounts/$automationAccount`?api-version=2020-01-13-preview"
+
+ # build body
+ $body = Get-Content $file
+
+ # obtain access token
+ $azContext = Get-AzContext
+ $azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+ $profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
+ $token = $profileClient.AcquireAccessToken($azContext.Subscription.TenantId)
+ $authHeader = @{
+ 'Content-Type'='application/json'
+ 'Authorization'='Bearer ' + $token.AccessToken
+ }
+
+ # Invoke the REST API
+ Invoke-RestMethod -Uri $URI -Method PATCH -Headers $authHeader -Body $body
+
+ # Confirm removal
+ (Get-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount).Identity.Type
+ ```
+
+ Depending on the request body you used, the output will be either `UserAssigned` or blank.
## Next steps

-- For more information about enabling managed identity in Azure Automation, see [Enable and use managed identity for Automation (preview)](enable-managed-identity-for-automation.md).
+- For more information about enabling managed identities in Azure Automation, see [Enable and use managed identity for Automation (preview)](enable-managed-identity-for-automation.md).
- For an overview of Automation account security, see [Automation account authentication overview](automation-security-overview.md).
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/enable-managed-identity-for-automation.md
Title: Enable a managed identity for your Azure Automation account (preview)
+ Title: Using a system-assigned managed identity for an Azure Automation account (preview)
description: This article describes how to set up managed identity for Azure Automation accounts. Previously updated : 04/28/2021 Last updated : 07/09/2021
-# Enable a managed identity for your Azure Automation account (preview)
-This topic shows you how to create a managed identity for an Azure Automation account and how to use it to access other resources. For more information on how managed identity works with Azure Automation, see [Managed identities](automation-security-overview.md#managed-identities-preview).
+# Using a system-assigned managed identity for an Azure Automation account (preview)
+
+This article shows you how to enable a system-assigned managed identity for an Azure Automation account and how to use it to access other resources. For more information on how managed identities work with Azure Automation, see [Managed identities](automation-security-overview.md#managed-identities-preview).
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prerequisites

-- An Azure account and subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. Both the managed identity and the target Azure resources that your runbook manages using that identity must be in the same Azure subscription.
+- An Azure Automation account. For instructions, see [Create an Azure Automation account](automation-quickstart-create-account.md).
- The latest version of Azure Account modules. Currently this is 2.2.8. (See [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts/) for details about this version.)
This topic shows you how to create a managed identity for an Azure Automation account
- If you want to execute hybrid jobs using a managed identity, update the Hybrid Runbook Worker to the latest version. The minimum required versions are:
- - Windows Hybrid Runbook Worker: version 7.3.1125.0
- - Linux Hybrid Runbook Worker: version 1.7.4.0
+ - Windows Hybrid Runbook Worker: version 7.3.1125.0
+ - Linux Hybrid Runbook Worker: version 1.7.4.0
+
+## Enable a system-assigned managed identity for an Azure Automation account
-## Enable system-assigned identity
+Once enabled, the following properties will be assigned to the system-assigned managed identity.
->[!IMPORTANT]
->The new Automation account-level identity will override any previous VM-level system-assigned identities which are described in [Use runbook authentication with managed identities](./automation-hrw-run-runbooks.md#runbook-auth-managed-identities). If you're running hybrid jobs on Azure VMs that use a VM's system-assigned identity to access runbook resources, then the Automation account identity will be used for the hybrid jobs. This means your existing job execution may be affected if you've been using the Customer Managed Keys (CMK) feature of your Automation account.<br/><br/>If you wish to continue using the VM's managed identity, you shouldn't enable the Automation account-level identity. If you've already enabled it, you can disable the Automation account managed identity. See [Disable your Azure Automation account managed identity](./disable-managed-identity-for-automation.md).
+|Property (JSON) | Value | Description|
+|-|--||
+| principalid | \<principal-ID\> | The Globally Unique Identifier (GUID) of the service principal object for the system-assigned managed identity that represents your Automation account in the Azure AD tenant. This GUID sometimes appears as an "object ID" or objectID. |
+| tenantid | \<Azure-AD-tenant-ID\> | The Globally Unique Identifier (GUID) that represents the Azure AD tenant where the Automation account is now a member. Inside the Azure AD tenant, the service principal has the same name as the Automation account. |
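The two properties in the table can be read straight out of the `identity` object the service returns. A minimal sketch, with placeholder GUIDs standing in for real values:

```python
import json

# Sample identity object once the system-assigned identity is enabled.
identity = json.loads("""
{
  "type": "SystemAssigned",
  "principalId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
  "tenantId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
}
""")

principal_id = identity["principalId"]  # service principal object ID
tenant_id = identity["tenantId"]        # Azure AD tenant GUID
```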
+
+You can enable a system-assigned managed identity for an Azure Automation account using the Azure portal, PowerShell, the Azure REST API, or an ARM template. For the examples involving PowerShell, first sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
+
+```powershell
+# Sign in to your Azure subscription
+$sub = Get-AzSubscription -ErrorAction SilentlyContinue
+if(-not($sub))
+{
+ Connect-AzAccount -Subscription
+}
+
+# If you have multiple subscriptions, set the one to use
+# Select-AzSubscription -SubscriptionId "<SUBSCRIPTIONID>"
+```
+
+Then initialize a set of variables that will be used throughout the examples. Revise the values below and then execute.
+
+```powershell
+$subscriptionID = "subscriptionID"
+$resourceGroup = "resourceGroupName"
+$automationAccount = "automationAccountName"
+```
-Setting up system-assigned identities for Azure Automation can be done one of two ways. You can either use the Azure portal, or the Azure REST API.
+> [!IMPORTANT]
+> The new Automation account-level identity will override any previous VM-level system-assigned identities which are described in [Use runbook authentication with managed identities](./automation-hrw-run-runbooks.md#runbook-auth-managed-identities). If you're running hybrid jobs on Azure VMs that use a VM's system-assigned identity to access runbook resources, then the Automation account identity will be used for the hybrid jobs. This means your existing job execution may be affected if you've been using the Customer Managed Keys (CMK) feature of your Automation account.<br/><br/>If you wish to continue using the VM's managed identity, you shouldn't enable the Automation account-level identity. If you've already enabled it, you can disable the Automation account system-assigned managed identity. See [Disable your Azure Automation account managed identity](./disable-managed-identity-for-automation.md).
->[!NOTE]
->User-assigned identities are not supported yet.
+### Enable using the Azure portal
-### Enable system-assigned identity in Azure portal
+Perform the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to your Automation account and select **Identity** under **Account Settings**.
+1. In the Azure portal, navigate to your Automation account.
+
+1. Under **Account Settings**, select **Identity**.
1. Set the **System assigned** option to **On** and press **Save**. When you're prompted to confirm, select **Yes**.
+ :::image type="content" source="media/managed-identity/managed-identity-on.png" alt-text="Enabling system-assigned identity in Azure portal.":::
-Your Automation account can now use the system-assigned identity, which is registered with Azure Active Directory (Azure AD) and is represented by an object ID.
+ Your Automation account can now use the system-assigned identity, which is registered with Azure Active Directory (Azure AD) and is represented by an object ID.
+ :::image type="content" source="media/managed-identity/managed-identity-object-id.png" alt-text="Managed identity object ID.":::
-### Enable system-assigned identity through the REST API
+### Enable using PowerShell
-You can configure a system-assigned managed identity to the Automation account by using the following REST API call.
+Use the PowerShell cmdlet [Set-AzAutomationAccount](/powershell/module/az.automation/set-azautomationaccount) to enable the system-assigned managed identity.
-```http
-PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name?api-version=2020-01-13-preview
+```powershell
+$output = Set-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount `
+ -AssignSystemIdentity
+
+$output
```
-Request body
+The output should look similar to the following:
+
+For additional output, execute: `$output.identity | ConvertTo-Json`.
+
+### Enable using a REST API
+
+Syntax and example steps are provided below.
+
+#### Syntax
+
+The body syntax below enables a system-assigned managed identity for an existing Automation account. However, this syntax will remove any existing user-assigned managed identities associated with the Automation account.
+
+PATCH
+```json
+{
- "identity":
- {
- "type": "SystemAssigned"
+ "identity": {
+ "type": "SystemAssigned"
 }
}
```
+If there are multiple user-assigned identities defined, to retain them while enabling the system-assigned identity, you need to specify each user-assigned identity in a comma-delimited list, as in the following example:
+
+PATCH
+
+```json
+{
+ "identity" : {
+ "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroupName/providers/Microsoft.ManagedIdentity/userAssignedIdentities/cmkID": {},
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroupName/providers/Microsoft.ManagedIdentity/userAssignedIdentities/cmkID2": {}
+ }
+ }
+}
+
+```
+
+The syntax of the API is as follows:
+
+```http
+PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name?api-version=2020-01-13-preview
+```
+
+#### Example
+
+Perform the following steps.
+
+1. Copy and paste the body syntax into a file named `body_sa.json`. Save the file on your local machine or in an Azure storage account.
+
+1. Revise the variable value below and then execute.
+
+ ```powershell
+ $file = "path\body_sa.json"
+ ```
+
+1. This example uses the PowerShell cmdlet [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) to send the PATCH request to your Automation account.
+
+ ```powershell
+ # build URI
+ $URI = "https://management.azure.com/subscriptions/$subscriptionID/resourceGroups/$resourceGroup/providers/Microsoft.Automation/automationAccounts/$automationAccount`?api-version=2020-01-13-preview"
+
+ # build body
+ $body = Get-Content $file
+
+ # obtain access token
+ $azContext = Get-AzContext
+ $azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+ $profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
+ $token = $profileClient.AcquireAccessToken($azContext.Subscription.TenantId)
+ $authHeader = @{
+ 'Content-Type'='application/json'
+ 'Authorization'='Bearer ' + $token.AccessToken
+ }
+
+ # Invoke the REST API
+ $response = Invoke-RestMethod -Uri $URI -Method PATCH -Headers $authHeader -Body $body
+
+ # Review output
+ $response.identity | ConvertTo-Json
+ ```
+
+ The output should look similar to the following:
+
+ ```json
+ {
+ "PrincipalId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+ "TenantId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "Type": 0,
+ "UserAssignedIdentities": null
+ }
+ ```
+
+### Enable using an ARM template
+
+Syntax and example steps are provided below.
+
+#### Template syntax
+
+The sample template syntax below enables a system-assigned managed identity for the existing Automation account. However, this syntax will remove any existing user-assigned managed identities associated with the Automation account.
+```json
+{
- "name": "automation-account-name",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name",
- .
- .
- "identity": {
- "type": "SystemAssigned",
- "principalId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
- "tenantId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
- },
-.
-.
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Automation/automationAccounts",
+ "apiVersion": "2020-01-13-preview",
+ "name": "yourAutomationAccount",
+ "location": "[resourceGroup().location]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "sku": {
+ "name": "Basic"
+ }
+ }
+ }
+ ]
} ```
-|Property (JSON) | Value | Description|
-|-|--||
-| principalid | \<principal-ID\> | The Globally Unique Identifier (GUID) of the service principal object for the managed identity that represents your Automation account in the Azure AD tenant. This GUID sometimes appears as an "object ID" or objectID. |
-| tenantid | \<Azure-AD-tenant-ID\> | The Globally Unique Identifier (GUID) that represents the Azure AD tenant where the Automation account is now a member. Inside the Azure AD tenant, the service principal has the same name as the Automation account. |
+#### Example
+
+Perform the following steps.
+
+1. Revise the syntax of the template above to use your Automation account and save it to a file named `template_sa.json`.
+
+1. Revise the variable value below and then execute.
+
+ ```powershell
+ $templateFile = "path\template_sa.json"
+ ```
+
+1. Use the PowerShell cmdlet [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) to deploy the template.
+
+ ```powershell
+ New-AzResourceGroupDeployment `
+ -Name "SystemAssignedDeployment" `
+ -ResourceGroupName $resourceGroup `
+ -TemplateFile $templateFile
+ ```
+
+ The command won't produce an output; however, you can use the code below to verify:
+
+ ```powershell
+ (Get-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount).Identity | ConvertTo-Json
+ ```
-## Give identity access to Azure resources by obtaining a token
+ The output will look similar to the output shown for the REST API example, above.
-An Automation account can use its managed identity to get tokens to access other resources protected by Azure AD, such as Azure Key Vault. These tokens do not represent any specific user of the application. Instead, they represent the application that's accessing the resource. In this case, for example, the token represents an Automation account.
+## Give access to Azure resources by obtaining a token
+
+An Automation account can use its system-assigned managed identity to get tokens to access other resources protected by Azure AD, such as Azure Key Vault. These tokens don't represent any specific user of the application. Instead, they represent the application that's accessing the resource. In this case, for example, the token represents an Automation account.
Before you can use your system-assigned managed identity for authentication, set up access for that identity on the Azure resource where you plan to use the identity. To complete this task, assign the appropriate role to that identity on the target Azure resource.

This example uses Azure PowerShell to show how to assign the Contributor role in the subscription to the target Azure resource. The Contributor role is used as an example, and may or may not be required in your case.

```powershell
-New-AzRoleAssignment -ObjectId <automation-Identity-object-id> -Scope "/subscriptions/<subscription-id>" -RoleDefinitionName "Contributor"
+New-AzRoleAssignment `
+ -ObjectId <automation-Identity-object-id> `
+ -Scope "/subscriptions/<subscription-id>" `
+ -RoleDefinitionName "Contributor"
```
-## Authenticate access with managed identity
+## Authenticate access with system-assigned managed identity
-After you enable the managed identity for your Automation account and give an identity access to the target resource, you can specify that identity in runbooks against resources that support managed identity. For identity support, use the Az cmdlet `Connect-AzAccount` cmdlet. See [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) in the PowerShell reference.
+After you enable the managed identity for your Automation account and give the identity access to the target resource, you can specify that identity in runbooks against resources that support managed identity. For identity support, use the Az `Connect-AzAccount` cmdlet. See [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) in the PowerShell reference. Replace `SubscriptionID` with your actual subscription ID and then execute the following command:
```powershell
Connect-AzAccount -Identity
+$AzureContext = Set-AzContext -SubscriptionId "SubscriptionID"
```
->[!NOTE]
->If your organization is still using the deprecated AzureRM cmdlets, you can use `Connect-AzureRMAccount -Identity`.
+> [!NOTE]
+> If your organization is still using the deprecated AzureRM cmdlets, you can use `Connect-AzureRMAccount -Identity`.
## Generate an access token without using Azure cmdlets

For HTTP Endpoints make sure of the following.
-- The metadata header must be present and should be set to ΓÇ£trueΓÇ¥.
+- The metadata header must be present and should be set to "true".
- A resource must be passed along with the request, as a query parameter for a GET request and as form data for a POST request.
-- The X-IDENTITY-HEADER should be set to the value of the environment variable IDENTITY_HEADER for Hybrid Runbook Workers.
-- Content Type for the Post request must be 'application/x-www-form-urlencoded'.
+- The X-IDENTITY-HEADER should be set to the value of the environment variable IDENTITY_HEADER for Hybrid Runbook Workers.
+- Content Type for the Post request must be 'application/x-www-form-urlencoded'.
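As a rough illustration of these rules, the sketch below assembles the URL, headers, and body for a token request. The helper name `build_identity_token_request` is hypothetical (not part of any Azure SDK); `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` are the environment variables described above.

```python
import os
import urllib.parse

def build_identity_token_request(resource, method="GET"):
    """Hypothetical helper: build the URL, headers, and body for a managed
    identity token request, following the rules listed above."""
    endpoint = os.environ["IDENTITY_ENDPOINT"]
    headers = {
        "Metadata": "true",  # metadata header must be present and set to "true"
        "X-IDENTITY-HEADER": os.environ["IDENTITY_HEADER"],  # required on Hybrid Runbook Workers
    }
    if method == "GET":
        # for GET, the resource is passed as a query parameter
        url = endpoint + "?" + urllib.parse.urlencode({"resource": resource})
        return url, headers, None
    # for POST, the resource is passed as form data with the required content type
    headers["Content-Type"] = "application/x-www-form-urlencoded"
    body = urllib.parse.urlencode({"resource": resource})
    return endpoint, headers, body
```

The returned pieces could then be passed to any HTTP client (for example, `requests.request(method, url, headers=headers, data=body)`), mirroring the PowerShell samples that follow.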
-### Sample GET request
+### Get an access token for a system-assigned identity using HTTP GET
```powershell
$resource= "?resource=https://management.azure.com/"
$accessToken = Invoke-RestMethod -Uri $url -Method 'GET' -Headers $Headers
Write-Output $accessToken.access_token
```
-### Sample POST request
+### Get an access token for a system-assigned identity using HTTP POST
+ ```powershell
$url = $env:IDENTITY_ENDPOINT
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$accessToken = Invoke-RestMethod $url -Method 'POST' -Headers $headers -ContentT
Write-Output $accessToken.access_token
```
-## Sample runbooks using managed identity
-
-### Sample runbook to access a SQL database without using Azure cmdlets
-
-Make sure you've enabled an identity before you try this script. See [Enable system-assigned identity](#enable-system-assigned-identity).
-
-For details on provisioning access to an Azure SQL database, see [Provision Azure AD admin (SQL Database)](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-database).
-
-```powershell
-$queryParameter = "?resource=https://database.windows.net/"
-$url = $env:IDENTITY_ENDPOINT + $queryParameter
-$Headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
-$Headers.Add("X-IDENTITY-HEADER", $env:IDENTITY_HEADER)
-$Headers.Add("Metadata", "True")
-$content =[System.Text.Encoding]::Default.GetString((Invoke-WebRequest -UseBasicParsing -Uri $url -Method 'GET' -Headers $Headers).RawContentStream.ToArray()) | ConvertFrom-Json
-$Token = $content.access_token
-echo "The managed identities for Azure resources access token is $Token"
-$SQLServerName = "<ServerName>" # Azure SQL logical server name
-$DatabaseName = "<DBname>" # Azure SQL database name
-Write-Host "Create SQL connection string"
-$conn = New-Object System.Data.SqlClient.SQLConnection
-$conn.ConnectionString = "Data Source=$SQLServerName.database.windows.net;Initial Catalog=$DatabaseName;Connect Timeout=30"
-$conn.AccessToken = $Token
-Write-host "Connect to database and execute SQL script"
-$conn.Open()
-$ddlstmt = "CREATE TABLE Person( PersonId INT IDENTITY PRIMARY KEY, FirstName NVARCHAR(128) NOT NULL)"
-Write-host " "
-Write-host "SQL DDL command"
-$ddlstmt
-$command = New-Object -TypeName System.Data.SqlClient.SqlCommand($ddlstmt, $conn)
-Write-host "results"
-$command.ExecuteNonQuery()
-$conn.Close()
-```
-
-### Sample runbook to access a key vault using Azure cmdlets
-
-Make sure you've enabled an identity before you try this script. See [Enable system-assigned identity](#enable-system-assigned-identity).
+### Using system-assigned managed identity in Azure PowerShell
For more information, see [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret).
try {
}
```
-### Sample Python runbook to get a token
-
-Make sure you've enabled an identity before you try this runbook. See [Enable system-assigned identity](#enable-system-assigned-identity).
+### Using system-assigned managed identity in a Python runbook
```python
#!/usr/bin/env python3
response = requests.request("GET", endPoint, headers=headers, data=payload)
print(response.text)
```
+### Using system-assigned managed identity to access Azure SQL Database
+
+For details on provisioning access to an Azure SQL database, see [Provision Azure AD admin (SQL Database)](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-database).
+
+```powershell
+$queryParameter = "?resource=https://database.windows.net/"
+$url = $env:IDENTITY_ENDPOINT + $queryParameter
+$Headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
+$Headers.Add("X-IDENTITY-HEADER", $env:IDENTITY_HEADER)
+$Headers.Add("Metadata", "True")
+$content =[System.Text.Encoding]::Default.GetString((Invoke-WebRequest -UseBasicParsing -Uri $url -Method 'GET' -Headers $Headers).RawContentStream.ToArray()) | ConvertFrom-Json
+$Token = $content.access_token
+echo "The managed identities for Azure resources access token is $Token"
+$SQLServerName = "<ServerName>" # Azure SQL logical server name
+$DatabaseName = "<DBname>" # Azure SQL database name
+Write-Host "Create SQL connection string"
+$conn = New-Object System.Data.SqlClient.SQLConnection
+$conn.ConnectionString = "Data Source=$SQLServerName.database.windows.net;Initial Catalog=$DatabaseName;Connect Timeout=30"
+$conn.AccessToken = $Token
+Write-Host "Connect to database and execute SQL script"
+$conn.Open()
+$ddlstmt = "CREATE TABLE Person( PersonId INT IDENTITY PRIMARY KEY, FirstName NVARCHAR(128) NOT NULL)"
+Write-Host " "
+Write-Host "SQL DDL command"
+$ddlstmt
+$command = New-Object -TypeName System.Data.SqlClient.SqlCommand($ddlstmt, $conn)
+Write-Host "results"
+$command.ExecuteNonQuery()
+$conn.Close()
+```
+
## Next steps

- If your runbooks aren't completing successfully, review [Troubleshoot Azure Automation managed identity issues (preview)](troubleshoot/managed-identity.md).
automation Remove User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/remove-user-assigned-identity.md
+
+ Title: Remove user-assigned managed identity for Azure Automation account (preview)
+description: This article explains how to remove a user-assigned managed identity for an Azure Automation account.
+ Last updated: 07/13/2021
+# Remove user-assigned managed identity for Azure Automation account (preview)
+
+You can remove a user-assigned managed identity in Azure Automation by using the Azure portal, PowerShell, the Azure REST API, or an Azure Resource Manager (ARM) template.
+
+## Remove using the Azure portal
+
+You can remove a user-assigned managed identity from the Azure portal no matter how the user-assigned managed identity was originally added.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Automation account and under **Account Settings**, select **Identity**.
+
+1. Select the **User assigned** tab.
+
+1. Select the user-assigned managed identity to be removed from the list.
+
+1. Select **Remove**. When you're prompted to confirm, select **Yes**.
+
+The user-assigned managed identity is removed and no longer has access to the target resource.
+
+## Remove using PowerShell
+
+Use the PowerShell cmdlet [Set-AzAutomationAccount](/powershell/module/az.automation/set-azautomationaccount) to remove all user-assigned managed identities and retain an existing system-assigned managed identity.
+
+1. Sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
+
+ ```powershell
+ # Sign in to your Azure subscription
+ $sub = Get-AzSubscription -ErrorAction SilentlyContinue
+ if(-not($sub))
+ {
+ Connect-AzAccount
+ }
+ ```
+
+1. Provide appropriate values for the variables, and then execute the script.
+
+ ```powershell
+ $resourceGroup = "resourceGroupName"
+ $automationAccount = "automationAccountName"
+ ```
+
+1. Execute [Set-AzAutomationAccount](/powershell/module/az.automation/set-azautomationaccount).
+
+ ```powershell
+ # Removes all user-assigned identities and keeps the system-assigned identity
+ $output = Set-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount `
+ -AssignSystemIdentity
+
+ $output.identity.Type
+ ```
+
+ The output will be `SystemAssigned`.
+
+## Remove using REST API
+
+You can remove a user-assigned managed identity from the Automation account by using the following REST API call and example.
+
+### Request body
+
+Scenario: System-assigned managed identity is enabled or is to be enabled. One of many user-assigned managed identities is to be removed. This example removes a user-assigned managed identity named `firstIdentity`.
+
+PATCH
+
+```json
+{
+ "identity": {
+ "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.ManagedIdentity/userAssignedIdentities/firstIdentity": null
+ }
+ }
+}
+```
+
+Scenario: System-assigned managed identity is enabled or is to be enabled. All user-assigned managed identities are to be removed.
+
+PUT
+
+```json
+{
+ "identity": {
+ "type": "SystemAssigned"
+ }
+}
+```
+
+Scenario: System-assigned managed identity is disabled or is to be disabled. One of many user-assigned managed identities is to be removed. This example removes a user-assigned managed identity named `firstIdentity`.
+
+PATCH
+
+```json
+{
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.ManagedIdentity/userAssignedIdentities/firstIdentity": null
+ }
+ }
+}
+
+```
+
+Scenario: System-assigned managed identity is disabled or is to be disabled. All user-assigned managed identities are to be removed.
+
+PUT
+
+```json
+{
+ "identity": {
+ "type": "None"
+ }
+}
+```
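The request bodies above all follow one pattern: each identity to remove is keyed by its resource ID with a JSON `null` value, and the `type` field states what remains afterward. As a minimal sketch of that pattern (the `remove_ua_identity_body` helper is hypothetical, introduced here only for illustration), the PATCH body for removing one named identity can be generated like this:

```python
import json

def remove_ua_identity_body(subscription_id, resource_group, identity_name,
                            keep_system_assigned=True):
    """Hypothetical helper: build the PATCH body that removes one
    user-assigned identity. Setting its resource ID to JSON null is
    what triggers the removal."""
    identity_id = (
        f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identity_name}"
    )
    # the remaining type depends on whether a system-assigned identity is kept
    id_type = "SystemAssigned, UserAssigned" if keep_system_assigned else "UserAssigned"
    return json.dumps(
        {"identity": {"type": id_type,
                      "userAssignedIdentities": {identity_id: None}}},
        indent=4,
    )
```

Python's `None` serializes to JSON `null`, producing the same shape as the first PATCH scenario above.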
+
+The following is the service's REST API request URI to send the PATCH request.
+
+```http
+https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name?api-version=2020-01-13-preview
+```
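If you script this call outside PowerShell, the same request URI can be assembled from its parts; `automation_account_uri` is a hypothetical name used only for this sketch:

```python
def automation_account_uri(subscription_id, resource_group, account_name,
                           api_version="2020-01-13-preview"):
    """Hypothetical helper: format the management-plane URI for an
    Automation account, matching the template shown above."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Automation/automationAccounts/"
        f"{account_name}?api-version={api_version}"
    )
```

The PATCH request in the example that follows is then sent to this URI with a bearer token in the `Authorization` header.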
+
+### Example
+
+Perform the following steps.
+
+1. Copy and paste the request body, depending on which operation you want to perform, into a file named `body_remove_ua.json`. Make any required modifications, and then save the file on your local machine or in an Azure storage account.
+
+1. Sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
+
+ ```powershell
+ # Sign in to your Azure subscription
+ $sub = Get-AzSubscription -ErrorAction SilentlyContinue
+ if(-not($sub))
+ {
+ Connect-AzAccount
+ }
+ ```
+
+1. Provide appropriate values for the variables, and then execute the script.
+
+ ```powershell
+ $subscriptionID = "subscriptionID"
+ $resourceGroup = "resourceGroupName"
+ $automationAccount = "automationAccountName"
+ $file = "path\body_remove_ua.json"
+ ```
+
+1. This example uses the PowerShell cmdlet [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) to send the PATCH request to your Automation account.
+
+ ```powershell
+ # build URI
+ $URI = "https://management.azure.com/subscriptions/$subscriptionID/resourceGroups/$resourceGroup/providers/Microsoft.Automation/automationAccounts/$automationAccount`?api-version=2020-01-13-preview"
+
+ # build body
+ $body = Get-Content $file
+
+ # obtain access token
+ $azContext = Get-AzContext
+ $azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+ $profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
+ $token = $profileClient.AcquireAccessToken($azContext.Subscription.TenantId)
+ $authHeader = @{
+ 'Content-Type'='application/json'
+ 'Authorization'='Bearer ' + $token.AccessToken
+ }
+
+ # Invoke the REST API
+ Invoke-RestMethod -Uri $URI -Method PATCH -Headers $authHeader -Body $body
+
+ # Confirm removal
+ (Get-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount).Identity.Type
+ ```
+
+ Depending on the syntax you used, the output will be one of: `SystemAssignedUserAssigned`, `SystemAssigned`, `UserAssigned`, or blank.
+
+## Remove using Azure Resource Manager template
+
+If you added the user-assigned managed identity for your Automation account using an Azure Resource Manager template, you can remove the user-assigned managed identity by modifying the template, and then re-running it.
+
+Scenario: System-assigned managed identity is enabled or is to be enabled. One of two user-assigned managed identities is to be removed. This syntax snippet removes **all** user-assigned managed identities **except for** the one passed as a parameter to the template.
+
+```json
+...
+"identity": {
+ "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',parameters('userAssignedOne'))]": {}
+ }
+},
+...
+```
+
+Scenario: System-assigned managed identity is enabled or is to be enabled. All user-assigned managed identities are to be removed.
+
+```json
+...
+"identity": {
+ "type": "SystemAssigned"
+},
+...
+```
+
+Scenario: System-assigned managed identity is disabled or is to be disabled. One of two user-assigned managed identities is to be removed. This syntax snippet removes **all** user-assigned managed identities **except for** the one passed as a parameter to the template.
+
+```json
+...
+"identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',parameters('userAssignedOne'))]": {}
+ }
+},
+...
+```
+
+Use the [Get-AzAutomationAccount](/powershell/module/az.automation/get-azautomationaccount) cmdlet to verify. Depending on the syntax you used, the output will be one of: `SystemAssignedUserAssigned`, `SystemAssigned`, or `UserAssigned`.
+
+```powershell
+(Get-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount).Identity.Type
+```
+
+## Next steps
+
+- For more information about enabling managed identities in Azure Automation, see [Enable and use managed identity for Automation (preview)](enable-managed-identity-for-automation.md).
+
+- For an overview of Automation account security, see [Automation account authentication overview](automation-security-overview.md).
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
Azure Availability Zones are available with your Azure subscription. Learn more
## Next steps

> [!div class="nextstepaction"]
-> [Regions and Availability Zones in Azure](az-overview.md)
+> [Regions and Availability Zones in Azure](az-overview.md)
azure-app-configuration Howto Backup Config Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-backup-config-store.md
az role assignment create \
  --scope $secondaryAppConfigId
```
-Use the following command or the [Azure portal](../storage/common/storage-auth-aad-rbac-portal.md#assign-azure-roles-using-the-azure-portal) to grant the managed identity of your function app access to your queue. Assign the `Storage Queue Data Contributor` role in the queue.
+Use the following command or the [Azure portal](../storage/blobs/assign-azure-role-data-access.md#assign-an-azure-role) to grant the managed identity of your function app access to your queue. Assign the `Storage Queue Data Contributor` role in the queue.
```azurecli-interactive
az role assignment create \
az group delete --name $resourceGroupName
Now that you know how to set up automatic backup of your key-values, learn more about how you can increase the geo-resiliency of your application:
-- [Resiliency and disaster recovery](concept-disaster-recovery.md)
+- [Resiliency and disaster recovery](concept-disaster-recovery.md)
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
To set up a managed identity in the portal, you first create an application and
## Deploy your application
-Using managed identities requires you to deploy your app to an Azure service. Managed identities can't be used for authentication of locally-running apps. To deploy the .NET Core app that you created in the [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) quickstart and modified to use managed identities, follow the guidance in [Publish your web app](/azure/app-service/quickstart-dotnetcore?tabs=netcore31&pivots=development-environment-vs#publish-your-web-app).
+Using managed identities requires you to deploy your app to an Azure service. Managed identities can't be used for authentication of locally-running apps. To deploy the .NET Core app that you created in the [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) quickstart and modified to use managed identities, follow the guidance in [Publish your web app](../app-service/quickstart-dotnetcore.md?pivots=development-environment-vs&tabs=netcore31#publish-your-web-app).
-In addition to App Service, many other Azure services support managed identities. For more information, see [Services that support managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities).
+In addition to App Service, many other Azure services support managed identities. For more information, see [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
## Clean up resources
azure-cache-for-redis Cache Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices.md
If you would like to test how your code works under error conditions, consider u
* **We recommend using Dv2 VM Series** for your client as they have better hardware and will give the best results.
* Make sure the client VM you use has **at least as much compute and bandwidth** as the cache being tested.
* **Test under failover conditions** on your cache. It's important to ensure that you don't test the performance of your cache only under steady state conditions. Test under failover conditions, too, and measure the CPU/Server Load on your cache during that time. You can start a failover by [rebooting the primary node](cache-administration.md#reboot). Testing under failover conditions allows you to see how your application behaves in terms of throughput and latency during failover conditions. Failover can happen during updates and during an unplanned event. Ideally you don't want to see CPU/Server Load peak to more than say 80% even during a failover as that can affect performance.
-* **Some cache sizes** are hosted on VMs with four or more cores. Distribute the TLS encryption/decryption and TLS connection/disconnection workloads across multiple cores to bring down overall CPU usage on the cache VMs. [See here for details around VM sizes and cores](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance)
+* **Some cache sizes** are hosted on VMs with four or more cores. Distribute the TLS encryption/decryption and TLS connection/disconnection workloads across multiple cores to bring down overall CPU usage on the cache VMs. [See here for details around VM sizes and cores](./cache-planning-faq.yml#azure-cache-for-redis-performance)
* **Enable VRSS** on the client machine if you are on Windows. [See here for details](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn383582(v=ws.11)). Example PowerShell script:
  >PowerShell -ExecutionPolicy Unrestricted
  >Enable-NetAdapterRSS -Name ( Get-NetAdapter).Name
* **Consider using Premium tier Redis instances**. These cache sizes will have better network latency and throughput because they're running on better hardware for both CPU and Network.

> [!NOTE]
- > Our observed performance results are [published here](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance) for your reference. Also, be aware that SSL/TLS adds some overhead, so you may get different latencies and/or throughput if you're using transport encryption.
+ > Our observed performance results are [published here](./cache-planning-faq.yml#azure-cache-for-redis-performance) for your reference. Also, be aware that SSL/TLS adds some overhead, so you may get different latencies and/or throughput if you're using transport encryption.
### Redis-Benchmark examples
Test GET requests using a 1k payload.
**To test throughput:** Pipelined GET requests with 1k payload.
-> redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50
+> redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-configure.md
Each pricing tier has different limits for client connections, memory, and bandw
| Azure Cache for Redis metric | More information | | | |
-| Network bandwidth usage |[Cache performance - available bandwidth](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance) |
+| Network bandwidth usage |[Cache performance - available bandwidth](./cache-planning-faq.yml#azure-cache-for-redis-performance) |
| Connected clients |[Default Redis server configuration - max clients](#maxclients) |
| Server load |[Usage charts - Redis Server Load](cache-how-to-monitor.md#usage-charts) |
-| Memory usage |[Cache performance - size](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance) |
+| Memory usage |[Cache performance - size](./cache-planning-faq.yml#azure-cache-for-redis-performance) |
To upgrade your cache, select **Upgrade now** to change the pricing tier and [scale](#scale) your cache. For more information on choosing a pricing tier, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier)
For information on moving resources from one resource group to another, and from
## Next steps
-* For more information on working with Redis commands, see [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
+* For more information on working with Redis commands, see [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-monitor.md
Each metric includes two versions. One metric measures performance for the entir
| Cache Hits |The number of successful key lookups during the specified reporting interval. This number maps to `keyspace_hits` from the Redis [INFO](https://redis.io/commands/info) command. |
| Cache Latency (Preview) | The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval. |
| Cache Misses |The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there may be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`. |
-| Cache Read |The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. **This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.** |
+| Cache Read |The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. **This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](./cache-planning-faq.yml#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.** |
| Cache Write |The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. |
| Connected Clients |The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections. |
| CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. |
Activity logs provide insight into the operations that completed on your Azure C
To view activity logs for your cache, select **Activity logs** from the **Resource menu**.
-For more information about Activity logs, see [Overview of the Azure Activity Log](../azure-monitor/essentials/platform-logs-overview.md).
+For more information about Activity logs, see [Overview of the Azure Activity Log](../azure-monitor/essentials/platform-logs-overview.md).
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-overview.md
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
Consider the following options when choosing an Azure Cache for Redis tier:

* **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tiers 12 GB - 14 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For pricing details, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
-* **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance).
+* **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).
* **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation, which will cause timeouts in your application.
-* **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance).
+* **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).
* **Maximum number of client connections**: The Premium and Enterprise tiers offer the maximum numbers of clients that can connect to Redis, offering higher numbers of connections for larger sized caches. Clustering increases the total amount of network bandwidth available for a clustered cache.
* **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss.
* **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through Azure portal and CLI. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md).
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-node.md
For Linux function apps, run the following Azure CLI command to update the Node
az functionapp config set --linux-fx-version "node|14" --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>" ```
+To learn more about the Azure Functions runtime support policy, see the [language support policy article](./language-support-policy.md).
+ ## Dependency management In order to use community libraries in your JavaScript code, as shown in the following example, you need to ensure that all dependencies are installed on your function app in Azure.
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-powershell.md
The following table shows the PowerShell versions available to each major versio
You can see the current version by printing `$PSVersionTable` from any function.
+To learn more about the Azure Functions runtime support policy, see the [language support policy article](./language-support-policy.md).
+ ### Running local on a specific version When running locally, the Azure Functions runtime defaults to using PowerShell Core 6. To instead use PowerShell 7 when running locally, add the setting `"FUNCTIONS_WORKER_RUNTIME_VERSION" : "~7"` to the `Values` array in the local.settings.json file in the project root. When running locally on PowerShell 7, your local.settings.json file looks like the following example:
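A minimal local.settings.json sketch with that setting in place — the storage and runtime values shown here are illustrative placeholders, not requirements from the article:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "powershell",
    "FUNCTIONS_WORKER_RUNTIME_VERSION": "~7"
  }
}
```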
When running locally the Azure Functions runtime defaults to using PowerShell Co
Your function app must be running on version 3.x to be able to upgrade from PowerShell Core 6 to PowerShell 7. To learn how to do this, see [View and update the current runtime version](set-runtime-version.md#view-and-update-the-current-runtime-version). + Use the following steps to change the PowerShell version used by your function app. You can do this either in the Azure portal or by using PowerShell. # [Portal](#tab/portal)
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
When running locally, the runtime uses the available Python version.
### Changing Python version
-To set a Python function app to a specific language version, you need to specify the language as well as the version of the language in `LinuxFxVersion` field in site config. For example, to change Python app to use Python 3.8.
+To set a Python function app to a specific language version, you need to specify the language as well as the version of the language in the `linuxFxVersion` field in the site config. For example, to change a Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
-Set `linuxFxVersion` to `python|3.8`.
+To learn more about the Azure Functions runtime support policy, see the [language support policy article](./language-support-policy.md).
To see the full list of supported Python versions for function apps, see the [supported languages article](./supported-languages.md) ++ # [Azure CLI](#tab/azurecli-linux) You can view and set the `linuxFxVersion` from the Azure CLI.
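As a sketch of that CLI route — the app and resource group names are placeholders — the same `az functionapp config set` command shown earlier for Node can set the Python version:

```azurecli
az functionapp config set --linux-fx-version "python|3.8" --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>"
```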
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/language-support-policy.md
There are few exceptions to the retirement policy outlined above. Here is a list
|Language Versions |EOL Date |Expected Retirement Date| |--|--|-|
+|.NET 5|February 2022|TBA|
|Node 6|30 April 2019|TBA| |Node 8|31 December 2019|TBA| |Node 10|30 April 2021|TBA|
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-wwps.md
This section addresses common customer questions related to Azure public, privat
### Transparency and audit -- **Audit documentation:** Does Microsoft make all audit documentation readily available to customers to download and examine? **Answer:** Yes, Microsoft makes independent third-party audit reports and other related documentation available for download under a non-disclosure agreement from the Azure portal. You will need an existing Azure subscription or [free trial subscription](https://azure.microsoft.com/free/) to access the Azure Security Center [audit reports blade](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade). Additional compliance documentation is available from the Service Trust Portal (STP) [Audit Reports](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3) section. You must log in to access audit reports on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](https://aka.ms/stphelp).
+- **Audit documentation:** Does Microsoft make all audit documentation readily available to customers to download and examine? **Answer:** Yes, Microsoft makes independent third-party audit reports and other related documentation available for download under a non-disclosure agreement from the Azure portal. You will need an existing Azure subscription or [free trial subscription](https://azure.microsoft.com/free/) to access the Azure Security Center [audit reports blade](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade). Additional compliance documentation is available from the Service Trust Portal (STP) [Audit Reports](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3) section. You must log in to access audit reports on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](/microsoft-365/compliance/get-started-with-service-trust-portal).
- **Process auditability:** Does Microsoft make its processes, data flow, and documentation available to customers or regulators for audit? **Answer:** Microsoft offers a Regulator Right to Examine, which is a program Microsoft implemented to provide regulators with direct right to examine Azure, including the ability to conduct an on-site examination, to meet with Microsoft personnel and Microsoft external auditors, and to access any related information, records, reports, and documents. - **Service documentation:** Can Microsoft provide in-depth documentation covering service architecture, software and hardware components, and data protocols? **Answer:** Yes, Microsoft provides extensive and in-depth Azure online documentation covering all these topics. For example, you can review documentation on Azure [products](../index.yml), [global infrastructure](https://azure.microsoft.com/global-infrastructure/), and [API reference](/rest/api/azure/).
Learn more about:
- [Azure Security](../security/index.yml) - [Azure Compliance](../compliance/index.yml) - [Azure guidance for secure isolation](./azure-secure-isolation-guidance.md)-- [Azure for government - worldwide government](https://azure.microsoft.com/industries/government/)
+- [Azure for government - worldwide government](https://azure.microsoft.com/industries/government/)
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-iot-hub-maps.md
IoT Hub enables secure and reliable bi-directional communication between an IoT
> [!NOTE] > The ability to publish device telemetry events on Event Grid is currently in preview. This feature is available in all regions except the following: East US, West US, West Europe, Azure Government, Azure China 21Vianet, and Azure Germany.
-To create an IoT hub in the *ContosoRental* resource group, follow the steps in [create an IoT hub](../iot-hub/quickstart-send-telemetry-dotnet.md#create-an-iot-hub).
+To create an IoT hub in the *ContosoRental* resource group, follow the steps in [create an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp#create-an-iot-hub).
## Register a device in your IoT hub
To learn more about how to send device-to-cloud telemetry, and the other way aro
> [!div class="nextstepaction"]
-> [Send telemetry from a device](../iot-hub/quickstart-send-telemetry-dotnet.md)
+> [Send telemetry from a device](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp)
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The Azure Monitor agent supports Azure service tags (both AzureMonitor and Azure
## Next steps - [Install Azure Monitor agent](azure-monitor-agent-install.md) on Windows and Linux virtual machines.-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log.md
The previous sections described how to create, view, and manage log alert rules
1. You can disable a log alert rule using the following command: ```azurecli
- az monitor scheduled-query update -g {ResourceGroup} -n {AlertRuleName} --enabled false
+ az monitor scheduled-query update -g {ResourceGroup} -n {AlertRuleName} --disabled true
``` 1. You can delete a log alert rule using the following command:
azure-monitor Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric.md
The previous sections described how to create, view, and manage metric alert rul
6. You can disable a metric alert rule using the following command. ```azurecli
- az monitor metrics alert update -g {ResourceGroup} -n {AlertRuleName} --enabled false
+ az monitor metrics alert update -g {ResourceGroup} -n {AlertRuleName} --enabled false
``` 7. You can delete a metric alert rule using the following command.
Metric alert rules have dedicated PowerShell cmdlets available:
- [Understand how metric alerts work](./alerts-metric-overview.md) - [Understand how metric alerts with Dynamic Thresholds condition work](../alerts/alerts-dynamic-thresholds.md) - [Understand the web hook schema for metric alerts](./alerts-metric-near-real-time.md#payload-schema)-- [Troubleshooting problems in metric alerts](./alerts-troubleshoot-metric.md)
+- [Troubleshooting problems in metric alerts](./alerts-troubleshoot-metric.md)
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/api-custom-events-metrics.md
If you set any of these values yourself, consider removing the relevant line fro
* **ID**: A generated value that correlates different events, so that when you inspect any event in Diagnostic Search, you can find related items. * **Name**: An identifier, usually the URL of the HTTP request. * **SyntheticSource**: If not null or empty, a string that indicates that the source of the request has been identified as a robot or web test. By default, it is excluded from calculations in Metrics Explorer.
-* **Properties**: Properties that are sent with all telemetry data. It can be overridden in individual Track* calls.
* **Session**: The user's session. The ID is set to a generated value, which is changed when the user has not been active for a while. * **User**: User information.
azure-monitor Cloudservices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/cloudservices.md
Did you build for .NET 4.6? .NET 4.6 is not automatically supported in Azure clo
[netlogs]: ./asp-net-trace-logs.md [portal]: https://portal.azure.com/ [qna]: ../faq.yml
-[redfield]: ./monitor-performance-live-website-now.md
+[redfield]: ./status-monitor-v2-overview.md
[start]: ./app-insights-overview.md
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
TelemetryConfiguration.Active.ApplicationIdProvider = new DictionaryApplicationI
[exceptions]: ./asp-net-exceptions.md [netlogs]: ./asp-net-trace-logs.md [new]: ./create-new-resource.md
-[redfield]: ./monitor-performance-live-website-now.md
-[start]: ./app-insights-overview.md
-
+[redfield]: ./status-monitor-v2-overview.md
+[start]: ./app-insights-overview.md
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
The setting applies to all of these metrics:
> [!NOTE] > Authentication feature is available starting from version 3.2.0-BETA
-It allows you to configure agent to generate [token credentials](https://go.microsoft.com/fwlink/?linkid=2163810) that are required for Azure Active Directory Authentication.
+It allows you to configure the agent to generate [token credentials](/java/api/overview/azure/identity-readme#credentials) that are required for Azure Active Directory authentication.
For more information, check out the [Authentication](./azure-ad-authentication.md) documentation. ## Self-diagnostics
Please configure specific options based on your needs.
} } }
-```
+```
azure-monitor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/release-notes.md
Read also our [blogs](https://azure.microsoft.com/blog/tag/application-insights/
Get started with codeless monitor codeless monitoring: * [Azure VM and Azure virtual machine scale set IIS-hosted apps](./azure-vm-vmss-apps.md)
-* [IIS server](./monitor-performance-live-website-now.md)
+* [IIS server](./status-monitor-v2-overview.md)
* [Azure Web Apps](./azure-web-apps.md) Get started with code-based monitoring:
Get started with code-based monitoring:
* [ASP.NET Core](./asp-net-core.md) * [Java](./java-in-process-agent.md) * [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
-
+* [Python](./opencensus-python.md)
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/sdk-connection-string.md
You can set the connection string in the `applicationinsights.json` configuratio
} ```
-See [connection string configuration](/azure/azure-monitor/app/java-standalone-config#connection-string) for more details.
+See [connection string configuration](./java-standalone-config.md#connection-string) for more details.
For Application Insights Java 2.x, you can set the connection string in the `ApplicationInsights.xml` configuration file:
Get started at development time with:
* [ASP.NET Core](./asp-net-core.md) * [Java](./java-in-process-agent.md) * [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
-
+* [Python](./opencensus-python.md)
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/continuous-monitoring.md
In order to gain observability across your entire environment, you need to enabl
- [Azure DevOps Projects](../devops-project/overview.md) give you a simplified experience with your existing code and Git repository, or choose from one of the sample applications to create a Continuous Integration (CI) and Continuous Delivery (CD) pipeline to Azure. - [Continuous monitoring in your DevOps release pipeline](./app/continuous-monitoring.md) allows you to gate or rollback your deployment based on monitoring data.-- [Status Monitor](./app/monitor-performance-live-website-now.md) allows you to instrument a live .NET app on Windows with Azure Application Insights, without having to modify or redeploy your code.
+- [Status Monitor](./app/status-monitor-v2-overview.md) allows you to instrument a live .NET app on Windows with Azure Application Insights, without having to modify or redeploy your code.
- If you have access to the code for your application, then enable full monitoring with [Application Insights](./app/app-insights-overview.md) by installing the Azure Monitor Application Insights SDK for [.NET](./app/asp-net.md), [.NET Core](./app/asp-net-core.md), [Java](./app/java-in-process-agent.md), [Node.js](./app/nodejs-quick-start.md), or [any other programming languages](./app/platforms.md). This allows you to specify custom events, metrics, or page views that are relevant to your application and your business.
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/data-platform-metrics.md
For most resources in Azure, platform metrics are stored for 93 days. There are
> [!NOTE]
-> As mentioned above, for most resources in Azure, platform metrics are stored for 93 days. However, you can only query (in Metrics tile) for no more than 30 days worth of data on any single chart. This limitation doesn't apply to log-based metrics. In case you see a blank chart or your chart only displays part of metric data, verify the difference between start and end dates in the time picker doesn't exceed the 30-day interval. Once you have selected a 30 day interval, you can [pan](https://docs.microsoft.com/azure/azure-monitor/essentials/metrics-charts#pan) the chart to view the full retention window.
+> As mentioned above, for most resources in Azure, platform metrics are stored for 93 days. However, you can only query (in the Metrics tile) for no more than 30 days' worth of data on any single chart. This limitation doesn't apply to log-based metrics. In case you see a blank chart or your chart only displays part of the metric data, verify that the difference between the start and end dates in the time picker doesn't exceed the 30-day interval. Once you have selected a 30-day interval, you can [pan](./metrics-charts.md#pan) the chart to view the full retention window.
For most resources in Azure, platform metrics are stored for 93 days. There are
- Learn more about the [Azure Monitor data platform](../data-platform.md). - Learn about [log data in Azure Monitor](../logs/data-platform-logs.md).-- Learn about the [monitoring data available](../agents/data-sources.md) for different resources in Azure.
+- Learn about the [monitoring data available](../agents/data-sources.md) for different resources in Azure.
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Batch |[Azure Batch logging](../../batch/batch-diagnostics.md) | | Cognitive Services | [Logging for Azure Cognitive Services](../../cognitive-services/diagnostic-logging.md) | | Container Instances | [Logging for Azure Container Instances](../../container-instances/container-instances-log-analytics.md#log-schema) |
-| Container Registry | [Logging for Azure Container Registry](../../container-registry/container-registry-diagnostics-audit-logs.md) |
+| Container Registry | [Logging for Azure Container Registry](../../container-registry/monitor-service.md) |
| Content Delivery Network | [Azure Logs for CDN](../../cdn/cdn-azure-diagnostic-logs.md) | | CosmosDB | [Azure Cosmos DB Logging](../../cosmos-db/monitor-cosmos-db.md) | | Data Factory | [Monitor Data Factories using Azure Monitor](../../data-factory/monitor-using-azure-monitor.md) |
The schema for resource logs varies depending on the resource and log category.
| Azure Database for MySQL | [Azure Database for MySQL diagnostic logs](../../mysql/concepts-server-logs.md#diagnostic-logs) | | Azure Database for PostgreSQL | [Azure Database for PostgreSQL logs](../../postgresql/concepts-server-logs.md#resource-logs) | | Azure Databricks | [Diagnostic logging in Azure Databricks](/azure/databricks/administration-guide/account-settings/azure-diagnostic-logs) |
+| Azure Machine Learning | [Diagnostic logging in Azure Machine Learning](/azure/machine-learning/monitor-resource-reference) |
| DDoS Protection | [Logging for Azure DDoS Protection Standard](../../ddos-protection/diagnostic-logging.md#log-schemas) | | Azure Digital Twins | [Set up Azure Digital Twins Diagnostics](../../digital-twins/troubleshoot-diagnostics.md#log-schemas) | Event Hubs |[Azure Event Hubs logs](../../event-hubs/event-hubs-diagnostic-logs.md) |
The schema for resource logs varies depending on the resource and log category.
| IoT Hub | [IoT Hub Operations](../../iot-hub/monitor-iot-hub-reference.md#resource-logs) | | Key Vault |[Azure Key Vault Logging](../../key-vault/general/logging.md) | | Kubernetes Service |[Azure Kubernetes Logging](../../aks/view-control-plane-logs.md#log-event-schema) |
-| Load Balancer |[Log analytics for Azure Load Balancer](../../load-balancer/load-balancer-monitor-log.md) |
+| Load Balancer |[Log analytics for Azure Load Balancer](../../load-balancer/monitor-load-balancer.md) |
| Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md) | | Media Services | [Media services monitoring schemas](../../media-services/latest/monitoring/monitor-media-services-data-reference.md#schemas) | | Network Security Groups |[Log analytics for network security groups (NSGs)](../../virtual-network/virtual-network-nsg-manage-log.md) |
azure-monitor Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/data-explorer.md
The **Usage** tab allows users to deep dive into the performance of the cluster'
The **tables** tab shows the latest and historical properties of tables in the cluster. You can see which tables are consuming the most space, track growth history by table size, hot data, and the number of rows over time.
-The **cache** tab allows users to analyze their actual queries' lookback window patterns and compare them to the configured cache policy (for each table). You can identify tables used by the most queries and tables that are not queried at all, and adapt the cache policy accordingly. You may get particular cache policy recommendations on specific tables in Azure Advisor (currently, cache recommendations are available only from the [main Azure Advisor dashboard](https://docs.microsoft.com/azure/data-explorer/azure-advisor#use-the-azure-advisor-recommendations)), based on actual queries' lookback window in the past 30 days and an un-optimized cache policy for at least 95% of the queries. Cache reduction recommendations in Azure Advisor are available for clusters that are "bounded by data" (meaning the cluster has low CPU and low ingestion utilization, but because of high data capacity, the cluster could not scale-in or scale-down).
+The **cache** tab allows users to analyze their actual queries' lookback window patterns and compare them to the configured cache policy (for each table). You can identify tables used by the most queries and tables that are not queried at all, and adapt the cache policy accordingly. You may get particular cache policy recommendations on specific tables in Azure Advisor (currently, cache recommendations are available only from the [main Azure Advisor dashboard](/azure/data-explorer/azure-advisor#use-the-azure-advisor-recommendations)), based on actual queries' lookback window in the past 30 days and an un-optimized cache policy for at least 95% of the queries. Cache reduction recommendations in Azure Advisor are available for clusters that are "bounded by data" (meaning the cluster has low CPU and low ingestion utilization, but because of high data capacity, the cluster could not scale-in or scale-down).
[![Screenshot of cache details](./media/data-explorer/cache-tab.png)](./media/data-explorer/cache-tab.png#lightbox)
Currently, diagnostic logs do not work retroactively, so the data will only star
## Next steps
-Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md).
+Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md).
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-overview.md
The tables below have the following columns:
## Next steps - See [Enable SQL insights](sql-insights-enable.md) for instructions on enabling SQL insights-- See [Frequently asked questions](/azure/azure-monitor/faq#sql-insights-preview) for frequently asked questions about SQL insights
+- See [Frequently asked questions](/azure/azure-monitor/faq#sql-insights-preview) for frequently asked questions about SQL insights
azure-monitor Wire Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/wire-data.md
VMConnection
### More examples queries
-Refer to the [VM insights log search documentation](../vm/vminsights-log-search.md) and the [VM insights alert documentation](../vm/vminsights-alerts.md#sample-alert-queries) for additional example queries.
+Refer to the [VM insights log search documentation](../vm/vminsights-log-search.md) and the [VM insights alert documentation](../vm/monitor-virtual-machine-alerts.md) for additional example queries.
## Uninstall Wire Data 2.0 Solution
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/cross-workspace-query.md
Last updated 06/30/2021
Azure Monitor Logs support querying across multiple Log Analytics workspaces and Application Insights apps in the same resource group, another resource group, or another subscription. This provides you with a system-wide view of your data.
-If you manage subscriptions in other Azure Active Directory (Azure AD) tenants through [Azure Lighthouse](/azure/lighthouse/overview), you can include [Log Analytics workspaces created in those customer tenants](/azure/lighthouse/how-to/monitor-at-scale) in your queries.
+If you manage subscriptions in other Azure Active Directory (Azure AD) tenants through [Azure Lighthouse](../../lighthouse/overview.md), you can include [Log Analytics workspaces created in those customer tenants](../../lighthouse/how-to/monitor-at-scale.md) in your queries.
There are two methods to query data that is stored in multiple workspace and apps:
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-storage.md
To replace a storage account used for ingestion,
When using your own storage account, retention is up to you. Log Analytics won't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences. #### Consider load
-Storage accounts can handle a certain load of read and write requests before they start throttling requests (For more information, see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md)). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage account's capacity and performance, review its [Insights in the Azure portal](/azure/azure-monitor/insights/storage-insights-overview).
+Storage accounts can handle a certain load of read and write requests before they start throttling requests (For more information, see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md)). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage account's capacity and performance, review its [Insights in the Azure portal](../insights/storage-insights-overview.md).
### Related charges Storage accounts are charged by the volume of stored data, the type of the storage, and the type of redundancy. For details see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
Storage accounts are charged by the volume of stored data, the type of the stora
## Next steps - Learn about [using Azure Private Link to securely connect networks to Azure Monitor](private-link-security.md)-- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
+- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
azure-percept Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/known-issues.md
Last updated 03/25/2021
# Azure Percept known issues
-Here are issues with the Azure Percept DK, Azure Percept Audio, or Azure Percept Studio that the product teams are aware of. Workarounds and troubleshooting steps are provided where possible. If you're blocked by any of these issues, you can post it as a question on [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-percept.html) or submit a customer support request in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+Here are issues with the Azure Percept DK, Azure Percept Audio, or Azure Percept Studio that the product teams are aware of. Workarounds and troubleshooting steps are provided where possible. If you're blocked by any of these issues, you can post it as a question on [Microsoft Q&A](/answers/topics/azure-percept.html) or submit a customer support request in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
|Area|Symptoms|Description of Issue|Workaround| |-||||
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
This article assumes your Bicep file and Azure DevOps organization are ready for
* You've configured a [service connection](/azure/devops/pipelines/library/connect-to-azure) to your Azure subscription. The tasks in the pipeline execute under the identity of the service principal. For steps to create the connection, see [Create a DevOps project](../templates/deployment-tutorial-pipeline.md#create-a-devops-project).
-* You have a [Bicep file](../templates/quickstart-create-bicep-use-visual-studio-code.md) that defines the infrastructure for your project.
+* You have a [Bicep file](./quickstart-create-bicep-use-visual-studio-code.md) that defines the infrastructure for your project.
## Create pipeline
An Azure CLI task takes the following inputs:
## Next steps * To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/).
-* To learn about using Bicep file with GitHub Actions, see [Deploy Bicep files by using GitHub Actions](./deploy-github-actions.md).
+* To learn about using Bicep file with GitHub Actions, see [Deploy Bicep files by using GitHub Actions](./deploy-github-actions.md).
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/conditional-resource-deployment.md
description: Describes how to conditionally deploy a resource in Bicep.
Previously updated : 06/29/2021 Last updated : 07/15/2021 # Conditional deployment in Bicep
-Sometimes you need to optionally deploy a resource in Bicep. Use the `if` keyword to specify whether the resource is deployed. The value for the condition resolves to true or false. When the value is true, the resource is created. When the value is false, the resource isn't created. The value can only be applied to the whole resource.
+Sometimes you need to optionally deploy a resource or module in Bicep. Use the `if` keyword to specify whether the resource or module is deployed. The value for the condition resolves to true or false. When the value is true, the resource is created. When the value is false, the resource isn't created. The value can only be applied to the whole resource or module.
> [!NOTE]
> Conditional deployment doesn't cascade to [child resources](child-resource-name-type.md). If you want to conditionally deploy a resource and its child resources, you must apply the same condition to each resource type.
resource dnsZone 'Microsoft.Network/dnszones@2018-05-01' = if (deployZone) {
} ```
+The next example conditionally deploys a module.
+
+```bicep
+param deployZone bool
+
+module dnsZone 'dnszones.bicep' = if (deployZone) {
+ name: 'myZoneModule'
+}
+```
+ Conditions may be used with dependency declarations. If the identifier of a conditional resource is specified in `dependsOn` of another resource (explicit dependency), the dependency is ignored if the condition evaluates to false at template deployment time. If the condition evaluates to true, the dependency is respected. Referencing a property of a conditional resource (implicit dependency) is allowed but may produce a runtime error in some cases.

## New or existing resource
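A common use of this pattern is letting a parameter choose between creating a resource and using an existing one; a minimal sketch (the parameter and resource names here are illustrative, not from the article):

```bicep
@allowed([
  'new'
  'existing'
])
param newOrExisting string = 'new'

// The storage account is deployed only when a new one is requested.
resource exampleStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = if (newOrExisting == 'new') {
  name: 'store${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```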
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-cli.md
description: Use Azure Resource Manager and Azure CLI to deploy resources to Azu
Previously updated : 06/01/2021 Last updated : 07/15/2021

# Deploy resources with Bicep and Azure CLI
az deployment group create --name addstorage --resource-group myResourceGroup \
Use double quotes around the JSON that you want to pass into the object.
+You can use a variable to contain the parameter values. In Bash, set the variable to all of the parameter values and add it to the deployment command.
+
+```azurecli-interactive
+params="prefix=start suffix=end"
+
+az deployment group create \
+ --resource-group testgroup \
+ --template-file <path-to-bicep> \
+ --parameters $params
+```
+
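To see why the unquoted `$params` expansion works in Bash, here's a small illustration (not part of the article) of the word splitting it relies on:

```shell
params="prefix=start suffix=end"

# Unquoted expansion undergoes word splitting, producing two separate
# key=value arguments -- the form that --parameters expects.
set -- $params
echo "$#"   # count of arguments
echo "$1"
echo "$2"
```

Quoting the expansion (`"$params"`) would instead pass one combined argument.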
+However, if you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, set the variable to a JSON string. Escape the quotation marks: `$params = '{ \"prefix\": {\"value\":\"start\"}, \"suffix\": {\"value\":\"end\"} }'`.
+
### Parameter files

Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file must be a local file. External parameter files aren't supported with Azure CLI. Bicep files use JSON parameter files.
Before deploying your Bicep file, you can preview the changes the Bicep file wil
## Deploy template specs
-Currently, Azure CLI doesn't support creating template specs by providing Bicep files. However you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here is an [example](https://github.com/Azure/azure-docs-json-samples/blob/master/create-template-spec-using-template/azuredeploy.bicep). You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
+Currently, Azure CLI doesn't support creating template specs by providing Bicep files. However, you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here's an [example](https://github.com/Azure/azure-docs-json-samples/blob/master/create-template-spec-using-template/azuredeploy.bicep). You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
## Deployment name
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/modules.md
description: Describes how to define and consume a module, and how to use module
Previously updated : 06/03/2021 Last updated : 07/15/2021

# Use Bicep modules
output storageEndpoint object = stgModule.outputs.storageEndpoint
] ... ```
+- The **_params_** property contains any parameters to pass to the module file. These parameters match the parameters defined in the Bicep file.
To get an output value from a module, retrieve the property value with syntax like: `stgModule.outputs.storageEndpoint` where `stgModule` is the identifier of the module.
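Putting the two together, a consuming file might look like this sketch (the module path and the `storagePrefix` parameter are assumptions about the module file, not taken from the article):

```bicep
// Assumes storageAccount.bicep declares a 'storagePrefix' parameter
// and a 'storageEndpoint' output.
module stgModule 'storageAccount.bicep' = {
  name: 'storageDeploy'
  params: {
    storagePrefix: 'examplestore'
  }
}

// Surface the module's output from the parent Bicep file.
output storageEndpoint object = stgModule.outputs.storageEndpoint
```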
+You can conditionally deploy a module. Use the same **if** syntax as you would use when [conditionally deploying a resource](conditional-resource-deployment.md).
+
+```bicep
+param deployZone bool
+
+module dnsZone 'dnszones.bicep' = if (deployZone) {
+ name: 'myZoneModule'
+}
+```
+
## Configure module scopes

When declaring a module, you can supply a _scope_ property to set the scope at which to deploy the module:
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers that are marked with **- registered** are registered by
| Microsoft.DesktopVirtualization | [Windows Virtual Desktop](../../virtual-desktop/index.yml) |
| Microsoft.Devices | [Azure IoT Hub](../../iot-hub/index.yml)<br />[Azure IoT Hub Device Provisioning Service](../../iot-dps/index.yml) |
| Microsoft.DevOps | [Azure DevOps](/azure/devops/) |
-| Microsoft.DevSpaces | [Azure Dev Spaces](../../dev-spaces/index.yml) |
+| Microsoft.DevSpaces | [Azure Dev Spaces](/previous-versions/azure/dev-spaces/) |
| Microsoft.DevTestLab | [Azure Lab Services](../../lab-services/index.yml) |
| Microsoft.DigitalTwins | [Azure Digital Twins](../../digital-twins/overview.md) |
| Microsoft.DocumentDB | [Azure Cosmos DB](../../cosmos-db/index.yml) |
ResourceType : Microsoft.KeyVault/vaults
## Next steps
-For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md).
+For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md).
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | controllers | Yes | Yes | No |
-> | AKS cluster | **pending** | **pending** | No<br/><br/> [Learn more](../../dev-spaces/index.yml) about moving to another region.
+> | AKS cluster | **pending** | **pending** | No<br/><br/> [Learn more](/previous-versions/azure/dev-spaces/) about moving to another region.
## Microsoft.DevTestLab
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization
description: Shows how to apply tags to organize Azure resources for billing and managing.
Previously updated : 05/05/2021 Last updated : 07/15/2021
resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
[!INCLUDE [resource-manager-tag-resource](../../../includes/resource-manager-tag-resources.md)]
+Some resources, such as [IP Groups in Azure Firewall](../../firewall/ip-groups.md), don't currently support updating tags through the portal. Instead, use the update commands for those resources. For example, you can update tags for an IP group with the [az network ip-group update](/cli/azure/network/ip-group#az_network_ip_group_update) command.
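As a sketch, updating tags on an IP group from the CLI might look like this (the group and resource group names are illustrative):

```azurecli
az network ip-group update \
  --name myIpGroup \
  --resource-group myResourceGroup \
  --tags Dept=Finance Environment=Production
```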
+
## REST API

To work with tags through the Azure REST API, use:
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
Title: Deploy resources with Azure CLI and template
description: Use Azure Resource Manager and Azure CLI to deploy resources to Azure. The resources are defined in a Resource Manager template.
Previously updated : 05/07/2021 Last updated : 07/15/2021

# Deploy resources with ARM templates and Azure CLI
az deployment group create --name addstorage --resource-group myResourceGroup \
Use double quotes around the JSON that you want to pass into the object.
+You can use a variable to contain the parameter values. In Bash, set the variable to all of the parameter values and add it to the deployment command.
+
+```azurecli-interactive
+params="prefix=start suffix=end"
+
+az deployment group create \
+ --resource-group testgroup \
+ --template-file <path-to-template> \
+ --parameters $params
+```
+
+However, if you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, set the variable to a JSON string. Escape the quotation marks: `$params = '{ \"prefix\": {\"value\":\"start\"}, \"suffix\": {\"value\":\"end\"} }'`.
+
### Parameter files

Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file must be a local file. External parameter files aren't supported with Azure CLI.
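For illustration, a parameter file supplying the `prefix` and `suffix` values used earlier might look like the following sketch, in the standard deployment-parameters format (the file name is an assumption):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "prefix": { "value": "start" },
    "suffix": { "value": "end" }
  }
}
```

You would then reference it in the deployment command with `--parameters @params.json`.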
azure-signalr Howto Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/howto-shared-private-endpoints.md
Title: Secure outbound traffic through Shared Private Endpoints
+ Title: Secure Azure SignalR outbound traffic through Shared Private Endpoints
description: How to secure outbound traffic through Shared Private Endpoints to avoid traffic going to the public network
Last updated 07/08/2021
-# Secure outbound traffic through Shared Private Endpoints
+# Secure Azure SignalR outbound traffic through Shared Private Endpoints
If you're using [serverless mode](concept-service-mode.md#serverless-mode) in Azure SignalR Service, you might have outbound traffic to upstream endpoints. Upstream endpoints, such as Azure Web Apps and Azure Functions, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. You can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) to reach these endpoints.
azure-signalr Signalr Howto Troubleshoot Live Trace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-howto-troubleshoot-live-trace.md
The real time live traces captured by live trace tool contain detailed informati
| User ID | Identity of the user |
| IP | The IP address of the client |
| Server Sticky | Routing mode of client. Allowed values are `Disabled`, `Preferred` and `Required`. For more information, see [ServerStickyMode](https://github.com/Azure/azure-signalr/blob/master/docs/run-asp-net-core.md#serverstickymode) |
-| Transport | The transport that the client can use to send HTTP requests. Allowed values are `WebSockets`, `ServerSentEvents` and `LongPolling`. For more information, see [HttpTransportType](https://docs.microsoft.com/dotnet/api/microsoft.aspnetcore.http.connections.httptransporttype) |
+| Transport | The transport that the client can use to send HTTP requests. Allowed values are `WebSockets`, `ServerSentEvents` and `LongPolling`. For more information, see [HttpTransportType](/dotnet/api/microsoft.aspnetcore.http.connections.httptransporttype) |
## Next Steps

In this guide, you learned how to use the live trace tool. You can also learn how to handle common issues:

* Troubleshooting guides: For how to troubleshoot typical issues based on live traces, see our [troubleshooting guide](./signalr-howto-troubleshoot-guide.md).
-* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see our [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md).
+* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see our [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md).
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
You can manage Azure SQL Database auditing using [Azure Resource Manager](../../
## See also

- Data Exposed episode [What's New in Azure SQL Auditing](https://channel9.msdn.com/Shows/Data-Exposed/Whats-New-in-Azure-SQL-Auditing) on Channel 9.
-- [Auditing for SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/managed-instance/auditing-configure)
-- [Auditing for SQL Server](https://docs.microsoft.com/sql/relational-databases/security/auditing/sql-server-audit-database-engine)
+- [Auditing for SQL Managed Instance](../managed-instance/auditing-configure.md)
+- [Auditing for SQL Server](/sql/relational-databases/security/auditing/sql-server-audit-database-engine)
azure-sql Authentication Azure Ad Only Authentication Create Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-only-authentication-create-server.md
This how-to guide outlines the steps to create an [Azure SQL logical server](log
## Permissions
-To provision an Azure SQL logical server or managed instance, you'll need to have the appropriate permissions to create these resources. Azure users with higher permissions, such as subscription [Owners](../../role-based-access-control/built-in-roles.md#owner), [Contributors](../../role-based-access-control/built-in-roles.md#contributor), [Service Administrators](/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles), and [Co-Administrators](/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles) have the privilege to create a SQL server or managed instance. To create these resources with the least privileged Azure RBAC role, use the [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role for SQL Database and [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role for Managed Instance.
+To provision an Azure SQL logical server or managed instance, you'll need to have the appropriate permissions to create these resources. Azure users with higher permissions, such as subscription [Owners](../../role-based-access-control/built-in-roles.md#owner), [Contributors](../../role-based-access-control/built-in-roles.md#contributor), [Service Administrators](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles), and [Co-Administrators](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles) have the privilege to create a SQL server or managed instance. To create these resources with the least privileged Azure RBAC role, use the [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role for SQL Database and [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role for Managed Instance.
The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) Azure RBAC role doesn't have enough permissions to create a server or instance with Azure AD-only authentication enabled. The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) role will be required to manage the Azure AD-only authentication feature after server or instance creation.
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
The Azure platform provides a number of PaaS capabilities that are added as an a
| [VNet](../../virtual-network/virtual-networks-overview.md) | Partial, it enables restricted access using [VNet Endpoints](vnet-service-endpoint-rule-overview.md) | Yes, SQL Managed Instance is injected in customer's VNet. See [subnet](../managed-instance/transact-sql-tsql-differences-sql-server.md#subnet) and [VNet](../managed-instance/transact-sql-tsql-differences-sql-server.md#vnet) |
| VNet Service endpoint | [Yes](vnet-service-endpoint-rule-overview.md) | No |
| VNet Global peering | Yes, using [Private IP and service endpoints](vnet-service-endpoint-rule-overview.md) | Yes, using [Virtual network peering](https://techcommunity.microsoft.com/t5/azure-sql/new-feature-global-vnet-peering-support-for-azure-sql-managed/ba-p/1746913). |
-| [Private connectivity](../../private-link/private-link-overview.md) | Yes, using [Private Link](/azure/private-link/private-endpoint-overview) | Yes, using VNet. |
+| [Private connectivity](../../private-link/private-link-overview.md) | Yes, using [Private Link](../../private-link/private-endpoint-overview.md) | Yes, using VNet. |
## Tools
For more information about Azure SQL Database and Azure SQL Managed Instance, se
- [What is Azure SQL Database?](sql-database-paas-overview.md)
- [What is Azure SQL Managed Instance?](../managed-instance/sql-managed-instance-paas-overview.md)
-- [What is an Azure SQL Managed Instance pool?](../managed-instance/instance-pools-overview.md)
+- [What is an Azure SQL Managed Instance pool?](../managed-instance/instance-pools-overview.md)
azure-sql Intelligent Insights Use Diagnostics Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/intelligent-insights-use-diagnostics-log.md
The last part of the Intelligent Insights performance log pertains to the automa
"rootCauseAnalysis_s" : "High data IO caused performance to degrade. It seems that this database is missing some indexes that could help."
```
-You can use the Intelligent Insights performance log with [Azure Monitor logs](/azure/log-analytics/log-analytics-azure-sql) or a third-party solution for custom DevOps alerting and reporting capabilities.
+You can use the Intelligent Insights performance log with [Azure Monitor logs](../../azure-monitor/insights/azure-sql.md) or a third-party solution for custom DevOps alerting and reporting capabilities.
## Next steps
azure-sql Ledger Digest Management And Database Verification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/ledger-digest-management-and-database-verification.md
EXECUTE sp_verify_database_ledger N'
"digest_time": "2020-11-12T18:43:30.4701575" } ]
+'
```

Return codes for `sp_verify_database_ledger` and `sp_verify_database_ledger_from_digest_storage` are `0` (success) or `1` (failure).
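Since the procedures report success through their return value, a caller can branch on it; a minimal T-SQL sketch (the digest argument is elided):

```sql
DECLARE @result int;

-- Returns 0 on success, 1 on failure.
EXECUTE @result = sp_verify_database_ledger N'<digest-json>';

IF @result = 0
    PRINT 'Ledger verification succeeded.';
ELSE
    PRINT 'Ledger verification failed.';
```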
Return codes for `sp_verify_database_ledger` and `sp_verify_database_ledger_from
- [Azure SQL Database ledger overview](ledger-overview.md)
- [Updatable ledger tables](ledger-updatable-ledger-tables.md)
- [Append-only ledger tables](ledger-append-only-ledger-tables.md)
-- [Database ledger](ledger-database-ledger.md)
+- [Database ledger](ledger-database-ledger.md)
azure-sql Service Tiers General Purpose Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-general-purpose-business-critical.md
The following factors affect the amount of storage used for data and log files,
> [!IMPORTANT]
> In the General Purpose and Business Critical tiers, you are charged for the maximum storage size configured for a database, elastic pool, or managed instance. In the Hyperscale tier, you are charged for the allocated data storage.
-To monitor the current allocated and used data storage size in SQL Database, use *allocated_data_storage* and *storage* Azure Monitor [metrics](/azure/azure-monitor/essentials/metrics-supported#microsoftsqlserversdatabases) respectively. To monitor total consumed instance storage size for SQL Managed Instance, use the *storage_space_used_mb* [metric](/azure/azure-monitor/essentials/metrics-supported#microsoftsqlmanagedinstances). To monitor the current allocated and used storage size of individual data and log files in a database using T-SQL, use the [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) view and the [FILEPROPERTY(... , 'SpaceUsed')](/sql/t-sql/functions/fileproperty-transact-sql) function.
+To monitor the current allocated and used data storage size in SQL Database, use *allocated_data_storage* and *storage* Azure Monitor [metrics](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserversdatabases) respectively. To monitor total consumed instance storage size for SQL Managed Instance, use the *storage_space_used_mb* [metric](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlmanagedinstances). To monitor the current allocated and used storage size of individual data and log files in a database using T-SQL, use the [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) view and the [FILEPROPERTY(... , 'SpaceUsed')](/sql/t-sql/functions/fileproperty-transact-sql) function.
> [!TIP]
> Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).
For details about the specific compute and storage sizes available in vCore serv
- [vCore-based resource limits for Azure SQL Database](resource-limits-vcore-single-databases.md).
- [vCore-based resource limits for pooled databases in Azure SQL Database](resource-limits-vcore-elastic-pools.md).
-- [vCore-based resource limits for Azure SQL Managed Instance](../managed-instance/resource-limits.md).
+- [vCore-based resource limits for Azure SQL Managed Instance](../managed-instance/resource-limits.md).
azure-sql Sql Server Iaas Agent Extension Automate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
To install the SQL Server IaaS extension to SQL Server on Azure VMs, see the art
For more information about running SQL Server on Azure Virtual Machines, see the [What is SQL Server on Azure Virtual Machines?](sql-server-on-azure-vm-iaas-what-is-overview.md).
-To learn more, see [frequently asked questions](frequently-asked-questions-faq.yml).
+To learn more, see [frequently asked questions](frequently-asked-questions-faq.yml).
azure-video-analyzer Deploy On Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/deploy-on-stack-edge.md
For Video Analyzer, we will deploy via IoT Hub, but the Azure Stack Edge resourc
* Video Analyzer account
- This [cloud service](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/overview) is used to register the Video Analyzer edge module, and for playing back recorded video and video analytics
+ This [cloud service](./overview.md) is used to register the Video Analyzer edge module, and for playing back recorded video and video analytics
* Managed identity
- This is the user assigned [managed identity](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) used to manage access to the above storage account.
+ This is the user assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) used to manage access to the above storage account.
* An [Azure Stack Edge](../../databox-online/azure-stack-edge-gpu-deploy-prep.md) resource
* [An IoT Hub](../../iot-hub/iot-hub-create-through-portal.md)
* Storage account
Follow these instructions to connect to your IoT hub by using the Azure IoT Tool
## Next steps
-[Detect motion and emit events](detect-motion-emit-events-quickstart.md)
-
+[Detect motion and emit events](detect-motion-emit-events-quickstart.md)
azure-vmware Attach Disk Pools To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md
The diagram shows how disk pools work with Azure VMware Solution hosts. Each iSC
## Supported regions
-You can only connect the disk pool to an Azure VMware Solution private cloud in the same region. For a list of supported regions, see [Regional availability](/azure/virtual-machines/disks-pools#regional-availability). If your private cloud is deployed in a non-supported region, you can redeploy it in a supported region. Azure VMware Solution private cloud and disk pool colocation provide the best performance with minimal network latency.
+You can only connect the disk pool to an Azure VMware Solution private cloud in the same region. For a list of supported regions, see [Regional availability](../virtual-machines/disks-pools.md#regional-availability). If your private cloud is deployed in a non-supported region, you can redeploy it in a supported region. Azure VMware Solution private cloud and disk pool colocation provide the best performance with minimal network latency.
## Prerequisites
You can only connect the disk pool to an Azure VMware Solution private cloud in
- [Azure VMware Solution private cloud](deploy-azure-vmware-solution.md) deployed with a [virtual network configured](deploy-azure-vmware-solution.md#step-3-connect-to-azure-virtual-network-with-expressroute). For more information, see [Network planning checklist](tutorial-network-checklist.md) and [Configure networking for your VMware private cloud](tutorial-configure-networking.md).
- - If you select ultra disks, use Ultra Performance for the Azure VMware Solution private cloud and then [enable ExpressRoute FastPath](/azure/expressroute/expressroute-howto-linkvnet-arm#configure-expressroute-fastpath).
+ - If you select ultra disks, use Ultra Performance for the Azure VMware Solution private cloud and then [enable ExpressRoute FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
- If you select premium SSDs, use Standard (1 Gbps) for the Azure VMware Solution private cloud. You must use Standard\_DS##\_v3 to host iSCSI. If you encounter quota issues, request an increase in [vCPU quota limits](../azure-portal/supportability/per-vm-quota-requests.md) per Azure VM series for Dsv3 series.
Now that you've attached a disk pool to your Azure VMware Solution hosts, you ma
- [Managing an Azure disk pool](../virtual-machines/disks-pools-manage.md). Once you've deployed a disk pool, there are various management actions available to you. You can add or remove a disk to or from a disk pool, update iSCSI LUN mapping, or add ACLs.
-- [Deleting a disk pool](/azure/virtual-machines/disks-pools-deprovision#delete-a-disk-pool). When you delete a disk pool, all the resources in the managed resource group are also deleted.
+- [Deleting a disk pool](../virtual-machines/disks-pools-deprovision.md#delete-a-disk-pool). When you delete a disk pool, all the resources in the managed resource group are also deleted.
-- [Disabling iSCSI support on a disk](/azure/virtual-machines/disks-pools-deprovision#disable-iscsi-support). If you disable iSCSI support on a disk pool, you effectively can no longer use a disk pool.
+- [Disabling iSCSI support on a disk](../virtual-machines/disks-pools-deprovision.md#disable-iscsi-support). If you disable iSCSI support on a disk pool, you effectively can no longer use a disk pool.
- [Moving disk pools to a different subscription](../virtual-machines/disks-pools-move-resource.md). Move an Azure disk pool to a different subscription, which involves moving the disk pool itself, contained disks, managed resource group, and all the resources.
azure-vmware Backup Azure Vmware Solution Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/backup-azure-vmware-solution-virtual-machines.md
You can restore individual files from a protected VM recovery point. This featur
Now that you've covered backing up your Azure VMware Solution VMs with Azure Backup Server, you may want to learn about:

- [Troubleshooting when setting up backups in Azure Backup Server](../backup/backup-azure-mabs-troubleshoot.md).
-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
+- [Lifecycle management of Azure VMware Solution VMs](./integrate-azure-native-services.md).
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
Title: Configure DHCP for Azure VMware Solution
description: Learn how to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server. Previously updated : 05/28/2021 Last updated : 07/13/2021 # Customer intent: As an Azure service administrator, I want to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server.
In this how-to article, you'll use NSX-T Manager to configure DHCP for Azure VMw
>[!IMPORTANT]
>For clouds created on or after July 1, 2021, the simplified view of NSX-T operations must be used to configure DHCP on the default Tier-1 Gateway in your environment.
->[!IMPORTANT]
+>
>DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch network when the DHCP server is in the on-premises datacenter. NSX, by default, blocks all DHCP requests from traversing the L2 stretch. For the solution, see the [Configure DHCP on L2 stretched VMware HCX networks](configure-l2-stretched-vmware-hcx-networks.md) procedure.
azure-vmware Configure Dns Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-dns-azure-vmware-solution.md
+
+ Title: Configure DNS forwarder for Azure VMware Solution
+description: Learn how to configure DNS forwarder for Azure VMware Solution using the Azure portal.
++ Last updated : 07/15/2021+
+#Customer intent: As an Azure service administrator, I want to define conditional forwarding rules for a desired domain name to a desired set of private DNS servers via the NSX-T DNS Service.
+++
+# Configure a DNS forwarder in the Azure portal
+
+>[!IMPORTANT]
+>For Azure VMware Solution private clouds created on or after July 1, 2021, you now have the ability to configure private DNS resolution. For private clouds created before July 1, 2021, that need private DNS resolution, open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) and request Private DNS configuration.
+
+By default, Azure VMware Solution management components such as vCenter can only resolve name records available through Public DNS. However, certain hybrid use cases require Azure VMware Solution management components to resolve name records from privately hosted DNS to properly function, including customer-managed systems such as vCenter and Active Directory.
+
+Private DNS for Azure VMware Solution management components lets you define conditional forwarding rules for the desired domain name to a selected set of private DNS servers through the NSX-T DNS Service.
+
+This capability uses the DNS Forwarder Service in NSX-T. A DNS service and default DNS zone are provided as part of your private cloud. To enable Azure VMware Solution management components to resolve records from your private DNS systems, you must define an FQDN zone and apply it to the NSX-T DNS Service. The DNS Service conditionally forwards DNS queries for each zone based on the external DNS servers defined in that zone.
+
+>[!NOTE]
+>The DNS Service is associated with up to five FQDN zones. Each FQDN zone is associated with up to three DNS servers.
+
+>[!TIP]
+>If desired, you can also use the conditional forwarding rules for workload segments by configuring virtual machines on those segments to use the NSX-T DNS Service IP address as their DNS server.
++
+## Architecture
+
+The diagram shows that the NSX-T DNS Service can forward DNS queries to DNS systems hosted in Azure and on-premises environments.
+++
+## Configure DNS forwarder
+
+1. In your Azure VMware Solution private cloud, under **Workload Networking**, select **DNS** > **DNS zones**. Then select **Add**.
+
+ >[!NOTE]
+ >For private clouds created on or after July 1, 2021, the default DNS zone is created for you during the private cloud creation.
++
+ :::image type="content" source="media/networking/configure-dns-forwarder-1.png" alt-text="Screenshot showing how to add DNS zones to an Azure VMware Solution private cloud.":::
+
+1. Select **FQDN zone** and provide a name, the FQDN zone, and up to three DNS server IP addresses in the format of **10.0.0.53**. Then select **OK**.
+
+ It takes several minutes to complete, and you can follow the progress from **Notifications**.
+
+ :::image type="content" source="media/networking/nsxt-workload-networking-configure-fqdn-zone.png" alt-text="Screenshot showing the required information needed to add an FQDN zone.":::
+
+ >[!IMPORTANT]
+    >While NSX-T allows spaces and other non-alphanumeric characters in a DNS zone name, certain NSX-T resources, such as a DNS zone, are mapped to Azure resources whose names don't permit certain characters.
+ >
+ >As a result, DNS zone names that would otherwise be valid in NSX-T may need adjustment to adhere to the [Azure resource naming conventions](../azure-resource-manager/management/resource-name-rules.md#microsoftresources).
+
+    You'll see a message in **Notifications** when the DNS zone has been created.
+
+1. Ignore the message about a default DNS zone. A DNS zone is created for you as part of your private cloud.
+
+1. Select the **DNS service** tab and then select **Edit**.
+
+ >[!TIP]
+ >For private clouds created on or after July 1, 2021, you can ignore the message about a default DNS zone as one is created for you during private cloud creation.
++
+ >[!IMPORTANT]
+ >While certain operations in your private cloud may be performed from NSX-T Manager, for private clouds created on or after July 1, 2021, you _must_ edit the DNS service from the Simplified Networking experience in the Azure portal for any configuration changes made to the default Tier-1 Gateway.
+
+ :::image type="content" source="media/networking/configure-dns-forwarder-2.png" alt-text="Screenshot showing the DNS service tab with the Edit button selected.":::
+
+1. From the **FQDN zones** drop-down, select the newly created FQDN and then select **OK**.
+
+ It takes several minutes to complete and once finished, you'll see the *Completed* message from **Notifications**.
+
+ :::image type="content" source="media/networking/configure-dns-forwarder-3.png" alt-text="Screenshot showing the selected FQDN for the DNS service.":::
+
+ At this point, management components in your private cloud should be able to resolve DNS entries from the FQDN zone provided to the NSX-T DNS Service.
+
+1. Repeat the above steps for other FQDN zones, including any applicable reverse lookup zones.
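Under the covers, an FQDN zone like the one created above corresponds to a DNS forwarder zone object in the NSX-T Policy API. A minimal sketch of such a payload follows; the zone name, domain, and server IPs are hypothetical, and the exact field names may vary by NSX-T version:

```json
{
  "resource_type": "PolicyDnsForwarderZone",
  "display_name": "contoso-corp-zone",
  "dns_domain_names": ["contoso.corp"],
  "upstream_servers": ["10.0.0.53", "10.0.0.54", "10.0.0.55"]
}
```

Note the shape mirrors the portal limits quoted earlier: one FQDN zone with up to three DNS server addresses.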
++
+## Verify name resolution operations
+
+After you've configured the DNS forwarder, you'll have a few options available to verify name resolution operations.
+
+### NSX-T Manager
+
+NSX-T Manager provides the DNS Forwarder Service statistics at the global service level and on a per-zone basis.
+
+1. In NSX-T Manager, select **Networking** > **DNS**, and then expand your DNS Forwarder Service.
+
+ :::image type="content" source="media/networking/nsxt-manager-dns-services.png" alt-text="Screenshot showing the DNS Services tab in NSX-T Manager.":::
+
+1. Select **View Statistics** and then from the **Zone Statistics** drop-down, select your FQDN Zone.
+
+ The top half shows the statistics for the entire service, and the bottom half shows the statistics for your specified zone. In this example, you can see the forwarded queries to the DNS services specified during the configuration of the FQDN zone.
+
+ :::image type="content" source="media/networking/nsxt-manager-dns-services-statistics.png" alt-text="Screenshot showing the DNS Forwarder statistics.":::
++
+### PowerCLI
+
+The NSX-T Policy API lets you run nslookup commands from the NSX-T DNS Forwarder Service. The required cmdlets are part of the `VMware.VimAutomation.Nsxt` module in PowerCLI. The following example demonstrates output from version 12.3.0 of that module.
+
+1. Connect to your NSX-T Server.
+
+ >[!TIP]
+ >You can obtain the IP address of your NSX-T Server from the Azure portal under **Manage** > **Identity**.
+ >
+ >:::image type="content" source="media/networking/configure-dns-forwarder-4.png" alt-text="Screenshot showing the NSX-T Server IP address.":::
+
+ ```powershell
+ Connect-NsxtServer -Server 10.103.64.3
+ ```
+
+1. Obtain a proxy to the DNS Forwarder's nslookup service.
+
+ ```powershell
+ $nslookup = Get-NsxtPolicyService -Name com.vmware.nsx_policy.infra.tier_1s.dns_forwarder.nslookup
+ ```
+
+1. Perform lookups from the DNS Forwarder Service.
+
+ ```powershell
+ $response = $nslookup.get('TNT86-T1', 'vc01.contoso.corp')
+ ```
+
+ The first parameter in the command is the ID for your private cloud's T1 gateway, which you can obtain from the DNS service tab in the Azure portal.
+
+1. Obtain a raw answer from the lookup using the following properties of the response.
+
+ ```powershell
+    $response.dns_answer_per_enforcement_point.raw_answer
+    ; <<>> DiG 9.10.3-P4-Ubuntu <<>> @10.103.64.192 -b 10.103.64.192 vc01.contoso.corp +timeout=5 +tries=3 +nosearch
+    ; (1 server found)
+    ;; global options: +cmd
+    ;; Got answer:
+    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10684
+    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
+    ;; OPT PSEUDOSECTION:
+    ; EDNS: version: 0, flags:; udp: 4096
+    ;; QUESTION SECTION:
+    ;vc01.contoso.corp.    IN    A
+    ;; ANSWER SECTION:
+    vc01.contoso.corp.    3046    IN    A    172.21.90.2
+    ;; Query time: 0 msec
+    ;; SERVER: 10.103.64.192:53(10.103.64.192)
+    ;; WHEN: Thu Jul 01 23:44:36 UTC 2021
+    ;; MSG SIZE  rcvd: 62
+ ```
+
+ In this example, you can see an answer for the query of vc01.contoso.corp showing an A record with the address 172.21.90.2. Also, this example shows a cached response from the DNS Forwarder Service, so your output may vary slightly.
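If you want to consume `raw_answer` programmatically rather than read it by eye, a small parser sketch may help (illustrative only; it assumes the dig-style output format shown above):

```python
import re

def parse_a_records(raw_answer: str) -> dict[str, str]:
    """Extract name -> IPv4 pairs from the ANSWER SECTION of dig-style output."""
    records = {}
    in_answer = False
    for line in raw_answer.splitlines():
        if line.startswith(";; ANSWER SECTION"):
            in_answer = True
            continue
        if in_answer:
            if line.startswith(";;") or not line.strip():
                break  # next section reached
            m = re.match(r"(\S+?)\.?\s+\d+\s+IN\s+A\s+(\S+)", line)
            if m:
                records[m.group(1).rstrip(".")] = m.group(2)
    return records

sample = """;; QUESTION SECTION:
;vc01.contoso.corp. IN A
;; ANSWER SECTION:
vc01.contoso.corp. 3046 IN A 172.21.90.2
;; Query time: 0 msec"""
print(parse_a_records(sample))  # {'vc01.contoso.corp': '172.21.90.2'}
```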
azure-vmware Configure Nsx Network Components Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-nsx-network-components-azure-portal.md
- Title: Configure NSX network components using Azure VMware Solution
-description: Learn how to use the Azure VMware Solution to configure NSX-T network segments.
- Previously updated : 06/28/2021-
-# Customer intent: As an Azure service administrator, I want to configure NSX network components using a simplified view of NSX-T operations a VMware administrator needs daily. The simplified view is targeted at users unfamiliar with NSX-T Manager.
---
-# Configure NSX network components using Azure VMware Solution
-
-An Azure VMware Solution private cloud comes with NSX-T by default. The private cloud comes pre-provisioned with an NSX-T Tier-0 gateway in **Active/Active** mode and a default NSX-T Tier-1 gateway in Active/Standby mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
-
-After deploying Azure VMware Solution, you can configure the necessary NSX-T objects from the Azure portal. It presents a simplified view of NSX-T operations that a VMware administrator needs daily, targeted at users who aren't familiar with NSX-T Manager.
-
-You'll have four options to configure NSX-T components in the Azure VMware Solution console:
-
-- **Segments** - Create segments that display in NSX-T Manager and vCenter.
-
-- **DHCP** - Create a DHCP server or DHCP relay if you plan to use DHCP.
-
-- **Port mirroring** - Create port mirroring to help troubleshoot network issues.
-
-- **DNS** - Create a DNS forwarder to send DNS requests to a designated DNS server for resolution.
-
->[!IMPORTANT]
->You can still use NSX-T Manager for the advanced settings mentioned and other NSX-T features.
-
->[!IMPORTANT]
->For clouds created on or after July 1, 2021, the simplified view of NSX-T operations must be used to configure components on the default Tier-1 Gateway in your environment.
-
-## Prerequisites
-Virtual machines (VMs) created or migrated to the Azure VMware Solution private cloud should be attached to a network segment.
-
-## Create an NSX-T segment in the Azure portal
-You can create and configure an NSX-T segment from the Azure VMware Solution console in the Azure portal. These segments are connected to the default Tier-1 gateway, and the workloads on these segments get East-West and North-South connectivity. Once you create the segment, it displays in NSX-T Manager and vCenter.
-
->[!NOTE]
->If you plan to use DHCP, you'll need to [configure a DHCP server or DHCP relay](#create-a-dhcp-server-or-dhcp-relay-using-the-azure-portal) before you can create and configure an NSX-T segment.
-
-1. In your Azure VMware Solution private cloud, under **Workload Networking**, select **Segments** > **Add**.
-
-2. Provide the details for the new logical segment and select **OK**.
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/add-new-nsxt-segment.png" alt-text="Screenshot showing how to add a new NSX-T segment in the Azure portal.":::
-
- - **Segment name** - Name of the logical switch that is visible in vCenter.
-
- - **Subnet gateway** - Gateway IP address for the logical switch's subnet with a subnet mask. VMs are attached to a logical switch, and all VMs connecting to this switch belong to the same subnet. Also, all VMs attached to this logical segment must carry an IP address from the same segment.
-
- - **DHCP** (optional) - DHCP ranges for a logical segment. A [DHCP server or DHCP relay](#create-a-dhcp-server-or-dhcp-relay-using-the-azure-portal) must be configured to consume DHCP on Segments.
-
- - **Connected gateway** - *Selected by default and is read-only.* Tier-1 gateway and type of segment information.
-
- - **T1** - Name of the Tier-1 gateway in NSX-T Manager. A private cloud comes with an NSX-T Tier-0 gateway in Active/Active mode and a default NSX-T Tier-1 gateway in Active/Standby mode. Segments created through the Azure VMware Solution console only connect to the default Tier-1 gateway, and the workloads of these segments get East-West and North-South connectivity. You can only create more Tier-1 gateways through NSX-T Manager. Tier-1 gateways created from the NSX-T Manager console are not visible in the Azure VMware Solution console.
-
- - **Type** - Overlay segment supported by Azure VMware Solution.
-
-The segment is now visible in the Azure VMware Solution console, NSX-T Manger, and vCenter.
-
-## Create a DHCP server or DHCP relay using the Azure portal
-
-You can create a DHCP server or relay directly from Azure VMware Solution in the Azure portal. The DHCP server or relay connects to the Tier-1 gateway created when you deployed Azure VMware Solution. All the segments where you gave DHCP ranges will be part of this DHCP. After you've created a DHCP server or DHCP relay, you must define a subnet or range on segment level to consume it.
-
-1. In your Azure VMware Solution private cloud, under **Workload Networking**, select **DHCP** > **Add**.
-
-2. Select either **DHCP Server** or **DHCP Relay** and then provide a name for the server or relay and three IP addresses.
-
- >[!NOTE]
- >For DHCP relay, you only require one IP address for a successful configuration.
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/add-dhcp-server-relay.png" alt-text="Screenshot showing how to add a DHCP server or DHCP relay in Azure VMware Solutions.":::
-
-4. Complete the DHCP configuration by [providing DHCP ranges on the logical segments](#create-an-nsx-t-segment-in-the-azure-portal) and then select **OK**.
-
-## Configure port mirroring in the Azure portal
-
-In this step, you'll configure port mirroring to monitor network traffic that involves forwarding a copy of each packet from one network switch port to another. This option places a protocol analyzer on the port that receives the mirrored data. It analyzes traffic from a source, a VM, or a group of VMs, and then sends it to a defined destination.
-
-To set up port mirroring in the Azure VMware Solution console, you'll:
-
-* Create the source and destination VMs or VM groups - The source group has a single VM or multiple VMs where the traffic is mirrored.
-
-* Create a port mirroring profile - You'll define the traffic direction for the source and destination VM groups.
--
-1. In your Azure VMware Solution private cloud, under **Workload Networking**, select **Port mirroring** > **VM groups** > **Add**.
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/add-port-mirroring-vm-groups.png" alt-text="Screenshot showing how to create a VM group for port mirroring.":::
-
-1. Provide a name for the new VM group, select VMs from the list, and then **OK**.
-
-1. Repeat these steps to create the destination VM group.
-
- >[!NOTE]
- >Before creating a port mirroring profile, make sure that you've created both the source and destination VM groups.
-
-1. Select **Port mirroring** > **Port mirroring** > **Add** and then provide:
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/add-port-mirroring-profile.png" alt-text="Screenshot showing the information required for the port mirroring profile.":::
-
- - **Port mirroring name** - Descriptive name for the profile.
-
- - **Direction** - Select from Ingress, Egress, or Bi-directional.
-
- - **Source** - Select the source VM group.
-
- - **Destination** - Select the destination VM group.
-
- - **Description** - Enter a description for the port mirroring.
-
-1. Select **OK** to complete the profile.
-
- The profile and VM groups are visible in the Azure VMware Solution console.
-
-## Configure a DNS forwarder in the Azure portal
-
-In this step, you'll configure a DNS forwarder where specific DNS requests get forwarded to a designated DNS server for resolution. A DNS forwarder is associated with a **default DNS zone** and up to three **FQDN zones**.
-
-When a DNS query is received, a DNS forwarder compares the domain name with the domain names in the FQDN DNS zone. The query gets forwarded to the DNS servers specified in the FQDN DNS zone if a match is found. Otherwise, the query gets forwarded to the DNS servers specified in the default DNS zone.
-
->[!NOTE]
->To send DNS queries to the upstream server, a default DNS zone must be defined before configuring an FQDN zone.
-
->[!TIP]
->You can also use the [NSX-T Manager console to configure a DNS forwarder](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.5/administration/GUID-A0172881-BB25-4992-A499-14F9BE3BE7F2.html).
--
-1. In your Azure VMware Solution private cloud, under **Workload Networking**, select **DNS** > **DNS zones** > **Add**.
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-dns-zones.png" alt-text="Screenshot showing how to add DNS zones to an Azure VMware Solution private cloud.":::
-
-1. Select **Default DNS zone** and provide a name and up to three DNS server IP addresses in the format of **8.8.8.8**.
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-dns-zones.png" alt-text="Screenshot showing the required information needed to add a default DNS zone.":::
-
-1. Select **FQDN zone** and provide a name, the FQDN zone, and up to three DNS server IP addresses in the format of **8.8.8.8**.
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-fqdn-zone.png" alt-text="Screenshot showing the required information needed to add an FQDN zone.":::
-
-1. Select **OK** to finish adding the default DNS zone and DNS service.
-
-1. Select the **DNS service** tab, select **Add**. Provide the details and select **OK**.
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-dns-service.png" alt-text="Screenshot showing the information required for the DNS service.":::
-
- >[!TIP]
- >**Tier-1 Gateway** is selected by default and reflects the gateway created when deploying Azure VMware Solution.
-
- The DNS service was added successfully.
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-dns-service-success.png" alt-text="Screenshot showing the DNS service added successfully.":::
azure-vmware Configure Port Mirroring Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-port-mirroring-azure-vmware-solution.md
+
+ Title: Configure port mirroring for Azure VMware Solution
+description: Learn how to configure port mirroring to monitor network traffic that involves forwarding a copy of each packet from one network switch port to another.
++ Last updated : 07/16/2021+
+# Customer intent: As an Azure service administrator, I want to configure port mirroring to monitor network traffic that involves forwarding a copy of each packet from one network switch port to another.
+++
+# Configure port mirroring in the Azure portal
+
+After deploying Azure VMware Solution, you can configure port mirroring from the Azure portal. Port mirroring places a protocol analyzer on the port that receives the mirrored data. It analyzes traffic from a source, a virtual machine (VM), or a group of VMs, and then sends it to a defined destination.
+
+In this how-to, you'll configure port mirroring, which monitors network traffic by forwarding a copy of each packet from one network switch port to another.
+
+## Prerequisites
+
+- An Azure VMware Solution private cloud with access to the vCenter and NSX-T Manager interfaces. For more information, see the [Configure networking](tutorial-configure-networking.md) tutorial.
+
+- An NSX-T network segment with the source VMs whose traffic you want to mirror.
+
+## Create the VMs or VM groups
+
+You'll create the source and destination VMs or VM groups. The source group has a single VM or multiple VMs where the traffic is mirrored.
+
+1. In your Azure VMware Solution private cloud, under **Workload Networking**, select **Port mirroring** > **VM groups** > **Add**.
+
+ :::image type="content" source="media/networking/add-port-mirroring-vm-groups.png" alt-text="Screenshot showing how to create a VM group for port mirroring.":::
+
+1. Provide a name for the new VM group, select VMs from the list, and then select **OK**.
+
+1. Repeat these steps to create the destination VM group.
+
+ >[!NOTE]
+ >Before creating a port mirroring profile, make sure that you've created both the source and destination VM groups.
+
+## Create a port mirroring profile
+
+You'll create a port mirroring profile that defines the traffic direction for the source and destination VM groups.
+
+1. Select **Port mirroring** > **Port mirroring** > **Add** and then provide:
+
+ :::image type="content" source="media/networking/add-port-mirroring-profile.png" alt-text="Screenshot showing the information required for the port mirroring profile.":::
+
+ - **Port mirroring name** - Descriptive name for the profile.
+
+ - **Direction** - Select from Ingress, Egress, or Bi-directional.
+
+ - **Source** - Select the source VM group.
+
+ - **Destination** - Select the destination VM group.
+
+ - **Description** - Enter a description for the port mirroring.
+
+1. Select **OK** to complete the profile.
+
+ The profile and VM groups are visible in the Azure VMware Solution console.
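As a quick sanity check, the profile inputs above can be modeled in code; a minimal sketch (the class and field names are mine, not an Azure or NSX-T API):

```python
from dataclasses import dataclass

# Direction options as shown in the portal
VALID_DIRECTIONS = {"Ingress", "Egress", "Bi-directional"}

@dataclass
class PortMirroringProfile:
    name: str
    direction: str
    source_vm_group: str
    destination_vm_group: str
    description: str = ""

    def __post_init__(self):
        if self.direction not in VALID_DIRECTIONS:
            raise ValueError(f"direction must be one of {sorted(VALID_DIRECTIONS)}")
        # Illustrative check; the portal may or may not enforce this.
        if self.source_vm_group == self.destination_vm_group:
            raise ValueError("source and destination VM groups must differ")

profile = PortMirroringProfile(
    name="web-tier-mirror",
    direction="Bi-directional",
    source_vm_group="web-vms",
    destination_vm_group="analyzer-vms",
    description="Mirror web tier traffic to the analyzer",
)
print(profile.name)
```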
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-azure-vmware-solution.md
description: Learn how to use the information gathered in the planning stage to
Last updated 07/09/2021+ # Deploy and configure Azure VMware Solution
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
+
+ Title: Deploy disaster recovery with VMware Site Recovery Manager
+description: Deploy disaster recovery with VMware Site Recovery Manager (SRM) in your Azure VMware Solution private cloud.
+ Last updated : 07/15/2021++
+# Deploy disaster recovery with VMware Site Recovery Manager
+
+This article explains how to implement disaster recovery for on-premises VMware virtual machines (VMs) or Azure VMware Solution-based VMs. The solution in this article uses [VMware Site Recovery Manager (SRM)](https://docs.vmware.com/en/Site-Recovery-Manager/index.html) and vSphere Replication with Azure VMware Solution. Instances of SRM and replication servers are deployed at both the protected and the recovery sites.
+
+SRM is a disaster recovery solution designed to minimize downtime of the virtual machines in an Azure VMware Solution environment if there was a disaster. SRM automates and orchestrates failover and failback, ensuring minimal downtime in a disaster. Also, built-in non-disruptive testing ensures your recovery time objectives are met. Overall, SRM simplifies management through automation and ensures fast and highly predictable recovery times.
+
+vSphere Replication is VMware's hypervisor-based replication technology for vSphere VMs. It protects VMs from partial or complete site failures. In addition, it simplifies DR protection through storage-independent, VM-centric replication. vSphere Replication is configured on a per-VM basis, allowing more control over which VMs are replicated.
+
++
+## Supported scenarios
+
+SRM helps you plan, test, and run the recovery of VMs between a protected vCenter Server site and a recovery vCenter Server site. You can use SRM with Azure VMware Solution with the following two DR scenarios:
+
+- On-premises VMware to Azure VMware Solution private cloud disaster recovery
+- Primary Azure VMware Solution to Secondary Azure VMware Solution private cloud disaster recovery
+
+The diagram shows the deployment of the primary Azure VMware Solution to secondary Azure VMware Solution scenario.
++
+You can use SRM to implement different types of recovery, such as:
+
+- **Planned migration** commences when both the primary and secondary Azure VMware Solution sites are running and fully functional. It's an orderly migration of virtual machines from the protected site to the recovery site in which no data loss is expected.
+
+- **Disaster recovery** using SRM can be invoked when the protected Azure VMware Solution site goes offline unexpectedly. Site Recovery Manager orchestrates the recovery process with the replication mechanisms to minimize data loss and system downtime.
+
+ In Azure VMware Solution, only individual VMs can be protected on a host by using SRM in combination with vSphere Replication.
+
+- **Bidirectional Protection** uses a single set of paired SRM sites to protect VMs in both directions. Each site can simultaneously be a protected site and a recovery site, but for a different set of VMs.
+
+>[!IMPORTANT]
+>Azure VMware Solution doesn't support:
+>
+>- Array-based replication and storage policy protection groups
+>- VVOLs Protection Groups
+>- SRM IP customization using SRM command-line tools
+>- One-to-Many and Many-to-One topology
++
+## Deployment workflow
+
+The workflow diagram shows the Primary Azure VMware Solution to secondary workflow. In addition, it shows steps to take within the Azure portal and the VMware environments of Azure VMware Solution to achieve the end-to-end protection of VMs.
++
+## Prerequisites
+
+### Scenario: On-premises to Azure VMware Solution
+
+- Azure VMware Solution private cloud deployed as a secondary region.
+
+- [DNS resolution](configure-dns-azure-vmware-solution.md) to on-premises SRM and virtual cloud appliances.
+
+ >[!NOTE]
+ >For private clouds created on or after July 1, 2021, you can configure private DNS resolution. For private clouds created before July 1, 2021, that need a private DNS resolution, open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to request **Private DNS configuration**.
+
+- ExpressRoute connectivity between on-premises and Azure VMware Solution - 2 Gbps.
+
+### Scenario: Primary Azure VMware Solution to secondary
+
+- Azure VMware Solution private cloud must be deployed in the primary and secondary region.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/two-private-clouds-different-regions.png" alt-text="Screenshot showing two Azure VMware Solution private clouds in separate regions.":::
+
+- Connectivity, like ExpressRoute Global Reach, between the source and target Azure VMware Solution private cloud.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/global-reach-connectity-to-on-premises.png" alt-text="Screenshot showing the connectivity between the source and target private clouds.":::
+
+
+## Install SRM in Azure VMware Solution
+
+1. In your on-premises datacenter, install VMware SRM and vSphere.
+
+ >[!NOTE]
+    >Use the [Two-site Topology with one vCenter Server instance per PSC](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-F474543A-88C5-4030-BB86-F7CC51DADE22.html) deployment model. Also, make sure that the [required vSphere Replication Network ports](https://kb.vmware.com/s/article/2087769) are opened.
+
+1. In your Azure VMware Solution private cloud, under **Manage**, select **Add-ons** > **Disaster recovery**.
+
+ The default CloudAdmin user in the Azure VMware Solution private cloud doesn't have sufficient privileges to install VMware SRM or vSphere Replication. The installation process involves multiple steps outlined in the [Prerequisites](#prerequisites) section. Instead, you can install VMware SRM with vSphere Replication as an add-on service from your Azure VMware Solution private cloud.
+
+    :::image type="content" source="media/vmware-srm-vsphere-replication/disaster-recovery-add-ons.png" alt-text="Screenshot of Azure VMware Solution private cloud to install VMware SRM with vSphere Replication as an add-on." border="true" lightbox="media/vmware-srm-vsphere-replication/disaster-recovery-add-ons.png":::
+
+1. From the **Disaster Recovery Solution** drop-down, select **VMware Site Recovery Manager (SRM) - vSphere Replication**.
+
+    :::image type="content" source="media/vmware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png" alt-text="Screenshot showing the Disaster recovery tab under Add-ons with VMware Site Recovery Manager (SRM) - vSphere replication selected." border="true" lightbox="media/vmware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png":::
+
+1. Provide the license key, agree to the terms and conditions, and then select **Install**.
+
+ >[!NOTE]
+ >If you don't provide the license key, SRM is installed in an Evaluation mode. The license is used only to enable VMware SRM.
+
+    :::image type="content" source="media/vmware-srm-vsphere-replication/disaster-recovery-solution-srm-licence.png" alt-text="Screenshot showing the Disaster recovery tab under Add-ons with the License key field selected." border="true" lightbox="media/vmware-srm-vsphere-replication/disaster-recovery-solution-srm-licence.png":::
++
+## Install the vSphere Replication appliance
+
+After the SRM appliance installs successfully, you'll need to install the vSphere Replication appliances. Each replication server accommodates up to 200 protected VMs. Scale in or scale out as per your needs.
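Because each replication server accommodates up to 200 protected VMs, the appliance count is a simple ceiling division; a quick sketch (the function name is mine):

```python
import math

VMS_PER_REPLICATION_SERVER = 200  # capacity per appliance, per the guidance above

def replication_servers_needed(protected_vms: int) -> int:
    """Minimum number of vSphere Replication servers for a given VM count."""
    return max(1, math.ceil(protected_vms / VMS_PER_REPLICATION_SERVER))

print(replication_servers_needed(150))  # 1
print(replication_servers_needed(450))  # 3
```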
+
+1. From the **Replication using** drop-down, on the **Disaster recovery** tab, select **vSphere Replication**.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/vsphere-replication-1.png" alt-text="Screenshot showing the vSphere Replication selected for the Replication using option.":::
+
+1. Move the vSphere server slider to indicate the number of replication servers you want based on the number of VMs to be protected. Then select **Install**.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/vsphere-replication-2.png" alt-text="Screenshot showing how to increase or decrease the number of replication servers.":::
+
+1. Once installed, verify that both SRM and the vSphere Replication appliances are installed.
+
+ >[!TIP]
+ >The Uninstall button indicates that both SRM and the vSphere Replication appliances are currently installed.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/vsphere-replication-3.png" alt-text="Screenshot showing that both SRM and the replication appliance are installed.":::
+
+
+## Configure site pairing in vCenter
+
+After installing VMware SRM and vSphere Replication, you need to complete the configuration and site pairing in vCenter.
+
+1. Sign in to vCenter as cloudadmin@vsphere.local.
+
+1. Navigate to **Site Recovery**, check the status of both vSphere Replication and VMware SRM, and then select **OPEN Site Recovery** to launch the client.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/open-site-recovery.png" alt-text="Screenshot showing vSphere Client with the vSphere Replication and Site Recovery Manager installation status as OK." border="true":::
++
+1. Select **NEW SITE PAIR** in the Site Recovery (SR) client in the new tab that opens.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/new-site-pair.png" alt-text="Screenshot showing vSphere Client with the New Site Pair button selected for Site Recovery." border="true":::
+
+1. Enter the remote site details, and then select **NEXT**.
+
+ >[!NOTE]
+ >An Azure VMware Solution private cloud operates with an embedded Platform Services Controller (PSC), so only one local vCenter can be selected. If the remote vCenter is using an embedded Platform Service Controller (PSC), use the vCenter's FQDN (or its IP address) and port to specify the PSC.
+ >
+ >The remote user must have sufficient permissions to perform the pairings. An easy way to ensure this is to give that user the VRM administrator and SRM administrator roles in the remote vCenter. For a remote Azure VMware Solution private cloud, cloudadmin is configured with those roles.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/pair-the-sites-specify-details.png" alt-text="Screenshot showing the Site details for the new site pair." border="true" lightbox="media/vmware-srm-vsphere-replication/pair-the-sites-specify-details.png":::
+
+1. Select **CONNECT** to accept the certificate for the remote vCenter.
+
+ At this point, the client should discover the VRM and SRM appliances on both sides as services to pair.
+
+1. Select the appliances to pair and then select **NEXT**.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/pair-the-sites-new-site.png" alt-text="Screenshot showing the vCenter Server and services details for the new site pair." border="true" lightbox="media/vmware-srm-vsphere-replication/pair-the-sites-new-site.png":::
+
+1. Select **CONNECT** to accept the certificates for the remote VMware SRM and the remote vCenter (again).
+
+1. Select **CONNECT** to accept the certificates for the local VMware SRM and the local vCenter.
+
+1. Review the settings and then select **FINISH**.
+
+ If successful, the client displays another panel for the pairing. However, if unsuccessful, an alarm will be reported.
+
+1. At the bottom, in the right corner, select the double-up arrow to expand the panel to show **Recent Tasks** and **Alarms**.
+
+ >[!NOTE]
+ >The SR client sometimes takes a long time to refresh. If an operation seems to take too long or appears "stuck", select the refresh icon on the menu bar.
+
+1. Select **VIEW DETAILS** to open the panel for remote site pairing, which opens a dialog to sign in to the remote vCenter.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/view-details-remote-pairing.png" alt-text="Screenshot showing the new site pair details for Site Recovery Manager and vSphere Replication." border="true" lightbox="media/vmware-srm-vsphere-replication/view-details-remote-pairing.png":::
+
+1. Enter the username with sufficient permissions to do replication and site recovery and then select **LOG IN**.
+
+    This login, often as a different user, is a one-time action to establish the pairing. The SR client requires this login every time the client is launched to work with the pairing.
+
+ >[!NOTE]
+ >The user with sufficient permissions should have **VRM administrator** and **SRM administrator** roles given to them in the remote vCenter. The user should also have access to the remote vCenter inventory, like folders and datastores. For a remote Azure VMware Solution private cloud, the cloudadmin user has the appropriate permissions and access.
+
+ :::image type="content" source="media/vmware-srm-vsphere-replication/sign-into-remote-vcenter.png" alt-text="Screenshot showing the vCenter Server credentials." border="true":::
+
+ You'll see a warning message indicating that the embedded VRS in the local VRM isn't running. This is because Azure VMware Solution doesn't use the embedded VRS in an Azure VMware Solution private cloud. Instead, it uses VRS appliances.
+
+    :::image type="content" source="media/vmware-srm-vsphere-replication/pair-the-sites-summary.png" alt-text="Screenshot showing the site pair summary for Site Recovery Manager and vSphere Replication." border="true" lightbox="media/vmware-srm-vsphere-replication/pair-the-sites-summary.png":::
+
+## SRM protection, reprotection, and failback
+
+After you've created the site pairing, follow the VMware documentation mentioned below for end-to-end protection of VMs from the Azure portal.
+
+- [Using vSphere Replication with Site Recovery Manager (vmware.com)](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.admin.doc/GUID-2C77C830-892D-45FF-BA4F-80AC10085DBE.html)
+
+- [Inventory Mappings for Array-Based Replication Protection Groups and vSphere Replication Protection Groups (vmware.com)](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.admin.doc/GUID-2E2B4F84-D388-456B-AA3A-57FA8D47063D.html)
+
+- [About Placeholder Virtual Machines (vmware.com)](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.admin.doc/GUID-EFE73B20-1C68-4D2C-8C86-A6E3C6214F07.html)
+
+- [vSphere Replication Protection Groups (vmware.com)](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.admin.doc/GUID-CCF2E768-736E-4EAA-B3BE-50182635BC49.html)
+
+- [Creating, Testing, and Running Recovery Plans (vmware.com)](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.admin.doc/GUID-AF6BF11B-4FB7-4543-A873-329FDF1524A4.html)
+
+- [Configuring a Recovery Plan (vmware.com)](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.admin.doc/GUID-FAC499CE-2994-46EF-9164-6D97EAF52C68.html)
+
+- [Customizing IP Properties for Virtual Machines (vmware.com)](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.admin.doc/GUID-25B33730-14BE-4268-9D88-1129011AFB39.html)
+
+- [How Site Recovery Manager Reprotects Virtual Machines with vSphere Replication (vmware.com)](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.admin.doc/GUID-1DE0E76D-1BA7-44D8-AEA2-5B2218E219B1.html)
+
+- [Perform a Failback (vmware.com)](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.admin.doc/GUID-556E84C0-F8B7-4F9F-AAB0-0891C084EDE4.html)
+++
+## Ongoing management of your SRM solution
+
+While Microsoft aims to simplify VMware SRM and vSphere Replication installation on an Azure VMware Solution private cloud, you are responsible for managing your license and the day-to-day operation of the disaster recovery solution.
+
+## Scale limitations
+
+| Configuration | Limit |
+| | |
+| Number of protected Virtual Machines | 1000 |
+| Number of Virtual Machines per recovery plan | 1000 |
+| Number of protection groups per recovery plan | 250 |
+| RPO Values | 5 min, 30 min, 60 min, 90 min, 120 min |
+| Total number of virtual machines per protection group | 4 |
+| Total number of recovery plans | 250 |
+| Number of VMs with RPO of 5 minutes | 100 |
+| Number of VMs with RPO of 30 minutes | 300 |
+| Number of VMs with RPO of 60 minutes | 300 |
+| Number of VMs with RPO of 90 minutes | 200 |
+| Number of VMs with RPO of 120 minutes | 100 |
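As a hedged illustration, the limits in the table above can be sanity-checked programmatically before you lay out protection groups. The helper below is a hypothetical sketch (the function and data-structure names are assumptions for illustration, not part of any Azure or VMware API):

```python
# Documented Azure VMware Solution limits: RPO in minutes -> max protected VMs.
RPO_LIMITS = {5: 100, 30: 300, 60: 300, 90: 200, 120: 100}

def validate_plan(vms_by_rpo, total_vm_limit=1000):
    """vms_by_rpo maps an RPO in minutes to the number of VMs planned at that RPO.

    Returns a list of problems; an empty list means the plan fits the limits.
    """
    problems = []
    if sum(vms_by_rpo.values()) > total_vm_limit:
        problems.append(f"more than {total_vm_limit} protected VMs")
    for rpo, count in vms_by_rpo.items():
        if rpo not in RPO_LIMITS:
            problems.append(f"RPO {rpo} min is not a supported value")
        elif count > RPO_LIMITS[rpo]:
            problems.append(
                f"{count} VMs exceed the {RPO_LIMITS[rpo]}-VM limit at RPO {rpo} min")
    return problems

print(validate_plan({5: 100, 30: 250}))  # -> [] (within limits)
print(validate_plan({5: 150}))           # -> one problem reported
```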
++
+## SRM licenses
+
+You can install VMware SRM using an evaluation license or a production license. The evaluation license is valid for 60 days. After the evaluation period, you'll be required to obtain a production license of VMware SRM.
+
+You can't use pre-existing on-premises VMware SRM licenses for your Azure VMware Solution private cloud. Work with your sales teams and VMware to acquire a new term-based production license of VMware SRM.
+
+Once a production license of SRM is acquired, you'll be able to use the Azure VMware Solution portal to update SRM with the new production license.
++
+## Uninstall SRM
+
+If you no longer require SRM, you must uninstall it in a clean manner. Before you uninstall SRM, you must remove all SRM configurations from both sites in the correct order. If you do not remove all configurations before uninstalling SRM, some SRM components, such as placeholder VMs, might remain in the Azure VMware Solution infrastructure.
+
+1. In the vSphere Client or the vSphere Web Client, select **Site Recovery** > **Open Site Recovery**.
+
+2. On the **Site Recovery** home tab, select a site pair and select **View Details**.
+
+3. Select the **Recovery Plans** tab, right-click on a recovery plan and select **Delete**.
+
+ >[!NOTE]
+ >You cannot delete recovery plans that are running.
+
+4. Select the **Protection Groups** tab, select a protection group, and select the **Virtual Machines** tab.
+
+5. Highlight all virtual machines, right-click, and select **Remove Protection**.
+
+ Removing protection from a VM deletes the placeholder VM from the recovery site. Repeat this operation for all protection groups.
+
+6. In the **Protection Groups** tab, right-click a protection group and select **Delete**.
+
+ >[!NOTE]
+ >You cannot delete a protection group that is included in a recovery plan. You cannot delete vSphere Replication protection groups that contain virtual machines on which protection is still configured.
+
+7. Select **Site Pair** > **Configure** and remove all inventory mappings.
+
+ a. Select each of the **Network Mappings**, **Folder Mappings**, and **Resource Mappings** tabs.
+
+ b. In each tab, select a site, right-click a mapping, and select **Delete**.
+
+8. For both sites, select **Placeholder Datastores**, right-click the placeholder datastore, and select **Remove**.
+
+9. Select **Site Pair** > **Summary**, and select **Break Site Pair**.
+
+ >[!NOTE]
+ >Breaking the site pairing removes all information related to registering Site Recovery Manager with Site Recovery Manager, vCenter Server, and the Platform Services Controller on the remote site.
+
+10. In your private cloud, under **Manage**, select **Add-ons** > **Disaster recovery**, and then select **Uninstall the replication appliances**.
+
+11. Once the replication appliances are uninstalled, from the **Disaster recovery** tab, select **Uninstall** for Site Recovery Manager.
+
+12. Repeat these steps on the secondary Azure VMware Solution site.
++
+## Support
+
+VMware SRM is a Disaster Recovery solution from VMware.
+
+Microsoft supports only the installation and uninstallation of SRM and vSphere Replication Manager, and scaling vSphere Replication appliances up or down, within Azure VMware Solution.
+
+For all other issues, such as configuration and replication, contact VMware for support.
+
+VMware and Microsoft support teams will engage each other as needed to troubleshoot SRM issues on Azure VMware Solution.
++
+## References
+
+- [VMware Site Recovery Manager Documentation](https://docs.vmware.com/en/Site-Recovery-Manager/index.html)
+- [Compatibility Matrices for VMware Site Recovery Manager 8.3](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/rn/srm-compat-matrix-8-3.html)
+- [VMware SRM 8.3 release notes](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/rn/srm-releasenotes-8-3.html)
+- [VMware vSphere Replication Documentation](https://docs.vmware.com/en/vSphere-Replication/index.html)
+- [Compatibility Matrices for vSphere Replication 8.3](https://docs.vmware.com/en/vSphere-Replication/8.3/rn/vsphere-replication-compat-matrix-8-3.html)
+- [Operational Limits of Site Recovery Manager 8.3](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-3AD7D565-8A27-450C-8493-7B53F995BB14.html)
+- [Operational Limits of vSphere Replication 8.3](https://docs.vmware.com/en/vSphere-Replication/8.3/com.vmware.vsphere.replication-admin.doc/GUID-E114BAB8-F423-45D4-B029-91A5D551AC47.html)
+- [Calculate bandwidth for vSphere Replication](https://docs.vmware.com/en/vSphere-Replication/8.3/com.vmware.vsphere.replication-admin.doc/GUID-4A34D0C9-8CC1-46C4-96FF-3BF7583D3C4F.html)
+- [SRM installation and configuration](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-B3A49FFF-E3B9-45E3-AD35-093D896596A0.html)
+- [vSphere Replication administration](https://docs.vmware.com/en/vSphere-Replication/8.3/com.vmware.vsphere.replication-admin.doc/GUID-35C0A355-C57B-430B-876E-9D2E6BE4DDBA.html)
+- [Pre-requisites and Best Practices for SRM installation](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-BB0C03E4-72BE-4C74-96C3-97AC6911B6B8.html)
+- [Network ports for SRM](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-499D3C83-B8FD-4D4C-AE3D-19F518A13C98.html)
+- [Network ports for vSphere Replication](https://kb.vmware.com/s/article/2087769)
+
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
In this step, you'll use the source NSX-T configuration to configure the target
>[!NOTE] >You'll have multiple features configured on the source NSX-T, so you must copy or read from the source NSX-T and recreate it in the target private cloud. Use L2 Extension to keep the same IP address and MAC address of the VM while migrating from the source to the target AVS private cloud, to avoid downtime due to an IP change and related configuration.
-1. [Configure NSX network components](configure-nsx-network-components-azure-portal.md) required in the target environment under default Tier-1 gateway:
+1. [Configure NSX network components](configure-nsx-network-components-azure-portal.md) required in the target environment under default Tier-1 gateway.
1. [Create the security group configuration](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-41CC06DF-1CD4-4233-B43E-492A9A3AD5F6.html).
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-configure-networking.md
description: Learn to create and configure the networking needed to deploy your
Last updated 04/23/2021+
+#Customer intent: As a < type of user >, I want < what? > so that < why? >.
+ # Tutorial: Configure networking for your VMware private cloud in Azure
In this tutorial, you learned how to:
Continue to the next tutorial to learn how to create the NSX-T network segments that are used for VMs in vCenter. > [!div class="nextstepaction"]
-> [Create an NSX-T network segment](tutorial-nsx-t-network-segment.md)
+> [Create an NSX-T network segment](./tutorial-nsx-t-network-segment.md)
azure-vmware Tutorial Nsx T Network Segment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-nsx-t-network-segment.md
Title: Tutorial - Add an NSX-T network segment in Azure VMware Solution
-description: Learn how to create an NSX-T network segment to use for virtual machines (VMs) in vCenter.
+description: Learn how to add an NSX-T network segment to use for virtual machines (VMs) in vCenter.
Previously updated : 03/13/2021+ Last updated : 07/16/2021
-# Tutorial: Add a network segment in Azure VMware Solution
+# Tutorial: Add an NSX-T network segment in Azure VMware Solution
+
+After deploying Azure VMware Solution, you can configure an NSX-T network segment either from NSX-T Manager or the Azure portal. Once configured, the segments are visible in Azure VMware Solution, NSX-T Manager, and vCenter. NSX-T comes pre-provisioned by default with an NSX-T Tier-0 gateway in **Active/Active** mode and a default NSX-T Tier-1 gateway in **Active/Standby** mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
+
+>[!TIP]
+>The Azure portal presents a simplified view of the NSX-T operations that a VMware administrator needs regularly, targeted at users who aren't familiar with NSX-T Manager.
-The virtual machines (VMs) created in vCenter are placed onto the network segments created in NSX-T and are visible in vCenter.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Navigate in NSX-T Manager to add network segments
-> * Add a new network segment
-> * Observe the new network segment in vCenter
+> * Add network segments using either NSX-T Manager or the Azure portal
+> * Verify the new network segment
## Prerequisites An Azure VMware Solution private cloud with access to the vCenter and NSX-T Manager interfaces. For more information, see the [Configure networking](tutorial-configure-networking.md) tutorial.
-## Add a network segment
+## Use NSX-T Manager to add a network segment
+
+The virtual machines (VMs) created in vCenter are placed onto the network segments created in NSX-T and are visible in vCenter.
[!INCLUDE [add-network-segment-steps](includes/add-network-segment-steps.md)]
+## Use Azure portal to add an NSX-T segment
+++
+## Verify the new network segment
+
+Verify the presence of the new network segment. In this example, **ls01** is the new network segment.
+
+1. In NSX-T Manager, select **Networking** > **Segments**.
+
+ :::image type="content" source="media/nsxt/nsxt-new-segment-overview-2.png" alt-text="Screenshot showing the confirmation and status of the new network segment is present in NSX-T.":::
+
+1. In vCenter, select **Networking** > **SDDC-Datacenter**.
+
+ :::image type="content" source="media/nsxt/vcenter-with-ls01-2.png" alt-text="Screenshot showing the confirmation that the new network segment is present in vCenter.":::
+ ## Next steps In this tutorial, you created an NSX-T network segment to use for VMs in vCenter. You can now: -- [Create and manage DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md)
+- [Configure and manage DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md)
- [Create a content Library to deploy VMs in Azure VMware Solution](deploy-vm-content-library.md) - [Peer on-premises environments to a private cloud](tutorial-expressroute-global-reach-private-cloud.md)
azure-web-pubsub Howto Secure Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/howto-secure-shared-private-endpoints.md
+
+ Title: Secure Azure Web PubSub outbound traffic through Shared Private Endpoints
+
+description: How to secure Azure Web PubSub outbound traffic through Shared Private Endpoints to avoid traffic going to the public network
++++ Last updated : 07/13/2021+++
+# Secure Azure Web PubSub outbound traffic through Shared Private Endpoints
+
+If you're using the [event handler](https://azure.github.io/azure-webpubsub/concepts/service-internals#event_handler) in Azure Web PubSub Service, you might have outbound traffic to an upstream endpoint. Upstream endpoints, such as
+Azure Web Apps and Azure Functions, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. You can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) to reach these endpoints.
+
+ :::image type="content" alt-text="Shared private endpoint overview." source="media\howto-secure-shared-private-endpoints\shared-private-endpoint-overview.png" border="false" :::
+
+This outbound method is subject to the following requirements:
+
++ The upstream must be Azure Web App or Azure Function.
++ The Azure Web PubSub Service must be on the Standard tier.
++ The Azure Web App or Azure Function must be on certain SKUs. See [Use Private Endpoints for Azure Web App](../app-service/networking/private-endpoint.md).
+
+## Shared Private Link Resources Management APIs
+
+Private endpoints of secured resources that are created through Azure Web PubSub Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Function, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside the Azure Web PubSub Service execution environment and aren't directly visible to you.
+
+Currently, you can use the Management REST API to create or delete *shared private link resources*. In the remainder of this article, we will use [Azure CLI](/cli/azure/) to demonstrate the REST API calls.
+
+> [!NOTE]
+> The examples in this article are based on the following assumptions:
+> * The resource ID of this Azure Web PubSub Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub_.
+> * The resource ID of the upstream Azure Function is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func_.
+
+The rest of the examples show how the _contoso-webpubsub_ service can be configured so that its upstream calls to the function go through a private endpoint rather than the public network.
+
+### Step 1: Create a shared private link resource to the function
+
+You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
+
+```dotnetcli
+az rest --method put --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview --body @create-pe.json
+```
+
+The contents of the *create-pe.json* file, which represent the request body to the API, are as follows:
+
+```json
+{
+ "name": "func-pe",
+ "properties": {
+ "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func",
+ "groupId": "sites",
+ "requestMessage": "please approve"
+ }
+}
+```
+
+The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following:
+
+```plaintext
+"Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview"
+```
+
+You can poll this URI periodically to obtain the status of the operation.
+
+If you are using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperation` header value:
+
+```dotnetcli
+az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview
+```
+
+Wait until the status changes to "Succeeded" before proceeding to the next steps.
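If you'd rather script the wait, the success check can be isolated into a small helper. This is a minimal sketch, under the assumption that the operation-status endpoint returns a JSON body with a top-level `status` field (the usual shape for Azure asynchronous operations):

```python
import json

def is_done(response_body: str) -> bool:
    """Return True when the async operation has succeeded, False while it runs."""
    status = json.loads(response_body).get("status")
    if status == "Failed":
        # Surface a failed operation instead of polling forever.
        raise RuntimeError("shared private link creation failed")
    return status == "Succeeded"

print(is_done('{"status": "InProgress"}'))  # False -> keep polling
print(is_done('{"status": "Succeeded"}'))   # True  -> proceed to the next step
```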
+
+### Step 2a: Approve the private endpoint connection for the function
+
+> [!NOTE]
+> In this section, you use the Azure portal to walk through the approval flow for a private endpoint to Azure Function. Alternately, you could use the [REST API](/rest/api/appservice/web-apps/approve-or-reject-private-endpoint-connection) that's available via the App Service provider.
+
+> [!IMPORTANT]
+> After you approve the private endpoint connection, the Function is no longer accessible from the public network. You may need to create other private endpoints in your own virtual network to access the Function endpoint.
+
+1. In the Azure portal, select the **Networking** tab of your Function App and navigate to **Private endpoint connections**. Click **Configure your private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+
+ :::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints\portal-function-approve-private-endpoint.png" lightbox="media\howto-secure-shared-private-endpoints\portal-function-approve-private-endpoint.png" :::
+
+1. Select the private endpoint that Azure Web PubSub Service created. In the **Private endpoint** column, identify the private endpoint connection by the name that's specified in the previous API call, and then select **Approve**.
+
+ Make sure that the private endpoint connection appears as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
+
+ :::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints\portal-function-approved-private-endpoint.png" lightbox="media\howto-secure-shared-private-endpoints\portal-function-approved-private-endpoint.png" :::
+
+### Step 2b: Query the status of the shared private link resource
+
+It takes a few minutes for the approval to be propagated to Azure Web PubSub Service. To confirm that the shared private link resource has been updated after approval, you can also obtain the "Connection state" by using the GET API.
+
+```dotnetcli
+az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview
+```
+
+This returns a JSON response, in which the connection state shows up as "status" under the "properties" section.
+
+```json
+{
+ "name": "func-pe",
+ "properties": {
+ "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func",
+ "groupId": "sites",
+ "requestMessage": "please approve",
+ "status": "Approved",
+ "provisioningState": "Succeeded"
+ }
+}
+
+```
+
+If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure Web PubSub Service can communicate over the private endpoint.
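Scripted deployments can gate on both fields at once. A minimal sketch, assuming the GET response body has the shape shown above:

```python
import json

def link_is_functional(body: str) -> bool:
    """Check both documented conditions: provisioning finished and connection approved."""
    props = json.loads(body)["properties"]
    return (props.get("provisioningState") == "Succeeded"
            and props.get("status") == "Approved")

approved = '{"name": "func-pe", "properties": {"status": "Approved", "provisioningState": "Succeeded"}}'
pending = '{"name": "func-pe", "properties": {"status": "Pending", "provisioningState": "Succeeded"}}'
print(link_is_functional(approved))  # True
print(link_is_functional(pending))   # False
```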
+
+### Step 3: Verify upstream calls are from a private IP
+
+Once the private endpoint is set up, you can verify that incoming calls are from a private IP by checking the `X-Forwarded-For` header on the upstream side.
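For example, in an upstream handler you could check that the first `X-Forwarded-For` hop parses as a private (RFC 1918) address. This sketch assumes IPv4 entries, optionally with a trailing port:

```python
import ipaddress

def came_via_private_endpoint(x_forwarded_for: str) -> bool:
    # X-Forwarded-For can list several hops; the first entry is the original caller.
    first_hop = x_forwarded_for.split(",")[0].strip()
    host = first_hop.rsplit(":", 1)[0]  # drop a trailing ":port" if present (IPv4 assumed)
    return ipaddress.ip_address(host).is_private

print(came_via_private_endpoint("10.0.8.12:40312"))  # True: arrived over the private endpoint
print(came_via_private_endpoint("40.112.72.205"))    # False: public source address
```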
++
+## Next steps
+
+Learn more about private endpoints:
+++ [What are private endpoints?](../private-link/private-endpoint-overview.md)
backup About Azure Vm Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/about-azure-vm-restore.md
This article describes how the [Azure Backup service](./backup-overview.md) rest
| [Restore to create a new virtual machine](./backup-azure-arm-restore-vms.md) | Restores the entire VM to OLR (if the source VM still exists) or ALR | <li> If the source VM is lost or corrupt, then you can restore entire VM <li> You can create a copy of the VM <li> You can perform a restore drill for audit or compliance <li> This option won't work for Azure VMs created from Marketplace images (that is, if they aren't available because the license expired). | | [Restore disks of the VM](./backup-azure-arm-restore-vms.md#restore-disks) | Restore disks attached to the VM | All disks: This option creates the template and restores the disk. You can edit this template with special configurations (for example, availability sets) to meet your requirements and then use both the template and restore the disk to recreate the VM. | | [Restore specific files within the VM](./backup-azure-restore-files-from-vm.md) | Choose restore point, browse, select files, and restore them to the same (or compatible) OS as the backed-up VM. | If you know which specific files to restore, then use this option instead of restoring the entire VM. |
-| [Restore an encrypted VM](./backup-azure-vms-encryption.md) | From the portal, restore the disks and then use PowerShell to create the VM | <li> [Encrypted VM with Azure Active Directory](../virtual-machines/windows/disk-encryption-windows-aad.md) <li> [Encrypted VM without Azure AD](../virtual-machines/windows/disk-encryption-windows.md) <li> [Encrypted VM *with Azure AD* migrated to *without Azure AD*](/azure/virtual-machines/windows/disk-encryption-faq#can-i-migrate-vms-that-were-encrypted-with-an-azure-ad-app-to-encryption-without-an-azure-ad-app) |
+| [Restore an encrypted VM](./backup-azure-vms-encryption.md) | From the portal, restore the disks and then use PowerShell to create the VM | <li> [Encrypted VM with Azure Active Directory](../virtual-machines/windows/disk-encryption-windows-aad.md) <li> [Encrypted VM without Azure AD](../virtual-machines/windows/disk-encryption-windows.md) <li> [Encrypted VM *with Azure AD* migrated to *without Azure AD*](../virtual-machines/windows/disk-encryption-faq.yml#can-i-migrate-vms-that-were-encrypted-with-an-azure-ad-app-to-encryption-without-an-azure-ad-app-) |
| [Cross Region Restore](./backup-azure-arm-restore-vms.md#cross-region-restore) | Create a new VM or restore disks to a secondary region (Azure paired region) | <li> **Full outage**: With the cross region restore feature, there's no wait time to recover data in the secondary region. You can initiate restores in the secondary region even before Azure declares an outage. <li> **Partial outage**: Downtime can occur in specific storage clusters where Azure Backup stores your backed-up data or even in-network, connecting Azure Backup and storage clusters associated with your backed-up data. With Cross Region Restore, you can perform a restore in the secondary region using a replica of backed up data in the secondary region. <li> **No outage**: You can conduct business continuity and disaster recovery (BCDR) drills for audit or compliance purposes with the secondary region data. This allows you to perform a restore of backed up data in the secondary region even if there isn't a full or partial outage in the primary region for business continuity and disaster recovery drills. | ## Next steps - [Frequently asked questions about VM restore](/azure/backup/backup-azure-vm-backup-faq#restore) - [Supported restore methods](./backup-support-matrix-iaas.md#supported-restore-methods)-- [Troubleshoot restore issues](./backup-azure-vms-troubleshoot.md#restore)
+- [Troubleshoot restore issues](./backup-azure-vms-troubleshoot.md#restore)
backup About Restore Microsoft Azure Recovery Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/about-restore-microsoft-azure-recovery-services.md
Using the MARS agent you can:
## Next steps - For more frequently asked questions, see [MARS agent FAQs](backup-azure-file-folder-backup-faq.yml).-- For information about supported scenarios and limitations, see [Support Matrix for the backup with the MARS agent](backup-support-matrix-mars-agent.md).
+- For information about supported scenarios and limitations, see [Support Matrix for the backup with the MARS agent](backup-support-matrix-mars-agent.md).
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-mars-troubleshoot.md
We recommend that you check the following before you start troubleshooting Micro
- [Ensure the MARS agent is up to date](https://go.microsoft.com/fwlink/?linkid=229525&clcid=0x409). - [Ensure you have network connectivity between the MARS agent and Azure](#the-microsoft-azure-recovery-service-agent-was-unable-to-connect-to-microsoft-azure-backup). - Ensure MARS is running (in Service console). If you need to, restart and retry the operation.-- [Ensure 5% to 10% free volume space is available in the scratch folder location](./backup-azure-file-folder-backup-faq.yml#what-s-the-minimum-size-requirement-for-the-cache-folder-).
+- [Ensure 5% to 10% free volume space is available in the scratch folder location](/azure/backup/backup-azure-file-folder-backup-faq#what-s-the-minimum-size-requirement-for-the-cache-folder-).
- [Check if another process or antivirus software is interfering with Azure Backup](./backup-azure-troubleshoot-slow-backup-performance-issue.md#cause-another-process-or-antivirus-software-interfering-with-azure-backup). - If the backup job completed with warnings, see [Backup Jobs Completed with Warning](#backup-jobs-completed-with-warning) - If scheduled backup fails but manual backup works, see [Backups don't run according to schedule](#backups-dont-run-according-to-schedule).
We recommend that you check the following before you start troubleshooting Micro
| Error code | Reasons | Recommendations | | - | | | | 0x80070570 | The file or directory is corrupted and unreadable. | Run **chkdsk** on the source volume. |
- | 0x80070002, 0x80070003 | The system cannot find the file specified. | [Ensure the scratch folder isn't full](/azure/backup/backup-azure-file-folder-backup-faq#manage-the-backup-cache-folder) <br><br> Check if the volume where scratch space is configured exists (not deleted) <br><br> [Ensure the MARS agent is excluded from the antivirus installed on the machine](/azure/backup/backup-azure-troubleshoot-slow-backup-performance-issue#cause-another-process-or-antivirus-software-interfering-with-azure-backup) |
+ | 0x80070002, 0x80070003 | The system cannot find the file specified. | [Ensure the scratch folder isn't full](/azure/backup/backup-azure-file-folder-backup-faq#manage-the-backup-cache-folder) <br><br> Check if the volume where scratch space is configured exists (not deleted) <br><br> [Ensure the MARS agent is excluded from the antivirus installed on the machine](./backup-azure-troubleshoot-slow-backup-performance-issue.md#cause-another-process-or-antivirus-software-interfering-with-azure-backup) |
| 0x80070005 | Access Is Denied | [Check if antivirus or other third-party software is blocking access](./backup-azure-troubleshoot-slow-backup-performance-issue.md#cause-another-process-or-antivirus-software-interfering-with-azure-backup) | | 0x8007018b | Access to the cloud file is denied. | OneDrive files, Git Files, or any other files that can be in offline state on the machine |
Unable to find changes in a file. This could be due to various reasons. Please r
## Next steps - Get more details on [how to back up Windows Server with the Azure Backup agent](tutorial-backup-windows-server-to-azure.md).-- If you need to restore a backup, see [restore files to a Windows machine](backup-azure-restore-windows-server.md).
+- If you need to restore a backup, see [restore files to a Windows machine](backup-azure-restore-windows-server.md).
backup Backup Azure Move Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-move-recovery-services-vault.md
If you need to keep the current protected data in the old vault and continue the
You can move many different types of resources between resource groups and subscriptions.
-For more information, see [Move resources to new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).
+For more information, see [Move resources to new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).
backup Backup Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-blobs-storage-account-cli.md
Last updated 06/18/2021
# Back up Azure Blobs in a storage account using Azure CLI
-This article describes how to back up [Azure Blobs](/azure/backup/blob-backup-overview) using Azure CLI.
+This article describes how to back up [Azure Blobs](./blob-backup-overview.md) using Azure CLI.
> [!IMPORTANT]
> Support for Azure Blobs backup and restore via CLI is in preview and available as an extension in Az 2.15.0 version and later. The extension is automatically installed when you run the **az dataprotection** commands. [Learn more](/cli/azure/azure-cli-extensions-overview) about extensions.
az dataprotection backup-instance create -g testBkpVaultRG --vault-name TestBkpV
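The truncated `az dataprotection backup-instance create` call above is the last step of the blob backup flow described in the article. A minimal sketch of the surrounding steps, assuming hypothetical resource group, vault, policy, and storage account names:

```shell
# Sketch of the Azure Blobs backup flow (preview), with hypothetical names.
# The dataprotection extension installs automatically on first use.

# 1. Fetch the default blob policy template and create a policy from it
az dataprotection backup-policy get-default-policy-template \
    --datasource-type AzureBlob > policy.json
az dataprotection backup-policy create -g testBkpVaultRG --vault-name TestBkpVault \
    -n BlobPolicy --policy policy.json

# 2. Prepare the backup-instance request body for a storage account
az dataprotection backup-instance initialize \
    --datasource-type AzureBlob \
    --datasource-id "/subscriptions/<sub-id>/resourceGroups/blobrg/providers/Microsoft.Storage/storageAccounts/<account>" \
    -l eastus \
    --policy-id "/subscriptions/<sub-id>/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/BlobPolicy" > backup_instance.json

# 3. Create the backup instance (the command shown in the article)
az dataprotection backup-instance create -g testBkpVaultRG --vault-name TestBkpVault \
    --backup-instance backup_instance.json
```

These commands require an authenticated Azure CLI session and an existing Backup vault; the placeholder IDs must be replaced with real resource IDs.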
## Next steps
-[Restore Azure Blobs using Azure CLI](restore-blobs-storage-account-cli.md)
+[Restore Azure Blobs using Azure CLI](restore-blobs-storage-account-cli.md)
backup Backup Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-encryption.md
Azure Backup includes encryption on two levels:
## Next steps

- [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md)
-- [Azure Backup FAQ](/azure/backup/backup-azure-backup-faq#encryption) for any questions you may have about encryption
+- [Azure Backup FAQ](/azure/backup/backup-azure-backup-faq#encryption) for any questions you may have about encryption
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
The following table describes the features of Recovery Services vaults:
**Move vaults** | You can [move vaults](./backup-azure-move-recovery-services-vault.md) across subscriptions or between resource groups in the same subscription. However, moving vaults across regions isn't supported.
**Move data between vaults** | Moving backed-up data between vaults isn't supported.
**Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified.
-**Zone-redundant storage (ZRS)** | Available in the UK South, South East Asia, Australia East, North Europe, Central US and Japan East.
+**Zone-redundant storage (ZRS)** | Supported in preview in UK South, South East Asia, Australia East, North Europe, Central US and Japan East.
**Private Endpoints** | See [this section](./private-endpoints.md#before-you-start) for requirements to create private endpoints for a recovery service vault.

## On-premises backup support
backup Restore Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-managed-disks-cli.md
az dataprotection job list-from-resourcegraph --datasource-type AzureDisk --oper
## Next steps
-[Azure Disk Backup FAQ](/azure/backup/disk-backup-faq)
+[Azure Disk Backup FAQ](./disk-backup-faq.yml)
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
For more information, see [Encryption for Azure Backup using customer-managed ke
## Next steps

-- [Azure Backup guidance and best practices](guidance-best-practices.md)
+- [Azure Backup guidance and best practices](guidance-best-practices.md)
bastion Bastion Connect Vm Ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-connect-vm-ssh.md
In order to connect to the Linux VM via SSH, you must have the following ports o
1. On the **Connect using Azure Bastion** page, enter the **Username** and select **SSH Private Key from Azure Key Vault**.

   :::image type="content" source="./media/bastion-connect-vm-ssh/ssh-key-vault.png" alt-text="SSH Private Key from Azure Key Vault":::
-1. Select the **Azure Key Vault** dropdown and select the resource in which you stored your SSH private key. If you didn't set up an Azure Key Vault resource, see [Create a key vault](https://docs.microsoft.com/azure/key-vault/secrets/quick-create-powershell) and store your SSH private key as the value of a new Key Vault secret.
+1. Select the **Azure Key Vault** dropdown and select the resource in which you stored your SSH private key. If you didn't set up an Azure Key Vault resource, see [Create a key vault](../key-vault/secrets/quick-create-powershell.md) and store your SSH private key as the value of a new Key Vault secret.
>[!NOTE]
- >Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](https://docs.microsoft.com/azure/virtual-machines/extensions/vmaccess#update-ssh-key) to update access to your target VM with a new SSH key pair.
+ >Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
> :::image type="content" source="./media/bastion-connect-vm-ssh/key-vault.png" alt-text="Azure Key Vault":::
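The note above directs you to store the private key through the CLI or PowerShell so its multi-line PEM formatting survives. A minimal Azure CLI sketch, assuming a hypothetical vault name, secret name, and key path:

```shell
# Store an SSH private key as a Key Vault secret via the CLI rather than the portal,
# which preserves the multi-line PEM formatting. Names and path are hypothetical.
az keyvault secret set \
    --vault-name MyKeyVault \
    --name my-ssh-private-key \
    --file ~/.ssh/id_rsa
```

The `--file` option reads the key file verbatim, which is what keeps the formatting intact; pasting the key into the portal's secret value field is what breaks it.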
In order to connect to the Linux VM via SSH, you must have the following ports o
## Next steps
-For more information about Azure Bastion, see the [Bastion FAQ](bastion-faq.md).
+For more information about Azure Bastion, see the [Bastion FAQ](bastion-faq.md).
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-faq.md
Make sure the user has **read** access to both the VM, and the peered VNet. Addi
|Microsoft.Network/networkInterfaces/ipconfigurations/read|Gets a network interface IP configuration definition.|Action|
|Microsoft.Network/virtualNetworks/read|Get the virtual network definition|Action|
|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet|Action|
-|Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|
+|Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|
cdn Cdn Restrict Access By Country Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-restrict-access-by-country-region.md
Title: Restrict Azure CDN content by country/region
description: Learn how to restrict access by country/region to your Azure CDN content by using the geo-filtering feature.
documentationcenter: ''
Last updated 07/07/2021

# Restrict Azure CDN content by country/region
In the country/region filtering rules table, select the delete icon next to a ru
* Only one rule can be applied to the same relative path. That is, you can't create multiple country/region filters that point to the same relative path. However, because country/region filters are recursive, a folder can have multiple country/region filters. In other words, a subfolder of a previously configured folder can be assigned a different country/region filter.
-* The geo-filtering feature uses country/region codes to define the countries/regions from which a request is allowed or blocked for a secured directory. Although Akamai and Verizon profiles support most of the same country/region codes, there are a few differences. For more information, see [Azure CDN country/region codes](/previous-versions/azure/mt761717(v=azure.100)).
+* The geo-filtering feature uses country/region codes to define the countries/regions from which a request is allowed or blocked for a secured directory. **Azure CDN from Verizon** and **Azure CDN from Akamai** profiles use ISO 3166-1 alpha-2 country codes to define the countries from which a request will be allowed or blocked for a secured directory.
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
Deployments that utilized the old diagnostics plugins need the settings removed
```

## Access Control
-The subsciption containing networking resources needs to have [network contributor](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#network-contributor) access or above for Cloud Services (extended support). For more details on please refer to [RBAC built in roles](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles)
+The subscription containing networking resources needs to have [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) access or above for Cloud Services (extended support). For more details, refer to [RBAC built-in roles](../role-based-access-control/built-in-roles.md).
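Granting that access is a role assignment at the networking subscription's scope. A minimal Azure CLI sketch, assuming a hypothetical principal object ID and subscription ID:

```shell
# Grant Network Contributor on the subscription that holds the networking
# resources to the identity deploying Cloud Services (extended support).
# The object ID and subscription ID below are placeholders.
az role assignment create \
    --assignee "00000000-0000-0000-0000-000000000000" \
    --role "Network Contributor" \
    --scope "/subscriptions/<networking-subscription-id>"
```

The scope can also be narrowed to the resource group containing the virtual network if subscription-wide access isn't wanted.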
## Key Vault creation
Key Vault is used to store certificates that are associated to Cloud Services (e
- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-exte